How Art and Technology are Transforming Creativity

Adrian Reynolds investigates how art and technology are challenging our perceptions in the world of contemporary art. Continue reading to find out more.

Relationship Between Art and Technology

Reflecting on an article I had written about naming art, I started to think about how we interpret and recognise the things around us.
 
It all came about after I had started to notice shapes and patterns in some of my acrylic fluid paintings, so I decided to investigate the subject a little further.
 
In ancient times, the things people saw were often interpreted as omens or messages from the gods.
 

These thoughts have also led me to consider the link between art and technology, particularly in the areas of Artificial Intelligence, Image Recognition, and Deep Learning.

DISCLAIMER
It should be noted here that I am not a computer expert, scientist, or mathematician. I have done my best to become acquainted with the correct terminology and concepts, and I hope that my research and this post are generally free of major flaws.

What is Apophenia?

In psychology, apophenia is the perception of connections and meaningfulness in unrelated things. Apophenia can be a normal phenomenon or an abnormal one, such as in paranoid schizophrenia, when an individual sees ominous patterns where there are none.

This phenomenon arises when we begin interpreting randomness as a pattern. We may fail to notice genuine patterns, perceive patterns where there are none, or assign a meaning the artist never intended.

Apophenia has come to imply a human propensity to seek patterns in random information, such as in gambling. Gamblers may imagine that they see patterns in the numbers that appear in lotteries, card games, or roulette wheels. With any addiction comes justification of the habit, and this rationalisation of gambling is closely related to the “gambler’s fallacy”.

Randomania, the opposite of apophenia, is when a genuine pattern, or ‘patternicity’, does exist but you fail to notice it, mistaking a real revelation for a delusion.

Agenticity lies between these two extremes. The term was coined by Michael Shermer, founder of the Skeptics Society, who describes it as “the tendency to fill patterns with meaning, intention and agency.” In other words, the pattern is real but meaningless, yet we give it a meaning. Oddly enough, we sometimes see all three when we look at abstract art.

Don’t we all observe patterns where none exist, or attribute unintended meaning to an artist’s work? Isn’t it odd that these behaviours are so often linked to mental illness, when more often than not they are associated with creativity, originality, and inspiration?

"As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know."

What Exactly is Pareidolia?

Pareidolia is a subcategory of apophenia: the tendency to perceive a vague or random stimulus as a recognisable object, pattern, or meaning, such as seeing shapes in clouds, seeing faces in inanimate objects or abstract patterns, or hearing hidden messages in music. For example, you may have heard of the famous ‘subliminal message trial’ of the English heavy metal band Judas Priest.

This led me to think about how we, as humans, interpret images, and how we use technology in the areas of Image Recognition, Artificial Intelligence and Deep Learning.

For example, for years many people believed the ‘faces’ on the plains of Acidalia Planitia and the Arabia Terra highlands on Mars were real. It was not until later, improved imagery arrived that these faces were shown to be nothing more than an optical illusion.

Indeed, some imaginative people have embraced the phenomenon of pareidolia by turning the everyday inanimate objects around them into characters with distinct features and even personalities. An interesting article highlighting research from the University of Sydney claims that our brains respond emotionally to inanimate objects that appear to have faces.

I think young children have this ability more so than adults. They might see the faces in inanimate objects and come up with little stories about them to bring them to life.

Image Recognition

Image recognition is the bane of data privacy campaigners and their continued efforts to avoid Big Brother. From biometric identity verification on smartphones, computers and passports, to Automatic Number Plate Recognition (ANPR) and CCTV security cameras, it is well established and here to stay.

Can we camouflage ourselves in plain sight, whether by wearing realistic face masks or trying other ‘urban myths’ to bypass the law? At a more profound philosophical level, who are we anyway? It is interesting to explore facial recognition and how we interact with technology that is used to identify us, not to mention our rights in relation to DNA, biometrics and the protection of our personal data.
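
At a much humbler scale than state surveillance, the basic building block of many of these systems, detecting that an image contains a face at all, is freely available to experiment with. Below is a minimal sketch using OpenCV’s bundled Haar cascade detector; the image filename is a placeholder, and this classical detector is far simpler than the deep learning models inside modern biometric products.

```python
import cv2

# A minimal face-detection sketch using OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder; any image containing faces will do.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```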

Storage – Cloud Full?

When we think about stone carvings on ancient walls, the information they hold would probably equate to only a few kilobytes of data. Now, by contrast, we have an ever-increasing need for digital storage, as ever-increasing ‘megapixel’ quality for images and video demands more and more capacity.
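
To put some rough numbers on that growth, here is a back-of-the-envelope calculation. The figure of 3 bytes per pixel for an uncompressed image is a simplifying assumption (real photos are usually compressed, so treat the results as upper bounds):

```python
# Back-of-the-envelope image storage arithmetic (uncompressed, 3 bytes per pixel).
def raw_image_size_mb(megapixels, bytes_per_pixel=3):
    return megapixels * 1_000_000 * bytes_per_pixel / 1_000_000  # megabytes

for mp in (2, 12, 48):
    print(f"{mp} MP photo is roughly {raw_image_size_mb(mp):.0f} MB uncompressed")

# A carved inscription, by comparison, is on the order of a few kilobytes of text:
print(f"A 2,000-character inscription is roughly {2000 / 1000:.0f} KB")
```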

Over the last few decades we have seen storage formats advance from unreliable, gradually degrading tape and floppy disks to laser-read discs and silicon-based storage, physically shrinking while growing in capacity to suit our ever smaller personal devices.

Although not yet commercially available, scientists at the University of Southampton have developed a glass disc known as 5D optical data storage. Sometimes referred to as the ‘Superman memory crystal’, it is a nanostructured glass for permanently recording digital data using a femtosecond laser writing process. The crystal is capable of storing up to 360 terabytes of data for up to 13.8 billion years.

To put this in some context, I remember the excitement when my father got a 3.5K RAM expansion cartridge for our first personal home computer, the Commodore VIC-20. The excitement was short-lived, as was typing out lines and lines of code from a magazine, only to see a game that bore no resemblance to the beautifully illustrated world it had promised 8-year-old me! Don’t even mention knocking anything while the tape was loading…

Artificial Intelligence

We are still some way off reaching human-level machine intelligence, or HLMI. ‘Ironically, abstract and formal tasks that are among the most difficult mental undertakings for a human being are among the easiest for a computer’ (Goodfellow, Bengio and Courville, 2016).

In addition to computer scientists, there are thousands of low-paid ‘ghost’ workers whose job it is to classify and label data. This data is the lifeblood of AI systems. Paradoxically, when we train machines to become more human, are we actually making people work more like machines?

Computers and AI systems follow and rely on formal mathematical rules, steeped in probability theory and statistics. The 21st century has seen huge advances in speech recognition (for example Siri and Google Assistant) and in computer vision.

The creation of AI-generated artwork is being explored more and more. In 2014, Generative Adversarial Networks (GANs) were invented by the computer scientist Ian Goodfellow. GANs are generative models in which two neural networks, one producing images and one judging them, are trained against each other by an algorithm. An algorithm is defined as ‘a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation’. (It’s OK, I’m an artist and I’m not sure either.)
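
For the curious, here is a heavily simplified sketch of that two-network tug-of-war. It uses TensorFlow/Keras as an assumption of mine rather than anything Goodfellow published, and the layer sizes and image dimensions are purely illustrative:

```python
import tensorflow as tf

# Generator: turns 100 random numbers ("noise") into a flattened 28x28 image.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),
])

# Discriminator: guesses whether a flattened image is real or generated.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(28 * 28,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    # real_images: a batch of flattened 28x28 images scaled to [-1, 1].
    noise = tf.random.normal([tf.shape(real_images)[0], 100])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fakes, training=True)
        # Discriminator: label real images 1 and generated images 0.
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: fool the discriminator into labelling fakes as real.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```

Trained on thousands of real example images, the generator gradually learns to produce pictures the discriminator can no longer tell apart from the genuine article; this adversarial game is the engine behind the GAN portraits discussed later in this post.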

Another area where artificial intelligence is gaining traction is Business Process Automation (BPA). Specifically, robotic process automation is an emerging field within BPA that uses AI.

Not to be confused with actual robots, robotic process automation is software that learns repetitive human inputs, which can add extra security, especially for things like sensitive data and financial services.

The narrative is that this aims to achieve more work and greater productivity with the same number of people. It is worth noting that Oxford University researchers have conjectured that up to 35% of all jobs might be automated by 2035. (Ironically, would that include the clerk in the Job Centre? I hope not.)

Machine Learning

The world is full of data. There is more and more of it everywhere, whether technology or art, in the form of images, music, videos, words, spreadsheets, documents, source code and so much more.

There seems to be a never-ending daily increase in this data, with no sign that it will slow down in the future.

We see machine learning all around us in the products we use every day. From tagging objects and people in pictures, to features like video recommendation systems.

Machine learning is one way to achieve artificial intelligence, and it is based on the idea that machines should be able to learn and adapt through experience.
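
As a toy illustration of ‘learning from experience’, the sketch below (using scikit-learn, with entirely made-up numbers) is never given an explicit rule; it is only shown labelled examples and infers a pattern it can apply to new data:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical "experience": hours of practice and whether a test was passed.
hours_practised = [[1], [2], [3], [8], [9], [10]]
passed = [0, 0, 0, 1, 1, 1]  # 0 = failed, 1 = passed

model = DecisionTreeClassifier()
model.fit(hours_practised, passed)   # learn from the examples
print(model.predict([[7]]))          # predict an unseen case -> [1]
```

Swap the toy numbers for millions of labelled images and far larger models, and you have the recipe behind the photo tagging and recommendation systems mentioned above.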

Deep Learning

Deep learning is a subset of machine learning in artificial intelligence (AI) with networks capable of learning unsupervised from unstructured or unlabelled data.

In 2015, Alexander Mordvintsev, a researcher, artist and Google engineer, created DeepDream, which in turn has spawned an entirely new subgenre of art using neural networks.

DeepDream is a computer vision program built on a deep neural network (also referred to as deep neural learning), and it has transformed how we can visualise images with the application of AI.
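
At its core, DeepDream runs a trained image-recognition network ‘in reverse’: instead of adjusting the network to fit the image, it adjusts the image to excite the network’s learned features, which is where the hallucinatory eyes and animal shapes come from. The sketch below is a stripped-down gradient-ascent version in TensorFlow; the choice of InceptionV3 and the ‘mixed’ layer names follows common tutorials and is my assumption, not Google’s original code:

```python
import tensorflow as tf

# Load a pretrained network and expose a couple of its intermediate layers.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
layer_outputs = [base.get_layer(name).output for name in ("mixed3", "mixed5")]
dream_model = tf.keras.Model(inputs=base.input, outputs=layer_outputs)

def dream_step(img, step_size=0.01):
    """Nudge the image so the chosen layers activate more strongly."""
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = dream_model(img)
        loss = tf.add_n([tf.reduce_mean(act) for act in activations])
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8   # normalise the step
    img = img + grads * step_size               # gradient *ascent*, not descent
    return tf.clip_by_value(img, -1.0, 1.0)

# Start from noise (or a photo preprocessed to the [-1, 1] range) and iterate.
img = tf.random.uniform((1, 299, 299, 3), minval=-1.0, maxval=1.0)
for _ in range(100):
    img = dream_step(img)
```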

You may be familiar with using a variety of Photoshop plugins and actions to create a range of visual effects. If you are at all interested in computer graphics or photography, I strongly recommend you explore the fascinating and accessible AI art generator tools.

AI Deep Learning – Computer Vision

In addition to DeepDream Generator, the following list outlines a brief evolution of computer vision and the ways machines have developed in interpreting images:

  • Pix2Pix (2016): image-to-image translation that works by using paired training images.
  • CycleGAN (2017): converts images between domains without the need for paired training images.
  • BigGAN (2018): developed by Dr Andrew Brock et al., it focused on increasing the scale and number of parameters of the model, resulting in ‘High Fidelity Natural Image Synthesis’.
  • StyleGAN (2018): developed by Nvidia researchers. This really highlights how careful we need to be when viewing human faces; for example, check out thispersondoesnotexist.com (there are equivalents for art, cats, horses and chemicals).
  • Sketch2Code (2018): developed by Microsoft AI, it transforms a hand-drawn design into HTML code. For those who have tried to write raw HTML by hand, I think this is an amazing concept.
  • Fashion++ (2019): developed by Facebook AI, it is designed to recommend changes to an outfit, including whether it would be considered fashionable or not.
  • AlphaFold (2020): developed by Google-owned DeepMind, it used deep learning to tackle the 50-year-old problem of how a protein folds into a unique three-dimensional shape. This has great potential for scientific research into drugs used to treat disease.

If you are a coder, or interested in machine learning and solving challenging real-world problems, you probably already know that TensorFlow is your first port of call.
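
For readers who have never touched it, TensorFlow’s Keras API makes the ‘hello world’ of machine learning, recognising handwritten digits, remarkably compact. A minimal sketch following the standard introductory MNIST example (the layer sizes and epoch count are illustrative, not tuned):

```python
import tensorflow as tf

# Load the MNIST handwritten-digit dataset bundled with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```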

For example, we take for granted the usefulness of Google Street View, but in combination with deep learning it can be used to estimate the demographic make-up of neighbourhoods, as this research article explores.

It’s clear we are nearing a new creative revolution, and I believe there will be many more widely accessible machine learning tools for creators everywhere.

DALL-E Art

AI is more than just a tool for creating content. A neural network that can turn a written sentence into art is changing our understanding of the creative thought process. DALL-E is a neural network that takes a line of text and generates an image to match the description.
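
Programmatically, generating a DALL-E image is now only a few lines against OpenAI’s API. The sketch below uses the openai Python library as documented at the time of writing; the model name, image size and reliance on an OPENAI_API_KEY environment variable are assumptions that may change as the service evolves.

```python
from openai import OpenAI

# Assumes an API key is available in the OPENAI_API_KEY environment variable.
client = OpenAI()

response = client.images.generate(
    model="dall-e-3",   # model name as documented at the time of writing
    prompt="An acrylic painting of teaching a robot to do your job, "
           "and then wallowing in obsolescence.",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # a temporary URL to the generated image
```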

The ethics of AI art and DALL-E, however, pose new challenges. The type of technology DALL-E and similar tools use will make it simpler to produce misleading images, and will make it easier for rogue actors to synthesise still photos and, eventually, video.

The possibility of text-to-image AI generated art replacing graphic design jobs raises further serious ethical concerns.

You can imagine that more of us will be able to do graphic design, because we will be able to say “paint me a picture” and get that picture whenever we want, where previously that image would have been created by a graphic designer or artist.

Graphic design is not going away, but because these tools are now available to everyone, graphic designers may find themselves with less work.

However, what I’ve discovered with this new technology is that it can help artists get out of a creative slump. The thought that goes into crafting the image request can, in itself, result in more creativity.

The author generated this image in part with DALL-E, an artificial intelligence program developed by OpenAI that creates digital images from textual descriptions. Upon generating this image, I take ultimate responsibility for its content.
“An acrylic painting of teaching a robot to do your job, and then wallowing in obsolescence.”

Programming and Encrypting the Art

Artists and computer scientists learn and research the possibilities of applying “artistic thinking” and “engineering thinking” to their processes. Although there is widespread discussion about whether coding is an art form in itself, many people argue that computer codes and algorithms can be understood as a creative and artistic practice.

Let’s survey how the worlds of programming and art interact. In 2013, the Smithsonian’s Cooper Hewitt, National Design Museum began acquiring ‘art code’ for its permanent collection. In collaboration with Ruse Laboratories, the museum then held the first algorithm auction in 2015; the sale included key figures in the art world and was the first auction to celebrate computer art.

Artists who work with computer art are known as “interactive artists”, and we can divide them into two groups depending on their approach:

  • The first group is interested in representing the code itself. For example, “Turtle Geometry”, a system created in 1981, shows how the effective use of computers can change the way students interact with and understand maths. The system was developed by MIT professor Hal Abelson, who has campaigned for the freedom of the Internet. “Turtle Geometry” had a major impact on the world of technology education and was featured at the auction as a commemorative print of one of its earliest versions, signed by Abelson.
  • The second group of interactive artists aims to create works of art in which the code is not directly represented, but through which the viewer can see the coder’s ability to build pieces that are practical or have aesthetic value.

For example, DevArt, a collaboration between the Barbican in London and Google, is a platform where programmers push the boundaries of art and technology to create amazing art. The platform brought together some of the best interactive artists in the world to use code to create unique pieces of art.

The Digital Revolution exhibition explores and celebrates the influence of artificial intelligence, virtual reality and other technologies on the execution and understanding of the links between art and technology.

Art with Technology

Through my research, I discovered the Paris-based arts collective Obvious and took the time to read their manifesto, which gives a great insight into their modus operandi.

For example, La Famille de Belamy is a series of Generative Adversarial Network portrait paintings created in 2018 by the collective. These paintings are based on numerous images of classical European art, which are then fed through the mathematical formulas of the GAN.

Blue Origin, the space company founded by Jeff Bezos, will carry Edmond de Belamy on its first journey to space. This iconic work was the first piece of AI-generated art sold at Christie’s, fetching $432,500 in 2018.

I strongly recommend you check out the work of Obvious; their manifesto and concepts make for some thought-provoking reading and viewing.

I think it would be very interesting to see how an algorithm would treat a variety of abstract acrylic fluid paintings, as this technique in itself is so full of ‘luck and chance’. (Luck is inevitable but chance is optional, or maybe, chance is the first step you take and luck is what comes afterwards?…)

“The shadows of the demons of complexity awaken by my family are haunting me.”

Virtual Reality in Art & Technology

The advent of digital language has expanded and promoted processes of hybridisation and cross-contamination in creative media, changing art and artistic processes. While analogue media (photography, cinema, video, music) gained almost instant, if tense, acceptance in the arts, the use of computers and digital language met resistance because of the anti-technological sentiment of the 1970s and 1980s.

Reinforced by the significant militancy of ecological groups and anti-nuclear movements, this feeling meant that artistic processes structured in an exclusively digital language, such as collaborative network creation, projection mapping, virtual environments, interactive works and interventions in the virtual world, did not really appear until the end of the 20th century.

From a technical point of view, creations for virtual environments (virtual reality) immerse the user (we can no longer talk about viewers) completely in a computer-generated three-dimensional reality, experienced through an optical device (a headset or 3D glasses) and/or accessories such as gloves, clothing or joysticks.

Augmented Reality

Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory.

Although a number of patents have been filed for AR contact lenses, so far only glasses have made it to market. Initially cost-prohibitive products like Google Glass and Microsoft HoloLens are now mostly marketed as enterprise versions, for use in fields from medicine to mechanics.

A number of Kickstarter products, and undoubtedly some reverse-engineered ones, are starting to bring the cost of the technology down. At the time of writing, it is rumoured that Apple’s Vision glasses may redefine wearable computing. Apple may even attempt to turn us into bats, if this patent is anything to go by.

All these changes in science and technology in recent times have altered the human condition. As the anthropologist Professor Paula Sibilia claims, contemporary technoscience, or post-organic technoscience (Sibilia, 2002), ‘in the face of the failure of the instrumental-rational narrative of the modern world, strikes an unconstrained one conquering body and nature without limits.’

In this movement, the life sciences and their relationship to technology dissolve the traditional metaphysical dichotomies of mind and body, mind and matter, subject and object, nature and artifice, along with the human condition, the frailty of the physical body, and the limits of space and time bound up with their materiality.

The question is no longer the development of material conditions for human existence through technological progress or the mastery of natural forces, but the transcendence of the human condition itself. (I feel a Terminator plot coming on.)

Relationship between Art and the Internet

Logically, most modern production that has anything to do with science is centred on the internet and the online virtual world. This has changed the concept of art in many ways, offering literally everyone the opportunity to show their work, and giving audiences the chance to find every piece that interests them.

Additionally, the internet has influenced the art market more than we can tell. For example, Petra Cortright creates her images to study issues of online consumption, from webcam self-portrait videos to rendered paintings. Cortright makes endless changes to the computer file until she is satisfied with the final image, before printing the final two-dimensional image onto a range of substrates.

Relationship between Art and Technology

With these aspects in mind, the diagnoses that predict the decline of art in general as a result of exposure to technical or media art become relative. Technology and technological products, particularly technology as a goal-oriented process, have always influenced and played an important role in creativity.

The fact that every work of art is based on a system of rules and techniques, in addition to creative intuition, shows that technique is fundamental to art. Technological products, and the historical state of technological development, have likewise always influenced artistic ideas and works.

For example, sculpture could only flourish after the development of metal tools; linear perspective in Western art developed in the early 15th century; and colour pigmentation improved during the 19th century.

The problems of contemporary art result less from the use of technology and purposeful procedures than from the fact that the rationalised, expediency-driven paradigms of technology are beginning to suppress artistic criteria and intentions. This fear seems particularly justified when aesthetic criteria, however questionable, may have to give way to the idea of feasibility and mere sensationalism.

Hence, we need to consider the gradual changes, the fragility, and the intersections that occur as technology is integrated into artistic activity, in order to determine whether the currently observable dominance of technology restricts this last remaining free space of human expression, with its true emotional intelligence, or whether there is still freedom for creative innovation and new forms of interpretation.

Edmond de Belamy, a GAN algorithm, inkjet printed on canvas, by Obvious

Conclusion: Thought Provoking New Opportunities

After all, art and technology can be related in a number of ways, from using code to create wonderful works of art, and encouraging people to learn about and research computer technology, to adopting the mindset of artists to reshape traditional technology practices.

When the evocative power of art is combined with the practical solutions of technology, innovative and meaningful products and experiences emerge, and the areas in which technology and art operate expand and develop. Art and technology are migrating to non-traditional practices, and both artists and the tech industry can take advantage of these thought-provoking new opportunities.

Closing Thoughts…

Hopefully, I haven’t lost you in this somewhat eclectic rambling, and if you notice any inaccuracies or have any suggestions to correct the record, please do not hesitate to contact me.

Maybe you have an idea in mind that Ren Creative Works will be able to apply creative skill and imagination to. You might even be lucky enough to receive a unique, ‘human-produced’ acrylic fluid artwork that reveals, through luck and chance, your very own element of pareidolia!

So I might not be staring; I might just be thinking, and trying to Bridge the Gap between Art and Technology.

If you think a friend, family member, or business associate might be interested in this article, please feel free to share and subscribe.

Thank you for reading.

Post Illustration:  Obvious, ‘Edmond De Belamy’, 2018

Goodfellow, I., Bengio, Y. and Courville, A. (2016) Deep Learning. MIT Press. http://www.deeplearningbook.org

Gebru, T., Krause, J., Wang, Y., Chen, D., Deng, J., Lieberman Aiden, E. and Fei-Fei, L. (2017) ‘Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States.’ Proceedings of the National Academy of Sciences, 114(50), 13108-13113. DOI: 10.1073/pnas.1700035114


