I am facing an existential crisis, one echoing throughout the land among fellow creatives in multiple industries. It's a crisis not seen since the invention of photography nearly two centuries ago, when painters and illustrators had to adapt or go under as the juggernaut of a new technology revolutionized the world.
Yet despite my reservations about facing artistic annihilation, I can't help but be swept away in the hype surrounding generative AI in the creative arts. These models generate detailed original images from text prompts. OpenAI's DALL-E, Stable Diffusion, and Midjourney all emerged over the summer and have caused a frenzy for all the right and wrong reasons: excitement about their creative and economic potential on the one hand, and serious concerns about ownership and security on the other.
The creator economy is a multibillion-dollar industry. AI-generated art is a radical approach to art-making in which humans and machines collaborate to produce work. It's what VentureBeat called the 'mediamorphosis of media,' where one medium is transformed into another and is forever changed.
As more startups hit the market for a piece of the creative-AI pie, do they represent a real shift in the art market, or are they just new tools and mediums? OpenAI's DALL-E launched in April 2022 and now has 1.5 million users generating 2 million images per day. Google's new text-to-video tool, Imagen Video, creates video clips from text prompts; Google Brain researchers trained it on an internal dataset of 14 million videos and 60 million images. At present, only a select few can trial the technology before it opens to the public.
Generative AI opens up new revenue streams for tech companies and extends to other forms of content creation, like music and podcasts. But there are legal and ethical issues that could eat into profits.
There are legitimate concerns that AI tools can spread misinformation or infringe on artists' copyright. Traditionally, images are licensed from stock photo sites like Getty Images and Shutterstock, and news outlets credit original artworks. In September, Getty Images banned AI-generated art from its stock photography site, citing concerns that these models scrape publicly available content from the internet to produce new imagery without credit or compensation to the original creators. Critics argue this oversteps US fair-use doctrine, under which new work must be sufficiently transformative to qualify as fair use.
To address these concerns, Shutterstock recently announced a partnership with OpenAI to provide AI-generated stock images. The stock photo giant claims it will compensate original artists and be fully transparent about how each AI image was generated. Photographers will have to wait some months before this service is rolled out to them.
DALL-E, Stable Diffusion, and Midjourney have policies in place that restrict certain uses of their technology, prohibiting depictions of politicians and celebrities as well as sexual or violent imagery. But what protections are in place for working artists? Stable Diffusion offers an opt-out for artists who don't want their work used in training, and pressure from critics and artists alike suggests other AI art generators will develop systems of their own. Why are these companies only addressing such issues after the public release of their platforms? Living artists should have been part of the conversation from the start, but this is rarely the case.
So what does this mean for artists and content creators as generative-AI tools emerge to assist with production? Creators can collaborate with AI to generate images, audio-visual content, text-to-music, and text-to-video, tools already commercially available to the visual effects industry. But how long before this human-machine collaboration removes the human element altogether? Or are we getting ahead of ourselves?
Making movies with AI is a bad idea, according to Collider. As in the art world, the development raises questions about auteurship. Will audiences accept an AI-generated film, or feel conned? And what does it mean for the human creatives who provide voice-over and music services?
Performer unions like SAG-AFTRA and Equity want laws in place to prevent the AI synthesis of a performance without permission; at present, AI technology falls outside the scope of existing artist protections. While big stars like James Earl Jones have no doubt been handsomely rewarded for the intellectual property rights to the Darth Vader voice, it's the smaller players in the industry who are not being fairly compensated. The spread of AI across the entertainment industry is a genuine concern for skilled performers who face being replaced by AI systems.
Generative AI is a billion-dollar industry, with big funding for startups, entrepreneurs, and venture capitalists rebranding themselves as AI experts. This powerful technology is permanently transforming the creative industries of filmmaking, photography, art, illustration, graphic design, writing, gaming, and advertising. Whether artificial intelligence can replace human creativity is an ongoing debate still in its infancy.
Am I still worried about AI replacing my skill set? Not really. Humans are better than machines at learning: we acquire new skills, evolve, and diversify. I collaborate with machines, but they can't beat me… at least not yet.