AI is here: What happens next?
The impact of AI tools like DALL•E will be felt for years to come. Learn who will be affected first, and what it means going forward.
AI tools are beginning to have a tremendous impact on our world, and the speed at which this change is occurring is catching many people by surprise. The past few years have seen enormous strides in the field of generative AI models like DALL•E, with new advances arriving in rapid succession, and now these tools are starting to become available to the general public.
While their long-term influence will be significant, AI tools are going to impact different markets at different times. Some have already been affected, and others won’t see the full impact for years to come.
Today we’ll be taking a look at the short-, medium-, and long-term impacts that co-creative AI tools will have on the world, who these changes will affect the most, and when.
Generative AI art is still in its early days, with the wider public only recently gaining access to some of these tools. Until now, the tools have mostly been relegated to academic research settings, where the artistic quality of the output hasn’t necessarily been a major consideration during prompt construction. As a wider variety of artists gain access to these tools, we’re starting to get a better idea of the limitations and potential of these models, and taking the first tentative steps toward thinking about what the future of AI-driven co-creative tools might look like in practice.
As such, we now have a better idea of what they can do right out of the box, with very little effort made toward prompt optimization — and we know which industries will be immediately affected by generative AI art tools as a result.
Who sees the most short-term impact?
- Creative artists of all sorts
- Stock photography sites
- Prompt engineers
Creative artists of all sorts
Digital artists, photographers, writers, and creatives of all stripes will be affected by the adoption of AI tools like GPT-3, DALL•E, and Stable Diffusion. Not only will these tools introduce all sorts of new competition into already-crowded fields, they skew the balance of time versus quality so heavily that it will be nearly impossible for artists to compete with them unaided. Widespread adoption of these tools among creative professionals might come more from competitive necessity than anything else.
Stock photography sites
Stock photography companies operate at volume, and will immediately be impacted by AI tools like DALL•E. Traditional stock photography companies make their profits by acting as centralized storefronts for a vast number of independent photographers looking to make a living a few cents at a time. Stock photography sites typically pay a very small amount per sale to the artist, but turn around and charge many multiples of that to anyone looking to license the image for their newsletter or website.
Generative AI art tools like DALL•E allow anyone to become their own stock photographer, getting the exact image they want at a fraction of the traditional cost, assuming they’ve got a good in-house prompt engineer, at least.
Prompt engineers
The field of prompt engineering might be relatively new, but it will be increasingly important to these co-creative AI tools, and we’ll likely see specialization and focus on prompt optimization within models like DALL•E and Stable Diffusion as a new generation of creative professionals learns how best to use these new tools. Until then, there will likely be increased demand for competent prompt engineers to do the work required for early-stage exploration of this emerging field.
In the medium term, AI tools will be integrated into apps, with some of the more idiosyncratic behavior of models like DALL•E abstracted away into UI elements rather than being directly visible in the prompt itself. For example, including “award winning” in a DALL•E prompt significantly improves the quality of the output, and it can be stacked throughout a prompt to greater effect; in practice, it’s incredibly annoying to type out repeatedly, and the sooner it’s abstracted away into a UI element the better. This could be accomplished by creating a user interface that allows blocks of text to be tagged with different criteria (such as art style, artist skill level, or emotions to evoke), which could then be applied automatically to the full prompt sent to DALL•E, without the end user needing to be aware that “award winning” appears no fewer than seven times in their prompt.
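To make the idea concrete, here’s a minimal sketch of how such an app might expand tagged criteria into a full prompt behind the scenes. The function names, tags, and booster phrases are all hypothetical illustrations, not a real DALL•E API:

```python
# Hypothetical sketch: a UI layer expands user-selected tags into
# repeated prompt modifiers before the prompt is sent to the model.

QUALITY_BOOSTERS = {
    "award_winning": "award winning",
    "high_detail": "highly detailed",
}

def build_prompt(subject: str, tags: dict) -> str:
    """Expand UI tags into a full prompt, repeating each booster
    phrase as many times as the user's tag count requests."""
    parts = [subject]
    for tag, count in tags.items():
        parts.extend([QUALITY_BOOSTERS[tag]] * count)
    return ", ".join(parts)

prompt = build_prompt("a lighthouse at dusk", {"award_winning": 3})
print(prompt)
# a lighthouse at dusk, award winning, award winning, award winning
```

The end user would only ever see the subject and a few toggles; the repetition lives entirely in the app layer.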
We’re already starting to see some of this with code integrations from OpenAI Codex and GitHub Copilot, both of which assist programmers in writing their code. While the number of supported programming languages and IDEs remains somewhat limited, more languages are being added every day, a trend that’s not likely to reverse any time soon. Likewise, writers are already able to work with various AI tools to help brainstorm ideas, summarize text, or act as a virtual editor.
Who sees the most medium-term impact?
- Programmers
- Professional writers
Programmers are probably the group that will adapt most naturally to co-creative AI tools, as a number of automation and code-completion solutions already exist in the daily-use toolkit of the modern developer. The biggest advantage will be held by those who are able to use AI tools to augment their existing skillset, opening up new avenues that may have previously been inaccessible to them.
Professional writers will benefit greatly from AI tools, which can act as a side-by-side editor and co-author: expanding on topics, consolidating larger blocks of text into tl;dr’s, and even generating text at different reading levels. The impact will likely be felt at large editorial shops, where fewer staff writers and editors may be needed, as well as among ghostwriters, who will soon find a solid ally in AI when it comes to slogging through the boring parts of the writing process. They’ll be able to concentrate on the intent of the piece while the AI acts as fact-checker or fact-fetcher, cross-referencing concepts and ideas across a wide range of sources and data sets.
In the long term, co-creating with AI tools will be the norm, and will likely be built into the social media layer as part of a shared canvas that feeds back into itself, where the art we co-create with AI tools is used to train future generations of those same tools.
Who sees the most long-term impact?
- All of us, as we start participating in this shared creative canvas
There is a huge amount of potential in AI tools, both as a new means of creation as well as communication.
There is no reason to think that these impacts will be limited to still images. A gif is just a series of still images played in rapid succession. A movie is simply a gif with a soundtrack. So if you take a look at where things stand today, it’s not hard to see the huge impact that AI tools are about to have on our world.
Buckle up, it’s going to be a wild ride.
Next week, we’ll cover the darker side of these AI tools, and the negative consequences we’ll face from their widespread adoption.
Nicholas Ptacek is a veteran writer and technologist, with close to 20 years of experience in the cybersecurity industry building award-winning computer security software. His work has been featured extensively in print and news media, including CNNMoney, Macworld, and MacDirectory magazine, along with numerous press appearances in publications including The Information and Vice.
Nicholas has been documenting his journey as he co-creates art with GPT-3, DALL-E, and Stable Diffusion. You can follow his AI experiments on Twitter at: @nptacek
This is essay 3 of 4 for 1729 Writers’ 2nd Cohort