Although AI seems like a new topic, I'm pretty sure we've all been using AI daily, even before the latest tools that emerged recently. Think about Face ID, traffic data, chatbots, and, on an even more common level, social media. Although these examples are well integrated into our touchpoints and therefore less obvious, they have been part of our lives for quite some time.
Recently, a new wave of AI tools emerged that are good at creating various types of creative content, driving the buzz that AI appeared out of nowhere. It's helpful to remember that AI is not new; rather, it now offers exciting new applications and possibilities that were not addressed in such a way before.
I'm currently using a few tools frequently and experimenting with several others. The ones that made the most sense to integrate into my work routine were ChatGPT and Notion (specifically Notion AI), as they improved my writing by a huge margin while making the process more streamlined and efficient. Other tools worth mentioning are Midjourney and DALL·E for AI-generated artwork. There are also a few other AI tools and plugins that work well for specific tasks; I experiment with these daily, and the list keeps changing. One tool I'm very curious about (as it is not available yet) is Adobe Firefly.
Depending on the specific use case, some tools have a more straightforward path than others to production-ready results. Text-based AI output is editable; you can quickly work with it and adjust it. AI-generated photo, video, and audio are a different game, since you rely on the end result and have no source file to alter (even if you know how to work with multimedia tools). You often end up with limited editability and have to ditch or redo the task, again with little control over the deliverable.
Writing seems a lot easier now; for instance, I let ChatGPT inspire a certain piece of copy and then work with it in tandem to get a solid end result. On various occasions, such tools save time on repetitive tasks and shorten the path to the end result if used appropriately. In parallel, they can be used for learning about and validating various topics; the range of use cases is limited almost entirely by your imagination.
For instance, on a data level, we've managed to take advantage of ChatGPT by importing relatively unstructured conversational UX data, pinpointing common UX pain points, and drawing conclusions from it. This saved us a lot of time that we would otherwise have invested in a typical research and synthesis process.
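A minimal sketch of what that process can look like in practice: batching raw feedback snippets into a single prompt that asks an LLM such as ChatGPT to surface recurring pain points. The snippets, prompt wording, and function name here are all hypothetical illustrations, not our actual data or pipeline.

```python
def build_pain_point_prompt(snippets: list[str], max_snippets: int = 50) -> str:
    """Assemble one prompt from raw, unstructured feedback snippets."""
    numbered = "\n".join(
        f"{i + 1}. {s.strip()}" for i, s in enumerate(snippets[:max_snippets])
    )
    return (
        "Below are raw user feedback snippets from usability sessions.\n"
        "Identify the most common UX pain points and summarize each one,\n"
        "citing the snippet numbers that support it.\n\n" + numbered
    )


# Illustrative feedback; in a real run these would come from session logs.
feedback = [
    "Couldn't find the export button anywhere.",
    "The export option is buried three menus deep.",
    "Search results load very slowly on mobile.",
]

prompt = build_pain_point_prompt(feedback)
# The prompt is then sent to the model via its chat interface or API;
# the model's summary replaces hours of manual clustering and synthesis.
```

The interesting design choice is that the model does the clustering for you: you hand it messy input plus a clear instruction, rather than pre-structuring the data yourself.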
Like with any new technology, the bar for entering a particular industry will be lowered. We can draw parallels from the past and conclude that such technologies empower creation and innovation rather than suppress it.
Having access to such tools is undoubtedly beneficial, but it still does not make you superior to domain-specific experts (for now). AI is (and will probably remain for some time) analytic and task-specific, and it works best in tandem with humans who have a reasonable understanding of the big picture, rather than on its own.
Superstar designers will most probably remain unaffected, but it will get increasingly difficult to differentiate genuine mainstream designs from AI-generated ones, although that may make little difference in value to the end user.
For sure, AI is still in its infancy, and we are witnessing new and improved algorithms day by day that are exponentially more powerful than their predecessors. The current models still fail to fully understand the context of their generated results. This leads to overlapping geometry and weird body anatomy in pictures, and made-up data points in copy. Nonetheless, if you work in tandem with AI, you can overcome those shortcomings.
We have established a baseline of how those tools function and continue to discover how to integrate them into our workflows efficiently. For me, the future does not necessarily depend on stand-alone AI tools but rather on proper integration into existing platforms. A good example is how we perceive AI at the moment: because it has generated a lot of buzz, a vast number of AI tools have emerged. All of them are single touchpoints, and most of them won't last the year.
On the other hand, think of Google Lens, an AI-powered, vision-based search engine that has been with us since 2017. It's the same type of AI that makes a difference and helps us execute tasks faster, just integrated into a bigger ecosystem. In the future, this will probably be the case with all newcomers: a small portion of them could continue their journey on their own, while most will get acquired and integrated into bigger ecosystems.
This is a tough one at the moment. AI currently struggles with this topic for a good reason: most tools currently operate on their own.
In terms of trademark law, here's an example: you can't easily get a proposal for a logo design that uses a specific font, or get a variation of existing fonts. Instead, you get a distorted font-like result that you can't do much with except use as inspiration. Were such a tool integrated with a real font database that allowed you to license the result, we'd move into a much more usable spectrum. I certainly believe that this will be the next step for AI: customizability, licensing, and editing of the results.
As for the ethical component, probably the most debated topic to date is dataset bias. The thing is, AI does not only replicate human biases; it confers on these biases a kind of scientific credibility, making it seem that the results have an objective status. The question to ask is: how do we set up ethical gatekeepers, and how do we define what's ethical and how AI should interpret it?
While most tools currently try to safeguard results against alteration (i.e. you can't easily generate a specific person into a generated environment), this still proves a dangerous point: those tools are capable of it. Still, since we can't assume that big tech and market forces will sort it out by themselves, my outlook suggests that formal regulatory frameworks for the ethical use of AI may be put in place in the future. For now, limits will be pushed, and it remains to be seen which direction AI will take.