As of 2024, I don't think we have found an application of AI in UX that makes me think "it's here to replace us." Most of what I have seen lacks 'taste': a subjective evaluation of an outcome shaped by your upbringing, your exposure, and so many other factors. Take Apple's implementation, for example.

To my mom and dad, this would be 'mind blowing'. Generating custom digital art in seconds, based on any situation you can think of, is indeed impressive. But to me it feels a bit off - something about the use of colors, the style of art, and the interpretation of race/age just feels a bit distasteful. The point I am trying to hammer home is that 'taste' is something the AI model is not actively being trained on - when the model produces these outcomes, it is not looking at the image and critiquing it like 'yeah, the background is too busy'. Because of this lack of 'taste', where I see AI excelling is as a really good assistant - something that can help me do what I do today, but way faster, rather than replacing me completely. What does this mean for designers? Here are my thoughts:
Avoiding the 'blank canvas' problem
How many times have you gone through this: you want to update a landing page made by some other designer, but it was built four years ago and the designer no longer works at your company. Or how about trying to refresh a page to match a new design system or brand? I think AI can do a lot here - train the model on a design system, then on documentation and real-world usage, so it can stand up an initial design immediately.
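To make this a bit more concrete, here is a rough sketch of how it could work under the hood. None of this is a real product or Figma feature; the design tokens, component docs, and output format are assumptions for illustration, and the only real API used is a standard LLM chat-completions call:

```typescript
// Hypothetical sketch: ask an LLM for a first-pass page layout that only uses
// an existing design system. Token/doc shapes and the JSON output format are
// illustrative assumptions, not a real design-system or Figma API.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Pretend these were exported from your design system and its docs site.
const designTokens = {
  color: { primary: "#0B5FFF", surface: "#FFFFFF" },
  spacing: [4, 8, 16, 24],
};
const componentDocs = [
  "Button: primary/secondary variants, 40px height",
  "Card: 16px padding, 8px corner radius",
];

async function draftPage(brief: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a design assistant. Propose a page layout as JSON " +
          "(sections, components, copy placeholders) using ONLY these tokens and components:\n" +
          JSON.stringify(designTokens) + "\n" + componentDocs.join("\n"),
      },
      { role: "user", content: brief },
    ],
  });
  // A Figma plugin could then turn this JSON spec into real frames on the canvas.
  return response.choices[0].message.content;
}

draftPage("Refresh our four-year-old pricing page to match the new brand").then(console.log);
```

The interesting part is not the API call but the context: the more of the design system and its real-world usage you can feed in, the less 'blank' the canvas is.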
Super-charged explorations
Every designer's workflow is unique, but I like to do a lot of divergent explorations whenever I am working on a project - divergent enough that I can eventually visualize the spectrum of crazy and practical design ideas, and then get user/team feedback on what might actually work. An ideal use case here would be 'supercharged explorations'. Given an initial direction and a set of constraints, imagine if the AI assistant in the design tool (Figma?) could help you explore faster.
This could be great for getting an initial sense of how divergent you should go, what could work, what would make sense to evaluate further through testing, and even surfacing combinations you would not have thought of.
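As a rough sketch of what the mechanics could look like (purely hypothetical: the plugin, the getVariationIdeas placeholder, and the frame sizes are all assumptions, not an existing Figma capability):

```typescript
// Hypothetical Figma plugin sketch (needs @figma/plugin-typings): fan out N
// "divergent exploration" frames from a prompt and constraints. The
// getVariationIdeas() function stands in for an LLM call; its output shape is
// an assumption for illustration.
type VariationIdea = { name: string; rationale: string };

async function getVariationIdeas(prompt: string, count: number): Promise<VariationIdea[]> {
  // Placeholder: a real plugin would call an LLM here and parse its response.
  return Array.from({ length: count }, (_, i) => ({
    name: `Exploration ${i + 1}`,
    rationale: `Variation ${i + 1} of: ${prompt}`,
  }));
}

async function explore(prompt: string, count = 5) {
  await figma.loadFontAsync({ family: "Inter", style: "Regular" });
  const ideas = await getVariationIdeas(prompt, count);

  const frames = ideas.map((idea, i) => {
    const frame = figma.createFrame();
    frame.name = idea.name;
    frame.resize(1440, 1024);
    frame.x = i * 1540; // lay the explorations out side by side

    const note = figma.createText();
    note.characters = idea.rationale; // seed each frame with the AI's rationale
    frame.appendChild(note);

    figma.currentPage.appendChild(frame);
    return frame;
  });

  figma.viewport.scrollAndZoomIntoView(frames);
  figma.closePlugin(`Generated ${frames.length} exploration frames`);
}

explore("Checkout flow for a mobile grocery app, keep it to 3 steps");
```

Even a dumb version of this (empty frames plus a written rationale per direction) would shortcut the first hour of a divergent exploration session.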
No-frills prototyping
Listen, I'm not going to lie – I'm not a fan of prototyping in Figma. I much prefer using a rapid prototyping/implementation tool like Framer. But due to work constraints, there are times when I still have to use Figma, and I think I speak for everyone when I say no one likes the noodles (that tangle of prototype connectors you end up with).
Faster research & testing
Summarizing insights is the most obvious use case, and one we have already begun to see, but where I imagine generative AI can really cook is in understanding the context, recommending the best path forward to test a feature or UI, and drafting an initial plan for the designer.
Imagine this:
You conduct some supercharged explorations, but are undecided between two designs
You then proceed to convert the two designs into functional prototypes
Now at this point, what if an AI agent/model could help you draft a quick concept test plan based on the two selected designs?
Even better, what if it were connected to UserTesting.com and could go through the recordings and prepare an executive summary of task performance and insights?
Not only can it help designers constrained by resources or capacity move faster, it can also make early-career designers more confident in conducting their own research.
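Here is a rough sketch of just the summarization step, assuming the session recordings have already been exported as plain-text transcripts (the UserTesting.com integration itself is hand-waved) and a standard LLM chat-completions API:

```typescript
// Hypothetical sketch: turn exported usability-session transcripts into an
// executive summary with task-performance notes. File layout, prompt wording,
// and task names are illustrative assumptions.
import { readFileSync, readdirSync } from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

async function summarizeSessions(transcriptDir: string, tasks: string[]) {
  // Gather every exported transcript into one labeled blob of context.
  const transcripts = readdirSync(transcriptDir)
    .filter((file) => file.endsWith(".txt"))
    .map((file) => `--- ${file} ---\n` + readFileSync(`${transcriptDir}/${file}`, "utf8"));

  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a UX research assistant. For the tasks below, report completion issues, " +
          "notable quotes, and a short executive summary comparing Design A vs Design B.\n" +
          "Tasks:\n" + tasks.join("\n"),
      },
      { role: "user", content: transcripts.join("\n\n") },
    ],
  });

  return response.choices[0].message.content;
}

summarizeSessions("./transcripts", [
  "Task 1: Add an item to the cart",
  "Task 2: Complete checkout as a guest",
]).then(console.log);
```

The executive summary is a starting point, not a verdict: the designer still watches the sessions that matter, but skips the grunt work of getting there.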
This is just one fragment of my deeper reflection on AI for UX Design. It is definitely not an exhaustive list of all potential use cases. In fact, it is still quite narrow and primarily focused on Figma. More thoughts to follow...