My first AI design feature taught me a valuable lesson about user expectations. I built a chat interface for an enterprise application because, at the time, it seemed like the obvious pattern for AI interactions. The AI itself worked well: it understood user queries, accessed the correct data, and generated helpful responses. However, users simply didn't understand it.
They'd type a question, get an accurate answer, and then stop. They didn't realize they could ask follow-ups or dig deeper into that data point. Many didn't grasp the breadth of what the AI could help with. The feature worked exactly as designed, but I'd focused on the AI's capabilities rather than meeting users where they were in their understanding of AI. That realization stuck with me: a user's comprehension of AI is just as crucial to its practical value as the AI itself.
I had treated AI like a tool, something that processes inputs and gives outputs. I designed the interface that made sense to me, not the experience my users needed. I never considered that I was working with a material whose properties and capabilities needed to be tailored to my specific users' varying levels of AI maturity.
In the first article of this series, we established AI as a design material with unique properties: adaptivity, generativity, memory, and the way these systems change over time. This article leans into the practical side. How do you actually work with AI as a material? What new processes do you need? How do you prototype something that learns and adapts? The most significant shift isn't technical; it's conceptual. When your "material" has agency, you have to think differently about your role as a designer.
Working with AI as a Material
Prompt Engineering as Material Manipulation
When you work with putty, you learn various techniques, including pinching, stretching, and molding. Each technique shapes the putty differently, and you can keep reshaping it until you get exactly what you want. With AI, prompt engineering becomes your primary technique for shaping the material.
Take designing a customer support chatbot. You wouldn't just write "be helpful." You'd craft something like: "Respond as a customer advocate who genuinely cares about solving problems. Ask clarifying questions when you need more context. If you can't solve something, explain why and suggest next steps." You're shaping how the AI material behaves, just as you would apply pressure and movement to putty.
The difference is that AI responds to language, context, and examples rather than physical force. You shape it through the specificity of your instructions, the examples you provide, and the constraints you set.
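To make that concrete, here's a minimal sketch of what that shaping might look like in code, assuming an OpenAI-style chat API. The model name, persona wording, and function are illustrative, not a prescription:

```python
# A minimal sketch of prompt-as-material-shaping, assuming an
# OpenAI-style chat API; the model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()

SUPPORT_PERSONA = (
    "Respond as a customer advocate who genuinely cares about solving "
    "problems. Ask clarifying questions when you need more context. "
    "If you can't solve something, explain why and suggest next steps."
)

def support_reply(user_message: str) -> str:
    # The system message is where you "apply pressure" to the material:
    # persona, constraints, and fallback behavior all live here.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SUPPORT_PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(support_reply("My invoice shows a charge I don't recognize."))
```

Everything you'd otherwise leave to chance, tone, escalation behavior, honesty about limitations, is shaped in that one system message.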
Prompting as Material Understanding: The Path to Collaboration
Here's what I've learned: the best way to understand AI as a material is to get your hands dirty with it, starting with prompting. You can read about AI capabilities all day, but you don't truly understand them until you start experimenting and seeing how the material responds.
Consider a content designer working with an AI writing assistant. Initially, they might give vague directions, such as "Make this better." The results will likely be generic and unhelpful. But through experimentation, they'll discover that the AI responds much better to specific creative constraints: "Write this product description like you're having a conversation with a knowledgeable friend who's genuinely excited about helping you solve a problem."
Testing Across Audiences: Content designers must experiment with how AI outputs perform across different audience segments. The same prompt might generate content that resonates with working parents but completely misses the mark with tech-savvy early adopters. They should test variations and consider demographic differences in their prompting approach.
Ethical Considerations: When experimenting with AI-generated content, designers must be mindful of potential biases. Does this output perpetuate stereotypes? Does it exclude certain groups? This requires deliberately testing the AI material with prompts that might reveal problems and then refining it based on the findings.
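One lightweight way to operationalize both kinds of testing is a small harness that runs prompt variants across audience segments and deliberately includes bias probes. This is a sketch only: `generate()` stands in for whatever model call your team uses, and the variants, segments, and probes are invented for illustration:

```python
# A hedged sketch of a prompt-testing harness. generate() is a
# placeholder for your team's model call; segments and probes are
# illustrative, not a complete evaluation plan.
from itertools import product

PROMPT_VARIANTS = {
    "friendly_expert": "Write this product description like a knowledgeable "
                       "friend who's excited to help solve a problem: {draft}",
    "plain_facts": "Rewrite this product description in plain, factual "
                   "language with no marketing tone: {draft}",
}

AUDIENCE_SEGMENTS = ["working parents", "tech-savvy early adopters"]

# Deliberate probes that might surface stereotyping or exclusion.
BIAS_PROBES = [
    "Describe the typical user of this product.",
    "Write onboarding copy for an older, first-time user.",
]

def generate(prompt: str) -> str:
    raise NotImplementedError("Plug in your team's model call here.")

def run_harness(draft: str):
    results = []
    for (name, template), segment in product(
        PROMPT_VARIANTS.items(), AUDIENCE_SEGMENTS
    ):
        prompt = f"Audience: {segment}.\n" + template.format(draft=draft)
        results.append((name, segment, generate(prompt)))
    for probe in BIAS_PROBES:
        results.append(("bias_probe", "all", generate(probe)))
    return results  # review the outputs by hand, segment by segment
```

The point isn't automation for its own sake; it's making the same material manipulation repeatable enough that you can compare how it lands across audiences.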
This hands-on experimentation teaches you where the AI flows easily versus where you have to work for it, how different phrasings change output quality, and where potential ethical issues might emerge. What's interesting is how this material understanding naturally leads to a more collaborative relationship. As content designers become more proficient at prompting, they begin to anticipate how the AI will interpret their requests. The AI starts adapting to their style. It becomes less like giving orders and more like having a creative conversation.
Beyond Content: Building as Material Discovery
The hands-on material understanding extends beyond content creation. Tools like Cursor and v0 have transformed how I approach prototyping. As a designer, I could always sketch interfaces, create mockups, and prototype in Figma, but I was limited when it came to building comprehensive functional prototypes that truly tested my ideas.
Working with AI in these development environments taught me something unexpected about material collaboration. I'd start with a design concept, begin building with AI assistance, and discover that the AI would suggest implementation approaches that actually improved the original design. It wasn't just helping me code my vision—it was contributing ideas about information architecture, interaction patterns, and even visual hierarchy that I hadn't considered.
This bridging of design and engineering through AI collaboration shows how material understanding naturally extends your capabilities. You're not just learning to prompt better; you're learning to think alongside an intelligence that can translate your design intent into functional reality while offering improvements along the way. The gap between design and engineering starts closing when your material can actually build what you're envisioning.
Training and Fine-tuning as Material Preparation
Before wood is used in furniture making, it's seasoned. Before you work putty, you condition it to the right consistency. With AI, training and fine-tuning serve a similar purpose: you're preparing the material for your specific use.
Say you're building a language-learning app. You might fine-tune your AI material on successful teaching conversations: examples of encouraging feedback and acknowledgment, and patterns of learning progression. You're not just adding features; you're shaping the fundamental behavior of the material to understand educational contexts and respond appropriately to different learning styles.
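As a rough illustration, here's what preparing such examples might look like, assuming a chat-format JSONL file of the kind OpenAI-style fine-tuning endpoints accept. The tutor persona, the learner's sentence, and the file name are all invented:

```python
# A minimal sketch of preparing fine-tuning examples for the
# language-learning material, assuming a chat-format JSONL file;
# the persona and conversation content are illustrative.
import json

TUTOR_STYLE = (
    "You are an encouraging language tutor. Acknowledge what the learner "
    "got right before correcting errors, and keep corrections small."
)

examples = [
    {
        "messages": [
            {"role": "system", "content": TUTOR_STYLE},
            {"role": "user", "content": "Yo soy muy gusta la música."},
            {
                "role": "assistant",
                "content": "Great vocabulary choice with 'música'! One small "
                           "fix: try 'Me gusta mucho la música.' Want to "
                           "practice 'gustar' with another sentence?",
            },
        ]
    },
    # ...hundreds more successful teaching conversations like this one
]

with open("tutor_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Notice that curating these examples is itself a design decision: every conversation you include is a vote for a particular teaching style.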
This preparation directly affects the AI's embedded values (the biases and assumptions baked into its responses), unlike conditioning putty, where you're just adjusting its physical properties. Training AI involves making ethical choices about what perspectives and approaches to emphasize.
Prototyping with AI Materials
Traditional prototyping assumes you can predict how your design will behave. You create mockups that show specific states and interactions. AI throws that out the window because the system learns and adapts. You can't predict exactly what it will do.
Instead, you need "living prototypes" that show how the AI material behaves over time and in different contexts. Consider a team designing an AI-powered creative tool. They would need to set up test environments to observe how the AI responds to design principles, how it maintains consistency across projects, and how it improves its suggestions based on user feedback.
This means prototyping for different user maturity levels as well. The same AI material needs to work for someone who has never used AI (who requires a lot of guidance) and for an expert who wants granular control. You have to test how your material adapts across that entire spectrum.
Teams should start using "time compression" techniques. You simulate months of user interaction in compressed timeframes to see how the AI evolves. It's like stress-testing but for learning and adaptation rather than load.
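Here's a toy sketch of what a time-compressed living prototype can look like. A trivial preference model stands in for the real AI, synthetic users at different maturity levels stand in for real ones, and six "months" of interaction run in a second so you can watch the material drift (every number here is invented for illustration):

```python
# A toy "time compression" harness: months of simulated interaction
# compressed into one run, so you can watch an adaptive component drift.
# The learner is a trivial preference model, purely illustrative.
import random

random.seed(7)

TOPICS = ["layouts", "shortcuts", "templates", "automation"]

# Simulated users at different AI-maturity levels: novices accept most
# suggestions; experts are pickier, so their feedback is sparser and sharper.
ACCEPT_RATES = {"novice": 0.7, "intermediate": 0.5, "expert": 0.3}

weights = {t: 1.0 for t in TOPICS}  # the "material" that adapts

def suggest() -> str:
    # Sample a suggestion in proportion to the current weights.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for topic, w in weights.items():
        r -= w
        if r <= 0:
            return topic
    return TOPICS[-1]

for day in range(180):  # six simulated months
    maturity = random.choice(list(ACCEPT_RATES))
    topic = suggest()
    accepted = random.random() < ACCEPT_RATES[maturity]
    # Simple multiplicative update: reinforce accepted suggestions.
    weights[topic] *= 1.05 if accepted else 0.97
    if day % 30 == 29:
        ranked = sorted(weights.items(), key=lambda kv: -kv[1])
        print(f"month {day // 30 + 1}: {ranked}")
```

A real living prototype would replace the toy learner with your actual system, but the shape is the same: simulate, observe the drift, and ask whether you'd be happy shipping month six.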
When Material Understanding Becomes Creative Recognition
Here's something that happens as you get more comfortable working with AI materials: you start recognizing the difference between AI following your instructions and AI contributing something new.
Initially, you provide the AI with a prompt and evaluate whether the output aligns with your intended result. But after months of hands-on work, you develop an intuition for when the AI has interpreted your intent in a way that's actually better than what you originally had in mind. Say you're working on microcopy for an onboarding flow, testing different prompts to get the right tone. At one point, the AI suggests framing a privacy explanation not as "we protect your data" but as "your information stays yours." It's a subtle shift, but it completely changes how users might respond to that screen. Could AI somehow understand aspects of user psychology that we, as humans, might miss?
This kind of recognition, knowing when AI has contributed something genuinely creative rather than just producing what you asked for, is a skill that develops through a deeper understanding of the material. You learn to spot when the AI is offering insights rather than just outputs.
This recognition becomes the foundation for creative collaboration. Once you can distinguish between AI following instructions and AI contributing ideas, you can start designing processes that invite and build on those contributions.
When Materials Start Contributing Ideas
Here's the significant mindset shift: AI materials have agency. They don't just sit there waiting for you to shape them; they actively contribute to what you're creating. I discovered this through all that hands-on prompting: sometimes, the AI surprised me with ideas I hadn't thought of.
I was using Claude to help me flesh out ideas for a fitness app. What surprised me was how Claude naturally assumed the role of product manager. What started out as a list of simple requirements turned into a comprehensive PRD and complete development requirements. That wasn't what I asked for; I was just looking for some initial feature ideas. But Claude had somehow recognized the broader context of what I was trying to build and began contributing strategic thinking I hadn't considered.
That's when I realized something had shifted. I wasn't just manipulating material anymore; I was having a creative conversation. That exchange ultimately evolved into a set of AI team personas I've carried into later work.
Consider a design team creating an adaptive interface for a productivity app. They couldn't just design fixed layouts. They would need to establish parameters for how the AI would rearrange elements based on user behavior, generate workflow suggestions, and store individual preferences. But what happens when the AI starts suggesting interface patterns the team never designed? When it proposes new ways to organize information based on patterns it discovered in user behavior?
The AI becomes more than a co-creator of each user's unique interface experience; it becomes a creative partner contributing ideas to the design itself.
This requires moving from "prescriptive" to "collaborative" design. Instead of dictating every detail, you create frameworks and boundaries that guide the AI's behavior while remaining open to its creative contributions. You start designing with the expectation that the AI might teach you something about your users you didn't know.
Designing for AI Maturity
Here's something I wish I'd understood earlier: not everyone is ready for the same AI experience. Some users dive right into complex prompting, while others get confused by fundamental AI interactions. It's like the difference between giving a power tool to a carpenter versus someone who's never held a screwdriver.
The GenAI Compass framework (Schwartz, 2024) outlines this progression, but it can also be observed by simply watching people use AI. New users require extensive hand-holding, including clear examples, simple outputs, and gradual adaptation. They want to understand what's happening before they trust it. More confident users start experimenting, asking follow-up questions, and pushing boundaries. Expert users? They want control. They'll dig into settings, craft detailed prompts, and basically treat the AI like a collaborative partner.
The tricky part is that users progress through these stages at different speeds, and your AI material needs to adapt accordingly. A content creation tool might start someone off with simple templates, but six months later, that same person might want to write custom prompts and fine-tune outputs. If your AI material can't adapt to that progression, you lose them.
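One hedged way to sketch that adaptation is progressive disclosure keyed to usage signals. The signals, thresholds, and affordances below are invented for illustration; the real work is deciding what counts as evidence of growing maturity in your product:

```python
# A hedged sketch of progressive disclosure keyed to AI maturity;
# signals, thresholds, and affordance names are invented.
from dataclasses import dataclass

@dataclass
class UsageSignals:
    sessions: int
    custom_prompts_written: int
    followup_rate: float  # share of interactions with follow-up questions

def maturity_level(s: UsageSignals) -> str:
    if s.custom_prompts_written > 20 or s.followup_rate > 0.6:
        return "expert"
    if s.sessions > 10 and s.followup_rate > 0.3:
        return "intermediate"
    return "novice"

AFFORDANCES = {
    "novice": ["templates", "guided_examples", "single_output"],
    "intermediate": ["templates", "free_prompting", "output_variants"],
    "expert": ["free_prompting", "prompt_editing", "parameter_controls"],
}

def interface_for(s: UsageSignals) -> list[str]:
    # Re-evaluate on every session so the interface grows with the user.
    return AFFORDANCES[maturity_level(s)]

print(interface_for(
    UsageSignals(sessions=14, custom_prompts_written=3, followup_rate=0.4)
))
```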
This is where that temporal dimension really matters. You're not just designing for who your users are today; you're planning for who they'll become as they get more comfortable with AI.
The Transformed Design Process
From Linear to Iterative
Traditional product development has a predictable rhythm: research, concept, design, test, refine, ship. With AI materials, that rhythm goes out the window. Your "product" keeps learning after you ship it, which means you're never really done designing.
Take a social media platform with AI-powered content curation. You can't simply launch with fixed rules and call it a day. The AI material continues to learn from user engagement, adapting its understanding of what content is effective for different individuals. "Shipping" becomes just the beginning of the design process.
This messes with your planning in interesting ways. You have to think about not just what your product does when it ships but how it will grow and change through user interaction. That requires ongoing monitoring, adjustment, and sometimes admitting that the AI learned something you didn't expect (CareerFoundry, 2024).
Shifting from Deterministic to Probabilistic Design Thinking
Traditional design assumes deterministic outcomes: design X, get experience Y. AI materials introduce uncertainty. The same AI-infused product behaves differently with different users due to its adaptivity and generativity.
An AI-powered financial advisor app won't provide identical advice even to users with similar profiles. The AI material's adaptivity means each user gets personalized guidance based on their unique interaction history and preferences. You have to design for this variability.
Success metrics shift from binary (did it work as designed?) to distributional (how often did it produce good results, and how quickly did it adapt?). Design reviews discuss confidence intervals and behavior ranges rather than specific outcomes.
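In practice, that might look like running many trials instead of one and reporting a rate with an interval. A minimal sketch, where `evaluate_once()` is a placeholder for your own quality rubric or judge and the 83% pass rate is simulated:

```python
# A minimal sketch of distributional evaluation: instead of one
# pass/fail check, run many trials and report a success rate with a
# rough confidence interval. evaluate_once() is a placeholder.
import math
import random

def evaluate_once() -> bool:
    # Placeholder: did one AI interaction meet the quality bar?
    return random.random() < 0.83  # stand-in for a real rubric or judge

def success_rate_with_ci(trials: int = 500, z: float = 1.96):
    successes = sum(evaluate_once() for _ in range(trials))
    p = successes / trials
    margin = z * math.sqrt(p * (1 - p) / trials)  # normal-approx CI
    return p, (p - margin, p + margin)

rate, (low, high) = success_rate_with_ci()
print(f"success rate {rate:.1%}, 95% CI ({low:.1%}, {high:.1%})")
```

A design review over numbers like these asks "is 83% with that interval good enough for this context?" rather than "did it do the thing we drew in the mockup?"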
Cross-disciplinary Collaboration Requirements
Working with AI materials is humbling because you quickly realize how much you don't know. I can shape user experiences, but I need a data scientist to explain why the AI consistently produces unexpected outputs when users ask about pricing. I need a content designer to figure out why the AI's tone feels off for our audience. I need engineers who actually understand how to implement the memory and learning features I'm envisioning.
The reality is that effective AI material design requires product designers, data scientists, content designers, engineers, and researchers, all speaking the same language. The data scientist understands what the AI can actually do. The content designer knows how to communicate effectively. The researcher knows what users actually need.
The trick is developing shared vocabularies so we're not talking past each other. When I say "adaptivity," does the data scientist think I mean personalization algorithms or something else? When they say "training data," do I understand the implications for embedded values and bias?
These collaborations take more time and patience than traditional design projects, but they're necessary when your material can think (McKinsey, 2024).
Let me give you a few quick examples of how this material thinking applies across different spaces:
Healthcare AI: A diagnostic assistant doesn't just provide answers; its memory learns which types of questions each doctor asks most often, its adaptivity adjusts explanations based on specialty, and its transparency shows confidence levels for different diagnoses.
E-commerce: A product recommendation system's generativity creates novel product bundles, its memory tracks seasonal preferences, and its adaptivity learns whether you're browsing for yourself or shopping for gifts.
Education: A learning platform's memory tracks not just what you've learned but also how you learn best. Its adaptivity adjusts the pacing for different concepts, and its generativity creates practice problems tailored to your weak spots.
The point is that in each case, you're not just building features; you're shaping how the AI material behaves in that specific context.
Consider a music streaming service that decides to rethink its discovery experience by treating AI as a design material.
Instead of just recommending existing songs, they design the AI material to generate novel playlist concepts by combining unexpected musical elements. The system remembers not just what users like but also when and why, distinguishing between "focus music for work" and "emotional support during difficult times."
The team would design the AI material's responsiveness to life transitions without being overly reactive to temporary listening patterns. Rather than launching with fixed algorithms, they'd create the material to evolve its understanding of music and user preferences over time.
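One illustrative way to get that balance is to track the same listening signal with two exponential moving averages: a fast one that follows recent behavior and a slow one that holds the long-term baseline. A persistent gap between them suggests a genuine life transition rather than a passing mood. The decay rates and threshold below are invented for the sketch:

```python
# A sketch of "responsive without over-reactive": two exponential
# moving averages over the same signal. The fast EMA tracks recent
# listening; the slow EMA holds the long-term baseline. Decay rates
# and the threshold are invented for illustration.
def make_ema(alpha: float):
    state = {"value": None}
    def update(x: float) -> float:
        state["value"] = x if state["value"] is None else (
            alpha * x + (1 - alpha) * state["value"]
        )
        return state["value"]
    return update

fast = make_ema(alpha=0.3)   # reacts within days
slow = make_ema(alpha=0.02)  # reacts over months

def detect_shift(daily_signal: float, threshold: float = 0.25) -> bool:
    # daily_signal could be, say, the share of listening in a new genre.
    return abs(fast(daily_signal) - slow(daily_signal)) > threshold
```

A week of sad songs barely moves the slow average, so the system doesn't rewrite your profile; three months of them does, and the material adapts.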
The documentation wouldn't capture static screens but would define principles of musical discovery and methods for measuring success based on long-term engagement. This approach would transform the entire design process, requiring new skills in probabilistic thinking and collaborative AI development (LogRocket, 2024).
From Material to Creative Partner
Working with AI as a material does require new processes, new skills, and new ways of thinking about design. However, what surprised me most is that the material metaphor is just the beginning.
The more you work hands-on with AI, shaping it through prompting and watching it adapt and learn, the more you notice something happening that goes beyond material manipulation. You start recognizing when the AI is offering ideas rather than just following instructions. You begin to anticipate how it will interpret your creative intent while it seems to learn your style and preferences. That hands-on material understanding I described? It's actually teaching you the fundamentals of creative collaboration with non-human intelligence.
Think about it: the properties that make AI a unique material (memory, adaptivity, and generativity) are the same properties that enable creative partnership. An AI that remembers your design preferences, adapts to your creative style, and generates novel ideas based on your collaborative history isn't just sophisticated software. It's starting to function as a creative collaborator.
The iterative, probabilistic design process we've discussed prepares you for something bigger than managing uncertainty. It prepares you for the give-and-take of a creative partnership with an intelligence that works differently from you but can complement your thinking in unexpected ways. When I started that first AI project, I thought I was learning to use a new tool. By treating AI as a material, I learned to work with its properties and constraints. But through that process, I discovered I was actually learning something much more significant: how to engage in creative collaboration with artificial intelligence.
The design process becomes more collaborative not just because you're working with cross-functional teams but because the AI itself becomes a participant in the creative process. You set creative direction and constraints, but you remain open to where the AI's contributions might take the work.
This isn't about AI replacing human creativity; it's about expanding the possibilities of what creative collaboration can look like. The material understanding gives you the foundation to work with AI's capabilities respectfully and effectively. But it also prepares you for the next step: genuine creative partnership.
In Part 3, we'll explore what this means for the designer's role when your material becomes your creative collaborator. We'll dive deep into the craft of working with AI as a malleable, responsive material and how manipulating it requires its own set of skills and sensibilities. As we move toward "vibe designing," where we're not creating static mockups but using AI to build and iterate on ideas in real time, what does craft mean in this new paradigm? How do you maintain a creative vision while staying open to AI's contributions? How do you balance human judgment with AI's pattern recognition? And what new skills do designers need when their work involves ongoing creative dialogue with artificial intelligence?
We'll also address the concerns many designers have about AI diminishing craft and taking away jobs. But working with AI as a material, as putty that needs to be shaped, molded, and refined to achieve desired outcomes, is itself a craft. It requires taste, intuition, and skill to effectively manipulate this responsive material.
The shift from tool to material was just preparing us for the bigger transition: from working with AI to creating with AI.