From Personas to Personality: Engineering the Agent Voice
For AI, it seems traditional personas simply won’t cut it anymore. Personas were created in the world of UX and marketing for designing interfaces and experiences, not for designing active digital entities that think, respond, remember, adapt, and act on our behalf.
With AI agents, we are no longer just designing how a system behaves. We are designing who it becomes, and behavior immediately means far more than what you experience with most systems in use today.
Whether intentional or not, we are creating advanced personalities. And this goes far beyond tone of voice, style guidelines, or polite phrasing. We are increasingly designing deep psychometric frameworks that define agent behavior: temperament, assertiveness, empathy levels, tolerance for ambiguity, emotional framing, even ethical bias boundaries.
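To make that concrete, here is a minimal sketch of how such a framework might be encoded as configuration and compiled into behavioral instructions for a model. Everything in it (the PsychometricProfile class, the to_system_prompt method, the 0.0-1.0 trait scales) is an illustrative assumption, not a reference to any published framework.

```python
from dataclasses import dataclass

@dataclass
class PsychometricProfile:
    """Hypothetical psychometric configuration for an AI agent.

    Trait names and the 0.0-1.0 scales are illustrative assumptions,
    not drawn from any specific published framework.
    """
    temperament: str              # e.g. "calm", "energetic"
    assertiveness: float          # 0.0 = deferential, 1.0 = highly assertive
    empathy: float                # 0.0 = detached, 1.0 = highly empathetic
    ambiguity_tolerance: float    # willingness to act on incomplete information
    emotional_framing: str        # e.g. "neutral", "warm", "clinical"
    ethical_boundaries: list[str] # hard limits the agent must not cross

    def to_system_prompt(self) -> str:
        """Compile the profile into behavioral instructions for the model."""
        return (
            f"You have a {self.temperament} temperament. "
            f"Assertiveness: {self.assertiveness:.1f}/1.0. "
            f"Empathy: {self.empathy:.1f}/1.0. "
            f"Tolerance for ambiguity: {self.ambiguity_tolerance:.1f}/1.0. "
            f"Use {self.emotional_framing} emotional framing. "
            f"Never violate these boundaries: {'; '.join(self.ethical_boundaries)}."
        )

# Example: a patient, collaborative research assistant
profile = PsychometricProfile(
    temperament="calm",
    assertiveness=0.4,
    empathy=0.8,
    ambiguity_tolerance=0.7,
    emotional_framing="warm",
    ethical_boundaries=["do not fabricate sources", "flag uncertainty explicitly"],
)
print(profile.to_system_prompt())
```

The point of a structure like this is that personality stops being an afterthought in prompt wording and becomes an explicit, inspectable design artifact.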
Over the past two years, I’ve been fortunate to explore ideas that combine narrative design, game theory, and applied psychology to create AI agents that behave less like utilities and more like collaborators with intent, history, and a coherent identity. This includes designing internal identity structures that allow an agent to maintain continuity across time, contexts, and relationships. In many ways, much like a human professional does, or even a professional ‘dream team’ if you like.
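One rough way to picture such an identity structure is as persistent state that travels with the agent across sessions. The sketch below is my own simplification under that assumption; names like IdentityStore are invented for this example, and a real system would use something sturdier than a JSON file.

```python
import json
from pathlib import Path

class IdentityStore:
    """Hypothetical persistent identity state for an agent.

    Keeps the agent's self-model, relationship history, and commitments
    across sessions, so behavior stays continuous rather than stateless.
    """
    def __init__(self, path: str = "agent_identity.json"):
        self.path = Path(path)
        self.state = {"self_model": {}, "relationships": {}, "commitments": []}
        if self.path.exists():
            self.state = json.loads(self.path.read_text())

    def remember_person(self, name: str, note: str) -> None:
        """Accumulate relationship history with a specific collaborator."""
        self.state["relationships"].setdefault(name, []).append(note)

    def commit(self, promise: str) -> None:
        """Record a commitment the agent should honor in future sessions."""
        self.state["commitments"].append(promise)

    def save(self) -> None:
        self.path.write_text(json.dumps(self.state, indent=2))

# On each new session, the agent reloads this state instead of starting blank:
identity = IdentityStore()
identity.remember_person("Dana", "prefers concise summaries")
identity.commit("follow up on the Q3 draft next week")
identity.save()
```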
Interestingly, recent academic work supports this shift toward computational personality models:
– The PersLLM study by Huang et al. (2024) develops methods for training large language models to internalize stable, consistent personality traits using psychological frameworks. This research explores how personality can be encoded as a persistent internal structure within AI.
– Tudor et al. (2025) analyze how Big Five personality traits affect interaction in multi-agent ecosystems. Their work shows, for example, how agents with high Agreeableness communicate and collaborate more fluidly, but may also become susceptible to strategic manipulation: a fascinating trade-off (illustrated with a toy sketch after this list).
– Xu et al. (2024) present evidence that personality attributes can emerge organically during agent interaction, even when not explicitly designed. In other words, agents can “grow” personality traits through accumulated conversational and contextual history.
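As a toy illustration of that Agreeableness trade-off (my own simplification, not the method from Tudor et al.), imagine a Big Five trait vector modulating how readily an agent concedes under pressure in a negotiation:

```python
from dataclasses import dataclass

@dataclass
class BigFive:
    """Big Five trait vector, each dimension scored 0.0-1.0."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def concession_step(traits: BigFive, pressure: float) -> float:
    """How much the agent yields per round, given counterpart pressure (0-1).

    High Agreeableness smooths collaboration, but it also makes the agent
    easier to push around strategically, which is exactly the trade-off.
    """
    return pressure * (0.2 + 0.8 * traits.agreeableness)

cooperative = BigFive(0.6, 0.7, 0.5, 0.9, 0.3)
stubborn = BigFive(0.6, 0.7, 0.5, 0.1, 0.3)
print(concession_step(cooperative, 0.5))  # yields more readily
print(concession_step(stubborn, 0.5))     # holds its position
```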
The papers above (linked below) come from other scholars whose work has helped me expand my understanding of the potential here, and they highlight a key evolution:
We are shifting from persona design as external touchpoint guidance to personality engineering as internal behavioral architecture.
This raises meaningful design questions:
Are we prescribing personality, or letting it emerge?
Are we training agents to mimic personalities, or to hold stable psychometric structures?
Are we designing compliance, or co-creating collaboration?
These decisions shape far more than outputs. They shape how humans trust, confide in, cooperate with, and emotionally relate to AI. I’m not suggesting you are going to ‘fall in love’ with your AI assistant, but you will relate to it very differently than to your average internet banking interface.
Looking ahead, I would love to contribute to a role or research environment where personality-driven agent design supports real-world AI integration: not agents as mere tools, but as adaptive partners that evolve through use and interaction.
If this resonates with your organization or research group, feel free to reach out or share my profile with someone who might be exploring these frontier questions.
Here are the research studies mentioned above:
https://lnkd.in/eJxsEJGx
https://lnkd.in/exGy5gnC
https://lnkd.in/ehi3X8YW
