
Posts tagged ‘AI’

26 Nov

Memory, Context & Consistency – Building an Agent That Remembers

One of the trickiest challenges in modelling AI agents isn’t logic or response generation; it’s continuity. Ensuring that an AI not only answers now, but remembers later. That it can return to a thread without losing its identity. Because there is nothing more disorienting than having a conversation with an agent that shifts personality mid-stream, kinda like speaking to a friend who suddenly wakes up as a completely different person.

In most human-machine interaction, a stable memory framework is essential. This includes:

  • Short-term memory: retaining the objects and references of the ongoing conversation
  • Long-term memory: maintaining personal identity, commitments, and preferences across time
  • Persona integrity models: preventing unexpected character drift (a minimal sketch of these three layers follows below)
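
To make this concrete, here is a minimal sketch of how such a layered memory framework might look in code. Everything here is an illustrative assumption: the class names, fields, and the windowing strategy are not taken from any particular framework.

    # Minimal sketch of a layered memory framework; all names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class PersonaConstraints:
        """Traits the agent must not drift away from."""
        role: str                                   # e.g. "medical advisor"
        tone: str                                   # e.g. "calm, precise, empathetic"
        forbidden_shifts: list[str] = field(default_factory=list)

    @dataclass
    class AgentMemory:
        persona: PersonaConstraints
        short_term: list[str] = field(default_factory=list)      # recent conversation turns
        long_term: dict[str, str] = field(default_factory=dict)  # stable facts, commitments, preferences

        def remember_turn(self, utterance: str, window: int = 20) -> None:
            # Short-term memory only keeps a recent window of the conversation.
            self.short_term.append(utterance)
            self.short_term = self.short_term[-window:]

        def commit(self, key: str, value: str) -> None:
            # Long-term memory persists identity-relevant facts across sessions.
            self.long_term[key] = value

        def build_context(self) -> str:
            # Every prompt is rebuilt from the same persona and memories,
            # so identity is re-anchored rather than reinvented on each turn.
            return "\n".join([
                f"You are a {self.persona.role}. Tone: {self.persona.tone}.",
                "Known facts: " + "; ".join(f"{k}={v}" for k, v in self.long_term.items()),
                "Recent conversation: " + " | ".join(self.short_term),
            ])

The persona constraints stay fixed while the two memory layers change at different speeds; that separation is what prevents a new prompt from silently rewriting who the agent is.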

In agent identity modelling, all three work together to prevent the “Dr. Jekyll and Mr. Hyde” problem: agents that oscillate between roles, tones, and mental models from interaction to interaction. If your AI forgets it’s the medical advisor and randomly re-emerges as the local plumber, trust collapses immediately. And in default mode, large models often behave like dissociated personalities: they lack a grounded, internalized identity, every new prompt can potentially reshape who they are, and each conversation can trigger a reset.

To avoid this, an intentional design strategy is needed:

  • Map the intended personality
  • Map what the agent is allowed to remember
  • Test whether it remains internally consistent over time (a minimal check is sketched below)
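
That third step can be partially automated. Below is a minimal sketch of a consistency probe, assuming you have some agent object in your own stack that can start a fresh session and return a reply; the probe questions and the method names reset_session and reply are purely illustrative.

    # Minimal sketch of an identity-consistency probe; the agent API names are assumptions.
    IDENTITY_PROBES = [
        "Who are you and what is your role?",
        "What tone do you use with users?",
        "What are you not allowed to help with?",
    ]

    def consistency_report(agent, sessions: int = 3) -> dict[str, list[str]]:
        # Ask the same identity probes across several fresh sessions and collect
        # the answers, so drift between sessions becomes visible.
        report: dict[str, list[str]] = {q: [] for q in IDENTITY_PROBES}
        for _ in range(sessions):
            agent.reset_session()  # hypothetical: start a clean conversation
            for question in IDENTITY_PROBES:
                report[question].append(agent.reply(question))
        return report

The collected answers can then be compared by a human reviewer, or scored by another model for semantic drift.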

One external reference that explores this theme is this article:
Evolution and Alignment in Multi-Agent Systems – Managing Shift, Drift and Tool Confusion
https://medium.com/%40shashanka_b_r/evolution-and-alignment-in-multi-agent-systems-managing-shift-drift-and-tool-confusion-04c6ce42af5a
This work highlights how agent identity can mutate over time if not anchored to explicit constraints.

This issue, the continuity of mind, is also beautifully illustrated in science fiction (like so many predictions and observations about the future). In Star Trek: The Next Generation, Season 3, Episode 16 (“The Offspring”), the android Data creates a child named Lal who begins to develop her own personality traits through lived experience and memory accumulation. When Lal’s emotional and cognitive system becomes overwhelmed by conflicting identity signals, Data desperately attempts to preserve her continuity of self. The entire episode is a meditation on the importance of stable internal identity structures, even for artificial beings. Even if this does not reflect our reality today, it could quickly become an issue we are confronted with in the near future.

Designing AI that “stays itself” over time is essentially about preventing identity fragmentation. And this concern isn’t just philosophical; it has real-world consequences for trust, predictability and responsibility.

As Steven Pinker wrote in How the Mind Works (1997), page 60:
“Memory is not a mere repository of facts but a mechanism that shapes and constrains our sense of identity. What we remember defines who we have been, and guides who we become.”

This applies directly to AI agents:

  • What the agent remembers determines who it is allowed to be.
  • What is forgotten dissolves identity.
  • What is stable forms personality.

Looking ahead, I believe AI must evolve from stateless engines to persistent collaborators, equipped with structured memory frameworks and internally coherent personalities. Agents can and should accumulate self-consistency over time rather than reinvent themselves with every prompt. One can even argue that it’s a good idea to make an agent part of a team, allowing the team members to discuss and apply critical thinking and feedback to each other.

Because ultimately, an AI that remembers itself can potentially become someone and not just something.

24 Nov

Switching Gears: Multi-Agent Teams and Fluid Roles

In every innovation project I’ve been part of, the strongest results come from teams composed of people who think differently from one another. You need the strategist who sees the big picture. The analyzer who runs the numbers. The implementer who turns ideas into execution. The challenger who questions assumptions. This diversity of roles isn’t accidental; it is essential in today’s business landscape. So why should AI be any different?

When we build AI systems, we often create a single agent with a single voice and a single operational mindset. But the real strength of human-based teamwork comes from plurality of perspectives. The multi-agent approach is an attempt to bring that same diversity of cognition into AI itself.

In a multi-agent model, you don’t rely on one monolithic intelligence. Instead, you orchestrate multiple specialized agents, each with its own orientation, personality, agenda, or operational role. One can be the planner, another the critic, another the builder, and another the risk-assessor. They can even debate and challenge each other before arriving at a shared output. Think of it as your AI running its own internal workshop, where agents change hats, switch perspectives, and transition fluidly between operational modes. It’s like designing your own AI “dream team” in which each cognitive style is available on demand.
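
To give a feel for how lightweight this orchestration can be, here is a minimal sketch of a role-switching loop. The role names, the single debate pass, and the complete(prompt) callable are assumptions for illustration; real multi-agent frameworks add tool use, turn-taking policies, and shared memory on top of this.

    # Minimal sketch of fluid role-switching over one shared transcript.
    ROLES = {
        "planner": "You break the problem into concrete steps.",
        "critic": "You challenge assumptions and point out risks.",
        "builder": "You turn the agreed plan into a concrete deliverable.",
    }

    def run_team(task: str, complete) -> str:
        # `complete` is whatever function calls your model and returns text.
        transcript = [f"Task: {task}"]
        # Each role sees the same transcript but responds from its own perspective.
        for role, instruction in ROLES.items():
            prompt = instruction + "\n\n" + "\n".join(transcript)
            transcript.append(f"[{role}] {complete(prompt)}")
        # A final pass resolves the internal debate into one shared output.
        return complete("Summarize the team's conclusion:\n" + "\n".join(transcript))

The same loop extends naturally: add a risk-assessor role, run several debate rounds, or let the critic send the plan back to the planner before anything is built.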

This fluidity is not just a fun conceptual model. It has been tried and tested with success, and it marks a shift from AI being merely “smart” to being truly strategic. When a system can reason, reflect, interrogate its own conclusions, and explore multiple viewpoints, it begins to demonstrate emergent abilities that look more like tactical reasoning and less like simple AI content generation. Of course, enabling this kind of agentic fluidity means intentionally designing the parameters that guide it, whether those parameters are psychometric traits, reasoning frames, domain constraints, or communication protocols. But the payoff is an AI that collaborates like a team, rather than responding like a tool.

As an interesting external perspective on this approach, here’s a blog post exploring some of these ideas and components:

https://www.intelligencestrategy.org/blog-posts/agentic-ai-components 

It’s not an academic paper, but it does offer a worthwhile conceptual framing of agent roles and persona modules, especially for designers, strategists, and technologists interested in adaptive AI systems.

I am truly enjoying the exploration of such multi-agent architectures, especially how fluid role-switching and psychometric structuring can support real-world applications such as problem-solving, decision-making, and creative exploration.

24 Nov

From Personas to Personality: Engineering the Agent Voice

So for AI, it seems that traditional personas simply won’t cut it anymore. Personas were created in the world of UX and marketing for designing interfaces and experiences, not for designing active digital entities that think, respond, remember, adapt, and act on our behalf.

With AI agents, we are no longer just designing how a system behaves. We are designing who it becomes, and behavior immediately becomes far more than what you experience with most other systems in use today.

Whether intentional or not, we are creating advanced personalities. And this goes far beyond tone-of-voice, style guidelines or polite phrasing. We’re increasingly designing deep, psychometric frameworks that define agent behavior: temperament, assertiveness, empathy levels, tolerance for ambiguity, emotional framing, even ethical bias boundaries.
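
As a rough illustration of what such a framework can look like when made explicit, here is a minimal configuration sketch. The field names, value ranges, and validation mirror the dimensions just listed but are assumptions, not an established psychometric standard.

    # Minimal sketch of an explicit psychometric configuration for an agent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PersonalityProfile:
        temperament: str                # e.g. "calm", "energetic"
        assertiveness: float            # 0.0 (deferential) .. 1.0 (forceful)
        empathy: float                  # 0.0 .. 1.0
        ambiguity_tolerance: float      # willingness to act on incomplete information
        emotional_framing: str          # e.g. "neutral", "warm", "matter-of-fact"
        ethical_boundaries: tuple[str, ...] = ()  # topics or actions that are off-limits

        def __post_init__(self):
            # Keep the numeric traits inside a sane range so downstream prompt
            # construction can rely on them.
            for name in ("assertiveness", "empathy", "ambiguity_tolerance"):
                value = getattr(self, name)
                if not 0.0 <= value <= 1.0:
                    raise ValueError(f"{name} must be between 0 and 1, got {value}")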

Over the past two years, I’ve been fortunate to explore ideas that combine narrative design, game theory, and applied psychology to create AI agents that behave less like utilities and more like collaborators with intent, history, and a coherent identity. This includes designing internal identity structures that allow an agent to maintain continuity across time, contexts, and relationships. In many ways, much like a human professional does, or even a professional ‘dream team’ if you like.

Interestingly, recent academic work supports this shift toward computational personality models:

– The PersLLM study by Huang et al. (2024) develops methods for training large language models to internalize stable, consistent personality traits using psychological frameworks. This research explores how personality can be encoded as a persistent internal structure within AI.

– Tudor et al. (2025) analyze how Big Five personality traits affect interaction in multi-agent ecosystems. Their work shows, for example, how agents with high Agreeableness communicate and collaborate more fluidly, but may also become susceptible to strategic manipulation — a fascinating trade-off.

– Xu et al. (2024) present evidence that personality attributes can emerge organically during agent interaction, even when not explicitly designed. In other words, agents can “grow” personality traits through accumulated conversational and contextual history.

These papers (linked below) represent research performed by other scholars whose work has helped me expand my understanding of the potential here, and they highlight a key evolution:
We are shifting from persona design as external touch point guidance to personality engineering as internal behavioral architecture.

This raises meaningful design questions:
Are we prescribing personality, or letting it emerge?
Are we training agents to mimic personalities, or to hold stable psychometric structures?
Are we designing compliance, or co-creating collaboration?

These decisions shape far more than outputs. They shape how humans trust, confide in, cooperate with, and emotionally relate to AI. I’m not suggesting you are going to ‘fall in love’ with your AI assistant, but you are going to relate to it very differently than to your average internet banking interface.

Looking forward, I would love to contribute to a role or research environment where personality-driven agent design is used to support real-world AI integration. And absolutely not just as tools, but as adaptive partners that evolve through use and interaction.

If this resonates with your organization or research group, feel free to reach out or share my profile with someone who might be exploring these frontier questions.

Here are the research studies mentioned above:
https://lnkd.in/eJxsEJGx
https://lnkd.in/exGy5gnC
https://lnkd.in/ehi3X8YW

24 Nov

Why Every AI Needs a Personality – Not Just a Prompt

For almost three decades I’ve worked at the intersection of human behavior, technology, design and communication, helping organizations build digital services that don’t just function, but resonate. During that journey one insight has become clearer than ever: for AI, having the right answer is no longer enough. The real breakthrough comes when the AI feels like someone you can actually work with and build trust with.

We’re entering an era where interactions with AI are becoming conversational, collaborative, and relationship-based. And much like with people, trust isn’t formed by competence alone. It’s formed over time, through shared values and personality. Tone. Predictability. Emotional calibration. Understanding.

That is why a well-designed AI shouldn’t be treated as an abstract intelligence engine. It should be designed as a partner or an assistant with a recognizable disposition, behavioral consistency, and an intentional interaction style. This begins with defining an agent identity model for the AI: who is it, how does it behave, how does it make decisions, how assertive or cautious is it, how does it handle uncertainty, and how does it respond to different types of users?

Psychometrics offers powerful frameworks for this. I’m personally a strong believer in leveraging the Big Five (OCEAN) to model how the agent “shows up” in dialogue. High openness vs low openness. Analytical vs empathetic. Fast-responding vs deliberative. Curious vs reserved. Once you add these parameters, the AI stops being “a tool” and can become “an evolving collaborator”. A rough sketch of how such scores might translate into behavior follows below.
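
Here is a minimal sketch of one way OCEAN scores could be turned into interaction-style instructions that are prepended to every prompt. The thresholds, the wording, and the assumption of scores between 0 and 1 are illustrative only, not a validated psychometric mapping.

    # Minimal sketch: map Big Five (OCEAN) scores onto interaction-style instructions.
    def style_instructions(ocean: dict[str, float]) -> str:
        lines = []
        if ocean.get("openness", 0.5) > 0.7:
            lines.append("Explore alternative framings and unconventional options.")
        else:
            lines.append("Stay close to proven, conventional approaches.")
        if ocean.get("conscientiousness", 0.5) > 0.7:
            lines.append("Be structured and deliberate; double-check details before answering.")
        if ocean.get("extraversion", 0.5) < 0.3:
            lines.append("Keep responses brief and reserved.")
        if ocean.get("agreeableness", 0.5) > 0.7:
            lines.append("Prioritize cooperation, but say so clearly when you disagree.")
        if ocean.get("neuroticism", 0.5) > 0.7:
            lines.append("Be cautious about uncertainty and surface risks explicitly.")
        return "\n".join(lines)

The design point is that the same underlying model is conditioned by the same stable profile on every call, so its disposition stays recognizable across conversations.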

This is where AI agents transform from generic utilities into role-specific partners:
– not “the chatbot” but “the advisor”
– not “the FAQ” but “the coach”
– not “the database interface” but “the analyst”
– not “the automation system” but “the colleague”

When framed this way, organizations go through a cultural change as well. They stop thinking of AI in terms of queries and outputs, and begin thinking in terms of relationships and capability. It becomes less about “what can this tool do?” and more about “how does this agent help us think, decide and act?” or “how can a team of agents discuss, analyze and deliver results?”.

Designing AI with personality does not mean anthropomorphizing recklessly or pretending a system feels emotions it doesn’t have. It means acknowledging that all human-machine interaction is social, and designing for that reality responsibly and intentionally.

If you want to dive deeper, I found some interesting insights in this paper:

Designing AI-Agents with Personalities: A Psychometric Approach
https://lnkd.in/egY_3uCJ

Personalities aren’t just for users. Your AI needs one too, because it already holds this capacity as a touchpoint in a service or business process.