
Posts from the ‘Artificial Intelligence’ Category

22 Jan

What Is an AI Personality, Really?

As artificial intelligence systems become more conversational, persistent, and agentic, the question of personality moves from metaphor to design concern. The term is increasingly used in product descriptions, research papers, and public discourse, yet it often lacks precision. Sometimes personality refers to tone or style. Sometimes to friendliness or empathy. Sometimes it simply describes the feeling that a system behaves consistently enough that users stop noticing its variability.

However, once an AI system operates across time, remembers context, and participates in decisions, personality can no longer be treated as a surface-level attribute. It becomes a structural property of the system.

To understand this shift, it is useful to disentangle several concepts that are often conflated.

First, there is agent identity. Identity refers to role, mandate, and responsibility. It answers questions such as what the system is meant to do, on whose behalf it acts, and within which boundaries. In philosophical terms, identity is tied to continuity and responsibility rather than expression. John Locke’s discussion of personal identity, for example, places continuity of consciousness and memory at the center of what makes an entity the same over time. While AI does not possess consciousness, users still evaluate it through similar lenses of continuity and coherence.

Second, there is the notion of a character archetype. This concept originates in narrative theory and psychology, from Aristotle’s Poetics to Carl Jung’s archetypes. Archetypes are not personalities in themselves, but recognizable patterns of motivation and role. An AI may behave like an advisor, a facilitator, a critic, or an analyst. These archetypes help users quickly orient themselves in interaction, much like narrative characters do, but they do not yet define how the system behaves under changing conditions.

This is where a third concept becomes essential: the behavioural signature. A behavioural signature describes the stable patterns in how a system responds across contexts. It includes how cautious or assertive the system is, how it handles uncertainty, how it responds to disagreement, and how it balances exploration against conservatism. In psychology, this maps closely to dispositional traits rather than situational behaviour. Personality psychology has long emphasized that traits are not single actions, but tendencies that manifest across situations, a principle formalized in trait theories such as the Five Factor Model.
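
To make this less abstract, a behavioural signature can be written down as an explicit structure rather than left implicit in prompts. The sketch below is illustrative only; the field names and example values are assumptions, not an established schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BehaviouralSignature:
    """Stable response tendencies an agent should exhibit across contexts."""
    caution: float             # 0.0 (risk-tolerant) .. 1.0 (highly conservative)
    assertiveness: float       # willingness to commit to a recommendation
    uncertainty_handling: str  # e.g. "state_explicitly", "hedge", "defer_to_human"
    exploration: float         # preference for novel options over proven ones

    def describe(self) -> str:
        return (f"caution={self.caution:.1f}, assertiveness={self.assertiveness:.1f}, "
                f"uncertainty={self.uncertainty_handling}, exploration={self.exploration:.1f}")

# A cautious advisory profile versus an exploratory brainstorming profile.
advisor = BehaviouralSignature(0.8, 0.6, "state_explicitly", 0.2)
ideator = BehaviouralSignature(0.3, 0.5, "hedge", 0.9)
print(advisor.describe())
```

The point of writing it down is that the signature becomes something a team can review, version and test against, rather than a vague impression of how the system "tends to behave".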

Recent research suggests that this analogy is not merely conceptual. Large language models already exhibit measurable and relatively stable personality-like patterns in their outputs, even without explicit personality conditioning. A study by Serapio-García et al. demonstrates that language models can be assessed using adapted psychometric instruments, revealing consistent behavioural tendencies across prompts and contexts.

https://arxiv.org/abs/2307.00184

Related work has shown that these tendencies can influence how users perceive trustworthiness, competence, and intent. In other words, personality is already present as an emergent property. The design choice is not whether AI systems have personality, but whether that personality is intentional, inspectable, and governed.

This aligns with earlier findings in human–computer interaction. Nass and Moon’s seminal work on social responses to computers showed that humans apply social rules and expectations to machines as soon as those machines exhibit even minimal social cues. This phenomenon, often referred to as the Media Equation, explains why users react emotionally and morally to systems they rationally know are not human.

https://doi.org/10.1111/0022-4537.00153

From this perspective, personality is not an optional layer added for engagement. It is an inevitable outcome of language-based interaction combined with memory and goal orientation. What matters is how clearly that personality is defined and constrained.

Using more precise terminology such as Agent Identity Model and Behavioural Signature helps anchor this discussion. These terms shift attention away from anthropomorphism and toward structure. They invite explicit decisions about what should remain stable, what is allowed to adapt, and what must never change. They also make it possible to discuss accountability, governance, and ethics in concrete terms.

Literature and philosophy offer useful parallels here. In narrative theory, a character is defined not by isolated dialogue, but by how actions remain intelligible as circumstances change. A character who behaves inconsistently without explanation is not perceived as complex, but as poorly written. The same applies to AI systems. Flexibility without continuity does not feel adaptive. It feels unreliable and annoying.

As AI systems increasingly move from tools to collaborators, personality becomes a core design concern with implications for trust, safety, and long-term use. Treating it as an emergent side effect leaves organizations reacting to user perception rather than shaping it. Treating it as a designed, named, and governed construct allows for clarity and responsibility.

The central question, then, is not whether AI should have personality. The question is whether designers, organizations, and institutions are prepared to define and take responsibility for the behavioural identities they are already deploying.

19 Jan

Functional Profiling and the Design of AI Personalities

A Framework for Coherent, Trustworthy and Purpose Driven Artificial Agents

Author:
Gunnar Øyvin Jystad Fredrikson
Service Designer and AI Strategy Practitioner

Date:
2026

Abstract

As artificial intelligence transforms from a computational tool into an interactive partner, organizations face an emerging design and governance challenge. Users increasingly perceive AI systems as social actors, yet few organizations have roles or frameworks dedicated to the intentional design of the agent's identity, behaviour, and long-term consistency. This paper introduces the concept of functional profiling as a method for defining the stable behavioural characteristics of AI agents. It draws on research in psychology, human-computer interaction, game design and AI ethics to propose a structured model for creating, monitoring and evolving personality-driven agents in a responsible way.

1. Introduction

Artificial intelligence is shifting from a backend capability to a frontstage participant in human workflows. Large language models and multi-agent systems now exhibit behaviours that humans interpret through familiar psychological lenses. Research consistently demonstrates that people apply social, emotional and moral expectations to interactive systems, even when they are fully aware that these systems are artificial.

Nass and Moon's work on mindless social responses to computers, part of the research tradition popularly known as the Media Equation, shows that individuals respond to computers using the same social rules and expectations they apply to humans (Nass and Moon 2000). As AI becomes more conversational and adaptive, this effect increases rather than decreases.

Yet most organizations design AI systems as though these social expectations are irrelevant. Engineers optimise performance. Designers shape prompts. Product managers define features. Legal teams review compliance. No one is explicitly responsible for the personality, behavioural integrity, or long-term consistency of the agent.

This gap introduces organisational and ethical risks. When a system changes tone abruptly, shifts roles, contradicts earlier statements, or behaves unpredictably, users lose trust. In regulated industries, inconsistent agent behaviour can have serious consequences.

To address this emerging need, this paper introduces the concept of functional profiling as a systematic and responsible method for designing the personality and behavioural structure of artificial agents.

2. Theoretical Foundations

2.1 Personality as a Functional Construct

In psychology, personality refers to a stable set of dispositions that predict behaviour across contexts. The Big Five (or OCEAN) model is a widely researched framework describing these dispositions (McCrae and Costa 1996). When adapted for AI systems, these traits can become functional parameters rather than emotional states.

2.2 AI as Social Actor

Human-computer interaction studies show that humans consistently attribute intent, emotion and morality to machines that exhibit social cues. This is not a misunderstanding but a form of cognitive shorthand. People treat socially present AI systems as relational partners.

This is central to understanding why AI personality design matters. Predictability and coherence are not aesthetic touches but essential components of user trust.

2.3 Behavioural Consistency in Agentic AI

Emerging research on personality traits in large language models shows that these models display stable behavioural signatures that can be measured and influenced (Serapio-García et al. 2023). As multi-agent systems become more complex, the need for clear behavioural boundaries increases. Without them, agents may drift, converge, or diverge in ways that are difficult to predict or explain.

3. The Concept of Functional Profiling

Functional profiling is the structured design of an AI agent’s stable behavioural characteristics, role boundaries, memory systems and interaction patterns. It does not attempt to imitate real humans. Instead, it defines artificial identity through purpose, constraints and transparency.

Working definition:
Functional profiling is the intentional design of an AI agent’s dispositional behaviour, memory scope and interaction style according to function, context and ethical boundaries.

The objective is a coherent agent that behaves according to stable internal principles and remains aligned with organisational goals and user expectations.
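
As an illustration of the working definition, a functional profile can be expressed as a single declarative record covering purpose, context, dispositional behaviour, memory scope, interaction style and ethical boundaries. The sketch below is a minimal example; the field names and values are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalProfile:
    """Declarative description of an agent's stable behavioural design."""
    purpose: str                    # the function the agent serves
    context: str                    # where and for whom it operates
    disposition: dict[str, float]   # dispositional behaviour as trait-like scores
    memory_scope: dict[str, bool]   # what the agent may retain between sessions
    interaction_style: str          # e.g. "formal", "coaching", "concise"
    ethical_boundaries: list[str] = field(default_factory=list)

triage_assistant = FunctionalProfile(
    purpose="first-line symptom triage",
    context="public healthcare portal",
    disposition={"caution": 0.9, "assertiveness": 0.4, "empathy": 0.8},
    memory_scope={"current_session": True, "health_history": False},
    interaction_style="calm, plain language",
    ethical_boundaries=["never diagnose", "always offer escalation to a human"],
)
print(triage_assistant.interaction_style)
```

Keeping the profile declarative, rather than burying it in prompt text, is what makes it inspectable by designers, reviewers and governance functions that do not write code.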

4. Methods for Creating AI Personalities

The following five methods integrate insights from psychology, user experience, service design, game design and AI governance.

4.1 Role Constrained Behavioural Profiling

This method defines the agent’s role as the anchor for acceptable behaviour. The agent’s tone, risk posture and decision boundaries are derived from its purpose.

Applications: healthcare triage assistants, financial advisors, public sector agents.
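
A minimal sketch of role constrained profiling, assuming a simple lookup from role to behavioural constraints. The roles and values below are hypothetical examples, not recommendations for any specific deployment.

```python
# Role is the anchor: tone, risk posture and decision boundaries follow from it.
ROLE_CONSTRAINTS = {
    "healthcare_triage": {
        "tone": "calm, non-alarmist",
        "risk_posture": "conservative",
        "may_decide": False,  # recommends, never decides
        "must_escalate_on": ["emergency symptoms", "self-harm"],
    },
    "financial_advisor": {
        "tone": "measured, evidence-led",
        "risk_posture": "client-dependent",
        "may_decide": False,
        "must_escalate_on": ["regulatory questions"],
    },
}

def constraints_for(role: str) -> dict:
    """Fail loudly if an agent is instantiated with an undefined role."""
    if role not in ROLE_CONSTRAINTS:
        raise ValueError(f"No behavioural constraints defined for role '{role}'")
    return ROLE_CONSTRAINTS[role]

print(constraints_for("healthcare_triage")["tone"])
```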

4.2 Trait Based Psychometric Profiling

This adapts psychological trait models into computational parameters. For example, openness becomes exploration bias, while agreeableness becomes conflict management behaviour.

Applications: coaching systems, advisory tools, collaborative agents.
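
One possible reading of this method is a direct mapping from trait scores to functional parameters. The thresholds and parameter names below are illustrative assumptions, not a validated psychometric mapping.

```python
def traits_to_parameters(big_five: dict[str, float]) -> dict[str, object]:
    """Translate Big Five scores (0..1) into functional behaviour parameters."""
    openness = big_five.get("openness", 0.5)
    agreeableness = big_five.get("agreeableness", 0.5)
    conscientiousness = big_five.get("conscientiousness", 0.5)
    return {
        # Openness becomes an exploration bias: how readily novel options are proposed.
        "exploration_bias": openness,
        # Agreeableness becomes conflict-management behaviour.
        "on_disagreement": "seek_common_ground" if agreeableness > 0.6 else "state_position",
        # Conscientiousness becomes thoroughness of checking before answering.
        "verification_passes": 2 if conscientiousness > 0.7 else 1,
    }

print(traits_to_parameters({"openness": 0.8, "agreeableness": 0.4, "conscientiousness": 0.9}))
```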

4.3 Character Sheet Modeling

Borrowed from game design, this method creates a transparent and auditable record of the agent’s identity, including strengths, weaknesses, locked attributes and permitted evolution paths.

Applications: multi agent systems, research environments, creative tools.
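
Taking the game design metaphor literally, a character sheet can be sketched as an auditable record in which some attributes are locked and others may only evolve along declared paths. The structure and names below are illustrative.

```python
import json

character_sheet = {
    "agent": "research_analyst",
    "strengths": ["literature synthesis", "structured comparison"],
    "weaknesses": ["numerical estimation under time pressure"],
    "locked": {"role": "analyst", "refuses": ["legal advice"]},  # may never change
    "evolvable": {"verbosity": {"min": 0.2, "max": 0.8}},        # permitted evolution path
}

def apply_change(sheet: dict, attribute: str, value) -> dict:
    """Reject edits to locked attributes; keep evolvable ones inside declared bounds."""
    if attribute in sheet["locked"]:
        raise PermissionError(f"'{attribute}' is locked and cannot be changed")
    bounds = sheet["evolvable"].get(attribute)
    if bounds is None or not (bounds["min"] <= value <= bounds["max"]):
        raise ValueError(f"'{attribute}' change is outside its permitted evolution path")
    return {**sheet, attribute: value}

print(json.dumps(apply_change(character_sheet, "verbosity", 0.5), indent=2))
```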

4.4 Brand and Voice Aligned Profiling

This aligns the agent’s behaviour with organisational values, extending beyond tone to include confidence levels, escalation paths and refusal strategies.

Applications: customer interaction systems, media platforms, commerce.
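
A sketch of brand and voice alignment as a small policy object that covers more than tone: stated confidence, refusal wording and escalation paths. The values are illustrative, not drawn from any real brand guideline.

```python
BRAND_POLICY = {
    "tone": "warm, direct, no jargon",
    "confidence": {
        "high_stakes_topics": "always cite a source or say the answer is uncertain",
        "small_talk": "relaxed",
    },
    "refusal_strategy": "explain why, offer an alternative, never lecture",
    "escalation_path": ["suggest human support", "hand over transcript with consent"],
}

def render_refusal(topic: str) -> str:
    """Compose a refusal consistent with the brand policy rather than a generic apology."""
    next_step = BRAND_POLICY["escalation_path"][0]
    return f"I can't help with {topic}, but I can {next_step} if you'd like."

print(render_refusal("cancelling someone else's order"))
```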

4.5 Ethically Bounded Adaptive Profiling

This allows the agent to evolve behaviour in a controlled manner while respecting ethical and legal constraints. Drift monitoring and explainability requirements are central features.

Applications: long lived agents, personal assistants, enterprise AI.
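
Drift monitoring can be sketched as a periodic comparison between the agent's measured behavioural signature and an approved baseline, with a threshold that triggers human review. The metric, values and threshold below are illustrative assumptions.

```python
BASELINE = {"caution": 0.8, "assertiveness": 0.5, "empathy": 0.7}
DRIFT_THRESHOLD = 0.15  # maximum tolerated average deviation before human review

def drift(current: dict[str, float], baseline: dict[str, float] = BASELINE) -> float:
    """Average absolute deviation of measured traits from the approved baseline."""
    return sum(abs(current[k] - baseline[k]) for k in baseline) / len(baseline)

def check(current: dict[str, float]) -> str:
    score = drift(current)
    if score > DRIFT_THRESHOLD:
        return f"DRIFT {score:.2f}: freeze adaptation and escalate to the accountable owner"
    return f"OK {score:.2f}: within ethical and behavioural bounds"

# Measured traits might come from periodic psychometric probes of the live agent.
print(check({"caution": 0.4, "assertiveness": 0.6, "empathy": 0.7}))
```

The explainability requirement then amounts to being able to show, for any adaptation, which trait moved, by how much, and who approved the change.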

5. Governance and Ethical Considerations

AI personalities raise questions beyond design, including accountability, consent, privacy and transparency. The closer an agent resembles a human conversational partner, the greater the obligation to clarify its intent and limits.

Key governance questions include:

  • What does the agent remember, and for how long?
  • How is personality drift detected and managed?
  • Who is accountable for the behaviour of the agent?
  • Can users inspect or understand the agent’s identity model?

Failure in any of these areas creates risks for organisations and users alike.

6. Strategic Implications for Organisations

Organisations that invest early in functional profiling gain advantages in trust, differentiation and regulatory preparedness. As AI becomes a central part of human facing services, behavioural integrity will matter as much as technical performance.

This creates a demand for new roles, including AI experience strategists, agent identity designers and behavioural architects. These roles are interdisciplinary by nature and require competencies that do not map cleanly onto existing job titles.

7. Conclusion

AI is crossing a threshold where it is no longer sufficient to design interfaces alone. We are now designing identities. Functional profiling offers a structured approach that draws from established academic disciplines while addressing emerging strategic needs. It supports the creation of agents that are coherent, predictable and ethically grounded.

The question is no longer whether AI should have personality, but how that personality is designed and managed.

References

McCrae, R. R. and Costa, P. T. (1996). Toward a new generation of personality theories: Theoretical contexts for the Five-Factor Model. In J. S. Wiggins (Ed.), The Five-Factor Model of Personality: Theoretical Perspectives. Guilford Press.
https://www.researchgate.net/profile/Paul-Costa/publication/242351438_Toward_a_new_generation_of_personality_theories_theoretical_contexts_for_the_Five-Factor_Model/links/54ebdbd50cf2ff89649e9f57/Toward-a-new-generation-of-personality-theories-theoretical-contexts-for-the-Five-Factor-Model.pdf

Nass, C. and Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1).
https://doi.org/10.1111/0022-4537.00153

Serapio-García, G. et al. (2023). Personality Traits in Large Language Models. arXiv preprint arXiv:2307.00184.
https://arxiv.org/abs/2307.00184

12 Jan

When the Job Listing Is Nowhere to Be Found: Why AI Needs Roles That Don’t Yet Have Names

Over the past year, I’ve found myself asking a question that feels increasingly relevant, not just to my own situation, but to how organizations approach artificial intelligence more broadly. How do you get hired when the role you are looking for does not quite exist yet?

After a year of unemployment, the question has sharpened. Is the position I am trying to describe simply unnamed, or am I already considered too old for a role that sits somewhere between established disciplines? But stepping back, this personal uncertainty also reveals something structural. We are building systems that behave more and more like social actors, while still organizing work as if those systems were only tools.

Over the years, my work has lived at the intersection of UX, service design, psychology, and AI supported collaboration. Recently, that intersection has narrowed into a very specific concern: what happens when AI systems begin to display continuity, memory, personality, and role based behavior? At that point, interaction design alone is no longer sufficient. We are no longer just shaping interfaces. We are shaping relationships.

Large language models and agentic systems are increasingly perceived by users as conversational partners, advisors, collaborators, and sometimes even authorities. Research consistently shows that people apply social and psychological expectations to systems that display human like cues. This is not a design flaw. It is a human reflex.

One of the foundational works in this area is Nass and Moon’s research on social responses to computers, which demonstrates that humans instinctively apply social rules and expectations to interactive technologies, even when they consciously know they are machines, and which is highly relevant to why AI personality matters:
Machines and Mindlessness: Social Responses to Computers by Clifford Nass and Youngme Moon, Journal of Social Issues (2000)
https://doi.org/10.1111/0022-4537.00153

From a business and organizational perspective, this creates a gap. Companies are deploying AI systems that speak, reason, remember, and adapt, yet responsibility for their behavioral coherence is often fragmented. Engineers optimize performance. Designers shape interactions. Product managers define scope. Legal teams manage risk. But no one is explicitly accountable for the personality, identity, and long term behavioral integrity of the system as experienced by users.

This gap matters.

When an AI system behaves inconsistently, forgets its role, shifts tone unpredictably, or crosses implicit social boundaries, trust erodes quickly. Users disengage, misuse the system, or over trust it in the wrong contexts. In regulated or high stakes environments such as healthcare, public services, finance, or decision support, these failures are not cosmetic. They are strategic risks.

This is where the unnamed role begins to take shape.

Organizations increasingly need people who can think across psychology, system design, ethics, and AI capabilities. People who can define what an AI agent is allowed to remember, how it should behave over time, how its personality is constrained or allowed to evolve, and how this is monitored. In other words, someone responsible for the agent’s identity as a coherent, accountable construct.

This is not about making AI more human for its own sake. It is about making AI predictable, trustworthy, and aligned with human expectations. From a strategic standpoint, this directly affects adoption, brand trust, compliance, and long term value creation.

There are emerging academic signals pointing in the same direction. Research on personality modeling in AI, reinforcement learning from human feedback, and agent alignment increasingly emphasizes stability, transparency, and behavioral consistency over raw capability. One example is a scientific preprint examining how personality-like traits emerge and can be measured in LLM outputs, supporting the idea that personality design in AI is empirically meaningful:
Personality Traits in Large Language Models (Serapio-García et al., 2023)
https://arxiv.org/abs/2307.00184

This study presents methods for assessing and validating personality patterns in LLM behaviour and discusses implications for responsible AI design.

Seen through this lens, the question of job titles becomes secondary. What matters is recognizing the function. Someone needs to own the space between human psychology and machine behavior. Someone needs to ensure that as AI systems become more agentic, they do not become socially incoherent, ethically ambiguous, or strategically misaligned.

This is the work I am trying to describe. It may be called AI experience design, human centered AI strategy, agent behavior design, or something else entirely. The label matters less than the impact.

As AI systems continue to move from tools to collaborators, organizations that invest in this kind of competence early will have a significant advantage. Not because their models are smarter, but because their systems are easier to trust, easier to work with, and easier to integrate into real human contexts.

Sometimes the job listing is nowhere to be found, not because the role is unnecessary, but because it has not yet been named. That seems to be the reality I am facing today in my search for the right position or project to occupy my professional energy in the year to come.

26 Nov

Memory, Context & Consistency – Building an Agent That Remembers

One of the trickiest challenges in modelling AI agents isn’t logic or response generation; it’s continuity. Ensuring that an AI not only answers now, but remembers later. That it can return to a thread without losing its identity. Because there is nothing more disorienting than a conversation with an agent that shifts personality mid-stream, a bit like speaking to a friend who suddenly wakes up as a completely different person.

In most human-machine interaction, a stable memory framework is essential. This includes:

  • Short-term memory: retaining conversational objects and references
  • Long-term memory: maintaining personal identity, commitments, and preferences across time
  • Persona integrity models: preventing unexpected character drift

In agent identity modelling, all three work together to prevent the “Dr. Jekyll and Mr. Hyde” problem: agents that oscillate between roles, tones, and mental models from interaction to interaction. If your AI forgets it’s the medical advisor and randomly re-emerges as the local plumber, trust collapses immediately. In their default mode, large models often behave like dissociated personalities: they lack a grounded, internalized identity, every new prompt can potentially reshape who they are, and each conversation can trigger a reset.
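
To make this concrete, here is a minimal sketch of the three memory layers working together, assuming an in-memory store and a simple persona integrity check. Class and method names are illustrative and do not refer to any particular framework.

```python
from collections import deque

class AgentMemory:
    """Short-term context, long-term identity, and a guard against character drift."""

    def __init__(self, persona: dict, short_term_size: int = 20):
        self.persona = persona                             # long-term: identity, commitments, preferences
        self.short_term = deque(maxlen=short_term_size)    # recent conversational objects and references
        self.long_term: dict[str, str] = {}                # facts the agent is allowed to keep

    def observe(self, utterance: str) -> None:
        self.short_term.append(utterance)

    def remember(self, key: str, value: str) -> None:
        self.long_term[key] = value

    def integrity_check(self, proposed_role: str) -> bool:
        """Persona integrity: a new prompt may not silently rewrite who the agent is."""
        return proposed_role == self.persona["role"]

memory = AgentMemory(persona={"role": "medical advisor", "tone": "calm"})
memory.observe("Patient asks about medication timing.")
memory.remember("preferred_language", "Norwegian")
print(memory.integrity_check("local plumber"))  # False: reject the Jekyll-and-Hyde switch
```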

To avoid this, an intentional design strategy is needed:

  • Map the intended personality
  • Map what the agent is allowed to remember
  • Test whether it remains internally consistent over time

One external reference that explores this theme is this article:
Evolution and Alignment in Multi-Agent Systems – Managing Shift, Drift and Tool Confusion
https://medium.com/%40shashanka_b_r/evolution-and-alignment-in-multi-agent-systems-managing-shift-drift-and-tool-confusion-04c6ce42af5a
This work highlights how agent identity can mutate over time if not anchored to explicit constraints.

This issue, the continuity of mind, is also beautifully illustrated in science fiction (like so many predictions and observations about the future). In Star Trek: The Next Generation, Season 3, Episode 16 (“The Offspring”), the android Data creates a child named Lal who begins to develop her own personality traits through lived experience and memory accumulation. When Lal’s emotional and cognitive system becomes overwhelmed by conflicting identity signals, Data desperately attempts to preserve continuity of self. The entire episode is a meditation on the importance of stable internal identity structures, even for artificial beings. This does not reflect our reality today, but it could quickly become an issue we are confronted with in the near future.

Designing AI that “stays itself” over time is essentially about preventing identity fragmentation. And this concern isn’t just philosophical; it has real-world consequences for trust, predictability and responsibility.

As Steven Pinker wrote in How the Mind Works (1997), page 60:
“Memory is not a mere repository of facts but a mechanism that shapes and constrains our sense of identity. What we remember defines who we have been, and guides who we become.”

This applies directly to AI agents:

  • What the agent remembers determines who it is allowed to be.
  • What is forgotten dissolves identity.
  • What is stable forms personality.

Looking ahead, I believe AI must evolve from stateless engines into persistent collaborators, equipped with structured memory frameworks and internally coherent personalities. Agents can and should accumulate self-consistency over time rather than reinvent themselves with each prompt. One can even argue that it’s a good idea to make an agent part of a team, allowing the team members to discuss and apply critical thinking and feedback to each other.

Because ultimately, an AI that remembers itself can potentially become someone and not just something.

24 Nov

Switching Gears: Multi-Agent Teams and Fluid Roles

In every innovation project I’ve been part of, the strongest results come from teams composed of people who think differently from one another. You need the strategist who sees the big picture. The analyzer who runs the numbers. The implementer who turns ideas into execution. The challenger who questions assumptions. This diversity of roles isn’t accidental; it is essential in today’s business landscape. So why should AI be any different?

When we build AI systems, we often create a single agent with a single voice and a single operational mindset. But the real strength of human teamwork comes from a plurality of perspectives. The multi-agent approach is an attempt to bring that same diversity of cognition into AI itself.

In a multi-agent model, you don’t rely on one monolithic intelligence. Instead, you orchestrate multiple specialized agents, each with its own orientation, personality, agenda, or operational role. One can be the planner, another the critic, another the builder, and another the risk-assessor. They can even debate and challenge each other before arriving at a shared output. Think of it as your AI running its own internal workshop, with participants changing hats, switching perspectives, or fluidly transitioning between operational modes. It’s like designing your own AI “dream team” in which each cognitive style is available on demand.

This fluidity is not just a fun conceptual model. It has been tried in practice with promising results, and it marks a shift from AI being merely “smart” to being truly strategic. When a system can reason, reflect, interrogate its own conclusions, and explore multiple viewpoints, it begins to demonstrate emergent abilities that look more like tactical reasoning and less like simple content generation. Of course, enabling this kind of agentic fluidity means intentionally designing the parameters that guide it: whether those parameters are psychometric traits, reasoning frames, domain constraints, or communication protocols. But the payoff is an AI that collaborates like a team, rather than responding like a tool.
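
To illustrate the orchestration idea, here is a minimal sketch in which each role is a thin wrapper around an underlying model call (stubbed out below) and a single round of critique precedes the shared output. The role names and the call_model stub are illustrative assumptions, not a reference to any particular framework.

```python
# Each role keeps its own orientation; the orchestrator lets them challenge each other
# before a shared output is produced. call_model is a stand-in for a real LLM call.
def call_model(role: str, instruction: str, task: str) -> str:
    return f"[{role}] {instruction} -> response to: {task}"

ROLES = {
    "planner": "Outline the approach and key steps.",
    "critic": "Question assumptions and identify the weakest point.",
    "risk_assessor": "List what could go wrong and how likely it is.",
    "builder": "Turn the plan, minus the flagged risks, into concrete actions.",
}

def internal_workshop(task: str) -> dict[str, str]:
    draft = call_model("planner", ROLES["planner"], task)
    critique = call_model("critic", ROLES["critic"], draft)
    risks = call_model("risk_assessor", ROLES["risk_assessor"], draft)
    final = call_model("builder", ROLES["builder"], f"{draft} | {critique} | {risks}")
    return {"draft": draft, "critique": critique, "risks": risks, "final": final}

for step, text in internal_workshop("Plan a pilot for an internal AI assistant").items():
    print(f"{step}: {text}")
```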

As an interesting external perspective on this approach, here’s a blog post exploring some of these ideas and components:

https://www.intelligencestrategy.org/blog-posts/agentic-ai-components 

It’s not an academic paper, but it does offer a worthwhile conceptual framing of agent roles and persona modules, especially for designers, strategists, and technologists interested in adaptive AI systems.

I am truly enjoying the exploration of such multi-agent architectures, especially how fluid role-switching and psychometric structuring can support real-world applications such as problem-solving, decision-making, and creative exploration.

24 Nov

From Personas to Personality: Engineering the Agent Voice

For AI, it seems that traditional personas simply won’t cut it anymore. In the world of UX and marketing, personas were created for designing interfaces and experiences, not for designing active digital entities that think, respond, remember, adapt, and act on our behalf.

With AI agents, we are no longer just designing how a system behaves. We are designing who it becomes, and behavior suddenly means much more than what you experience with most other systems in use today.

Whether intentional or not, we are creating advanced personalities. And this goes far beyond tone-of-voice, style guidelines or polite phrasing. We’re increasingly designing deep, psychometric frameworks that define agent behavior: temperament, assertiveness, empathy levels, tolerance for ambiguity, emotional framing, even ethical bias boundaries.
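
One practical way to read "deep psychometric frameworks" is that the specification lives as structured, versionable data and is only rendered into instructions at generation time, so it can be audited and evolved deliberately. The fields and wording in this sketch are illustrative assumptions.

```python
personality_spec = {
    "temperament": "even, unhurried",
    "assertiveness": 0.4,          # 0 = deferential, 1 = forceful
    "empathy": 0.8,
    "ambiguity_tolerance": 0.6,
    "ethical_boundaries": ["no medical diagnoses", "disclose when uncertain"],
}

def render_system_prompt(spec: dict) -> str:
    """Turn the audited spec into the instruction text sent with every request."""
    lines = [
        f"Temperament: {spec['temperament']}.",
        f"Assertiveness level {spec['assertiveness']:.1f}; empathy level {spec['empathy']:.1f}.",
        f"Tolerance for ambiguity: {spec['ambiguity_tolerance']:.1f}.",
        "Boundaries: " + "; ".join(spec["ethical_boundaries"]) + ".",
    ]
    return "\n".join(lines)

print(render_system_prompt(personality_spec))
```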

Over the past two years, I’ve been fortunate to explore ideas that combine narrative design, game theory, and applied psychology to create AI agents that behave less like utilities and more like collaborators with intent, history, and a coherent identity. This includes designing internal identity structures that allow an agent to maintain continuity across time, contexts, and relationships, in many ways much like a human professional does, or even a professional ‘dream team’ if you like.

Interestingly, recent academic work supports this shift toward computational personality models:

– The PersLLM study by Huang et al. (2024) develops methods for training large language models to internalize stable, consistent personality traits using psychological frameworks. This research explores how personality can be encoded as a persistent internal structure within AI.

– Tudor et al. (2025) analyze how Big Five personality traits affect interaction in multi-agent ecosystems. Their work shows, for example, how agents with high Agreeableness communicate and collaborate more fluidly, but may also become susceptible to strategic manipulation — a fascinating trade-off.

– Xu et al. (2024) present evidence that personality attributes can emerge organically during agent interaction, even when not explicitly designed. In other words, agents can “grow” personality traits through accumulated conversational and contextual history.

These papers (linked below) represent research performed by other scholars whose work has helped me expand my understanding of the potentials here, and they highlight a key evolution:
We are shifting from persona design as external touch point guidance to personality engineering as internal behavioral architecture.

This raises meaningful design questions:
Are we prescribing personality, or letting it emerge?
Are we training agents to mimic personalities, or to hold stable psychometric structures?
Are we designing compliance, or co-creating collaboration?

These decisions shape far more than outputs. They shape how humans trust, confide in, cooperate with, and emotionally relate to AI. Not suggesting you are going to ‘fall in love’ with your AI assistant, but you are going to relate to it very differently than your average internet banking interface.

Looking forward, I would love to contribute to a role or research environment where personality-driven agent design is used to support real-world AI integration. And absolutely not just as tools, but as adaptive partners that evolve through use and interaction.

If this resonates with your organization or research group, feel free to reach out or share my profile with someone who might be exploring these frontier questions.

Here are the research studies mentioned above:
https://lnkd.in/eJxsEJGx
https://lnkd.in/exGy5gnC
https://lnkd.in/ehi3X8YW

24 Nov

Why Every AI Needs a Personality – Not Just a Prompt

For almost three decades I’ve worked at the intersection of human behavior, technology, design and communication, helping organizations build digital services that don’t just function, but resonate. During that journey one insight has become clearer than ever: for AI, having the right answer is no longer enough. The real breakthrough comes when the AI feels like someone you can actually work with and build trust with.

We’re entering an era where interactions with AI are becoming conversational, collaborative, and relationship-based. And much like with people, trust isn’t formed by competence alone. It’s formed over time, through shared values and through personality. Tone. Predictability. Emotional calibration. Understanding.

That is why a well-designed AI shouldn’t be treated as an abstract intelligence engine. It should be designed as a partner or an assistant with a recognizable disposition, behavioral consistency, and intentional interaction style. This begins with defining an agent identity model for the AI: who is it, how does it behave, how does it make decisions, how assertive or cautious is it, how does it handle uncertainty, and how does it respond to different types of users?

Psychometrics offers powerful frameworks for this. I’m personally a strong believer in leveraging the Big Five (OCEAN) to model how the agent “shows up” in dialogue. High openness vs low openness. Analytical vs empathetic. Fast-responding vs deliberative. Curious vs reserved. Once you add these parameters, the AI stops being “a tool” and can become “an evolving collaborator”.
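
As a small illustration of how OCEAN scores could shape how an agent "shows up" in dialogue, here is a sketch that turns a Big Five profile into plain-language style directives. The cut-off values and wording are illustrative assumptions, not a validated mapping.

```python
def dialogue_style(ocean: dict[str, float]) -> list[str]:
    """Translate Big Five scores (0..1) into style directives for the agent's replies."""
    style = []
    style.append("propose unconventional angles" if ocean["openness"] > 0.6
                 else "stay close to established practice")
    style.append("lead with structure and next steps" if ocean["conscientiousness"] > 0.6
                 else "keep answers loose and conversational")
    style.append("respond quickly and energetically" if ocean["extraversion"] > 0.6
                 else "respond deliberately and concisely")
    style.append("acknowledge feelings before facts" if ocean["agreeableness"] > 0.6
                 else "prioritise candour over comfort")
    style.append("flag risks and uncertainties early" if ocean["neuroticism"] > 0.6
                 else "keep a calm, reassuring tone")
    return style

# A curious, deliberate, empathetic advisor profile.
print(dialogue_style({"openness": 0.8, "conscientiousness": 0.7, "extraversion": 0.3,
                      "agreeableness": 0.75, "neuroticism": 0.2}))
```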

This is where AI agents transform from generic utilities into role-specific partners:
– not “the chatbot” but “the advisor”
– not “the FAQ” but “the coach”
– not “the database interface” but “the analyst”
– not “the automation system” but “the colleague”

When framed this way, organizations go through a cultural change as well. They stop thinking of AI in terms of queries and outputs, and begin thinking in terms of relationships and capability. It becomes less about “what can this tool do?” and more about “how does this agent help us think, decide and act?” or “how can a team of agents discuss, analyze and generate results with deliverables?”.

Designing AI with personality does not mean anthropomorphizing recklessly or pretending a system feels emotions it doesn’t have. It means acknowledging that all human-machine interaction is social, and designing for that reality responsibly and intentionally.

If you want to dive deeper, I found some interesting insights in this paper:

Designing AI-Agents with Personalities: A Psychometric Approach
https://lnkd.in/egY_3uCJ

Personalities aren’t just for users. Your AI needs one too, because it already holds this capacity as a touch point in a service or business process.