
22 Jan
What Is an AI Personality, Really?

As artificial intelligence systems become more conversational, persistent, and agentic, the question of personality moves from metaphor to design concern. The term is increasingly used in product descriptions, research papers, and public discourse, yet it often lacks precision. Sometimes personality refers to tone or style. Sometimes to friendliness or empathy. Sometimes it simply describes the feeling that a system behaves consistently enough that users stop noticing its variability.

However, once an AI system operates across time, remembers context, and participates in decisions, personality can no longer be treated as a surface-level attribute. It becomes a structural property of the system.

To understand this shift, it is useful to disentangle several concepts that are often conflated.

First, there is agent identity. Identity refers to role, mandate, and responsibility. It answers questions such as what the system is meant to do, on whose behalf it acts, and within which boundaries. In philosophical terms, identity is tied to continuity and responsibility rather than expression. John Locke’s discussion of personal identity, for example, places continuity of consciousness and memory at the center of what makes an entity the same over time. While AI does not possess consciousness, users still evaluate it through similar lenses of continuity and coherence.

Second, there is the notion of a character archetype. This concept originates in narrative theory and psychology, from Aristotle’s Poetics to Carl Jung’s archetypes. Archetypes are not personalities in themselves, but recognizable patterns of motivation and role. An AI may behave like an advisor, a facilitator, a critic, or an analyst. These archetypes help users quickly orient themselves in interaction, much like narrative characters do, but they do not yet define how the system behaves under changing conditions.

This is where a third concept becomes essential: the behavioural signature. A behavioural signature describes the stable patterns in how a system responds across contexts. It includes how cautious or assertive the system is, how it handles uncertainty, how it responds to disagreement, and how it balances exploration against conservatism. In psychology, this maps closely to dispositional traits rather than situational behaviour. Personality psychology has long emphasized that traits are not single actions, but tendencies that manifest across situations, a principle formalized in trait theories such as the Five Factor Model.

Recent research suggests that this analogy is not merely conceptual. Large language models already exhibit measurable and relatively stable personality-like patterns in their outputs, even without explicit personality conditioning. A study by Serapio-García et al. demonstrates that language models can be assessed using adapted psychometric instruments, revealing consistent behavioural tendencies across prompts and contexts.

https://arxiv.org/abs/2307.00184

Related work has shown that these tendencies can influence how users perceive trustworthiness, competence, and intent. In other words, personality is already present as an emergent property. The design choice is not whether AI systems have personality, but whether that personality is intentional, inspectable, and governed.

This aligns with earlier findings in human–computer interaction. Nass and Moon’s seminal work on social responses to computers showed that humans apply social rules and expectations to machines as soon as those machines exhibit even minimal social cues. This phenomenon, often referred to as the Media Equation, explains why users react emotionally and morally to systems they rationally know are not human.

https://doi.org/10.1111/0022-4537.00153

From this perspective, personality is not an optional layer added for engagement. It is an inevitable outcome of language-based interaction combined with memory and goal orientation. What matters is how clearly that personality is defined and constrained.

Using more precise terminology such as Agent Identity Model and Behavioural Signature helps anchor this discussion. These terms shift attention away from anthropomorphism and toward structure. They invite explicit decisions about what should remain stable, what is allowed to adapt, and what must never change. They also make it possible to discuss accountability, governance, and ethics in concrete terms.

Literature and philosophy offer useful parallels here. In narrative theory, a character is defined not by isolated dialogue, but by how actions remain intelligible as circumstances change. A character who behaves inconsistently without explanation is not perceived as complex, but as poorly written. The same applies to AI systems. Flexibility without continuity does not feel adaptive. It feels unreliable and annoying.

As AI systems increasingly move from tools to collaborators, personality becomes a core design concern with implications for trust, safety, and long-term use. Treating it as an emergent side effect leaves organizations reacting to user perception rather than shaping it. Treating it as a designed, named, and governed construct allows for clarity and responsibility.

The central question, then, is not whether AI should have personality. The question is whether designers, organizations, and institutions are prepared to define and take responsibility for the behavioural identities they are already deploying.

19 Jan

Functional Profiling and the Design of AI Personalities

A Framework for Coherent, Trustworthy and Purpose-Driven Artificial Agents

Author:
Gunnar Øyvin Jystad Fredrikson
Service Designer and AI Strategy Practitioner

Date:
2026

Abstract

As artificial intelligence transforms from a computational tool into an interactive partner, organizations face an emerging design and governance challenge. Users increasingly perceive AI systems as social actors, yet few organizations have roles or frameworks dedicated to the intentional design of the agent’s identity, behaviour, and long-term consistency. This paper introduces the concept of functional profiling as a method for defining the stable behavioural characteristics of AI agents. It draws on research in psychology, human–computer interaction, game design and AI ethics to propose a structured model for creating, monitoring and evolving personality-driven agents in a responsible way.

1. Introduction

Artificial intelligence is shifting from a backend capability to a frontstage participant in human workflows. Large language models and multi-agent systems now exhibit behaviours that humans interpret through familiar psychological lenses. Research consistently demonstrates that people apply social, emotional and moral expectations to interactive systems, even when they are fully aware that these systems are artificial.

In their article on machines and mindlessness, Nass and Moon show that individuals respond to computers using the same social rules and expectations they apply to humans (Nass and Moon 2000), the phenomenon popularized as the Media Equation. As AI becomes more conversational and adaptive, this effect increases rather than decreases.

Yet most organizations design AI systems as though these social expectations are irrelevant. Engineers optimise performance. Designers shape prompts. Product managers define features. Legal teams review compliance. No one is explicitly responsible for the personality, behavioural integrity, or long-term consistency of the agent.

This gap introduces organisational and ethical risks. When a system changes tone abruptly, shifts roles, contradicts earlier statements, or behaves unpredictably, users lose trust. In regulated industries, inconsistent agent behaviour can have serious consequences.

To address this emerging need, this paper introduces the concept of functional profiling as a systematic and responsible method for designing the personality and behavioural structure of artificial agents.

2. Theoretical Foundations

2.1 Personality as a Functional Construct

In psychology, personality refers to a stable set of dispositions that predict behaviour across contexts. The Big Five (or OCEAN) model is a widely researched framework describing these dispositions along five dimensions: openness, conscientiousness, extraversion, agreeableness and neuroticism (McCrae and Costa 1996). When adapted for AI systems, these traits can become functional parameters rather than emotional states.

2.2 AI as Social Actor

Human–computer interaction studies show that humans consistently attribute intent, emotion and morality to machines that exhibit social cues. This is not a misunderstanding but a form of cognitive shorthand. People treat socially present AI systems as relational partners.

This is central to understanding why AI personality design matters. Predictability and coherence are not aesthetic touches but essential components of user trust.

2.3 Behavioural Consistency in Agentic AI

Emerging research on personality traits in large language models shows that these models display stable behavioural signatures that can be measured and influenced (Serapio-García et al. 2023). As multi-agent systems become more complex, the need for clear behavioural boundaries increases. Without them, agents may drift, converge, or diverge in ways that are difficult to predict or explain.

3. The Concept of Functional Profiling

Functional profiling is the structured design of an AI agent’s stable behavioural characteristics, role boundaries, memory systems and interaction patterns. It does not attempt to imitate real humans. Instead, it defines artificial identity through purpose, constraints and transparency.

Working definition:
Functional profiling is the intentional design of an AI agent’s dispositional behaviour, memory scope and interaction style according to function, context and ethical boundaries.

The objective is a coherent agent that behaves according to stable internal principles and remains aligned with organisational goals and user expectations.
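To make the working definition concrete, a functional profile can be sketched as a small, inspectable record. The field names below (role, risk_posture, memory_scope, locked_traits, ethical_bounds) are illustrative assumptions for this sketch, not an established schema:

```python
from dataclasses import dataclass

# Illustrative sketch of a functional profile as an inspectable, immutable record.
# All field names are assumptions chosen for this example, not a standard schema.
@dataclass(frozen=True)  # frozen: the profile itself should not mutate at runtime
class FunctionalProfile:
    role: str                  # the agent's mandate, e.g. "triage assistant"
    risk_posture: float        # 0.0 (maximally cautious) .. 1.0 (maximally assertive)
    memory_scope: str          # what the agent may retain, e.g. "session-only"
    locked_traits: tuple = ()  # dispositions that must never change
    ethical_bounds: tuple = () # hard constraints the agent may never violate

profile = FunctionalProfile(
    role="triage assistant",
    risk_posture=0.2,
    memory_scope="session-only",
    locked_traits=("cautious", "transparent"),
    ethical_bounds=("always escalate emergencies",),
)
print(profile.role)  # triage assistant
```

Making the record frozen reflects the paper's point that stable dispositions are design-time commitments, not runtime state; any permitted evolution would go through an explicit, auditable process rather than in-place mutation.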

4. Methods for Creating AI Personalities

The following five methods integrate insights from psychology, user experience, service design, game design and AI governance.

4.1 Role-Constrained Behavioural Profiling

This method defines the agent’s role as the anchor for acceptable behaviour. The agent’s tone, risk posture and decision boundaries are derived from its purpose.

Applications: healthcare triage assistants, financial advisors, public sector agents.

4.2 Trait-Based Psychometric Profiling

This adapts psychological trait models into computational parameters. For example, openness becomes exploration bias, while agreeableness becomes conflict management behaviour.

Applications: coaching systems, advisory tools, collaborative agents.
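A minimal sketch of this trait-to-parameter mapping follows. The parameter names (exploration_bias, conflict_softening, and so on) and the direct linear mapping are assumptions made for illustration; a real system would calibrate these empirically:

```python
# Sketch: mapping Big Five trait scores (0..1) to behavioural parameters.
# The parameter names and the identity mapping are illustrative assumptions.
def traits_to_parameters(traits: dict) -> dict:
    return {
        # higher openness -> more willingness to explore unfamiliar options
        "exploration_bias": traits.get("openness", 0.5),
        # higher agreeableness -> softer handling of disagreement
        "conflict_softening": traits.get("agreeableness", 0.5),
        # higher conscientiousness -> stricter adherence to procedure
        "procedure_adherence": traits.get("conscientiousness", 0.5),
        # higher neuroticism -> more hedging and caveats in answers
        "hedging_level": traits.get("neuroticism", 0.5),
    }

params = traits_to_parameters({"openness": 0.8, "agreeableness": 0.3})
print(params["exploration_bias"])    # 0.8
print(params["procedure_adherence"]) # 0.5 (default for unspecified traits)
```

The point of the sketch is that a psychological trait becomes a named, inspectable parameter rather than an implicit property of a prompt.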

4.3 Character Sheet Modeling

Borrowed from game design, this method creates a transparent and auditable record of the agent’s identity, including strengths, weaknesses, locked attributes and permitted evolution paths.

Applications: multi-agent systems, research environments, creative tools.
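The distinguishing feature of a character sheet is that some attributes are locked while others may evolve along permitted paths. A minimal sketch of that locking rule, with hypothetical attribute names:

```python
# Sketch of a game-design-style character sheet with locked attributes.
# The attribute names and the locking rule are illustrative assumptions.
def apply_evolution(sheet: dict, updates: dict) -> dict:
    locked = set(sheet.get("locked", []))
    evolved = dict(sheet)  # evolution produces a new record, keeping the old one auditable
    for key, value in updates.items():
        if key in locked:
            # locked attributes are part of the agent's permanent identity record
            raise ValueError(f"attribute '{key}' is locked and cannot evolve")
        evolved[key] = value
    return evolved

sheet = {"archetype": "advisor", "tone": "formal", "locked": ["archetype"]}
sheet = apply_evolution(sheet, {"tone": "conversational"})  # permitted evolution path
print(sheet["tone"])  # conversational
```

An attempt to change the locked archetype would raise an error, which is exactly the auditable refusal the method calls for.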

4.4 Brand- and Voice-Aligned Profiling

This method aligns the agent’s behaviour with organisational values, extending beyond tone to include confidence levels, escalation paths and refusal strategies.

Applications: customer interaction systems, media platforms, commerce.

4.5 Ethically Bounded Adaptive Profiling

This allows the agent to evolve behaviour in a controlled manner while respecting ethical and legal constraints. Drift monitoring and explainability requirements are central features.

Applications: long-lived agents, personal assistants, enterprise AI.
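Drift monitoring can be sketched as a periodic comparison of measured trait scores against the agent's baseline profile. The threshold value and the trait names below are assumptions for illustration; in practice the scores would come from psychometric probes of the kind Serapio-García et al. describe:

```python
# Sketch of personality-drift monitoring: compare measured trait scores
# against a baseline profile and flag traits that moved beyond a threshold.
# The threshold and trait names are illustrative assumptions.
def detect_drift(baseline: dict, observed: dict, threshold: float = 0.15) -> list:
    drifted = []
    for trait, base_score in baseline.items():
        delta = abs(observed.get(trait, base_score) - base_score)
        if delta > threshold:
            drifted.append((trait, round(delta, 2)))
    return drifted

baseline = {"caution": 0.8, "formality": 0.6}
observed = {"caution": 0.5, "formality": 0.62}  # e.g. from periodic behavioural probes
print(detect_drift(baseline, observed))  # [('caution', 0.3)]
```

A flagged trait would then trigger the explainability requirement: either the drift is within a permitted evolution path and is documented, or the agent is corrected back toward its baseline.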

5. Governance and Ethical Considerations

AI personalities raise questions beyond design, including accountability, consent, privacy and transparency. The closer an agent resembles a human conversational partner, the greater the obligation to clarify its intent and limits.

Key governance questions include:

  • What does the agent remember, and for how long?
  • How is personality drift detected and managed?
  • Who is accountable for the behaviour of the agent?
  • Can users inspect or understand the agent’s identity model?

Failure in any of these areas creates risks for organisations and users alike.

6. Strategic Implications for Organisations

Organisations that invest early in functional profiling gain advantages in trust, differentiation and regulatory preparedness. As AI becomes a central part of human-facing services, behavioural integrity will matter as much as technical performance.

This creates a demand for new roles, including AI experience strategists, agent identity designers and behavioural architects. These roles are interdisciplinary by nature and require competencies that do not map cleanly onto existing job titles.

7. Conclusion

AI is crossing a threshold where it is no longer sufficient to design interfaces alone. We are now designing identities. Functional profiling offers a structured approach that draws from established academic disciplines while addressing emerging strategic needs. It supports the creation of agents that are coherent, predictable and ethically grounded.

The question is no longer whether AI should have personality, but how that personality is designed and managed.

References

McCrae, R. R. and Costa, P. T. (1996). Toward a new generation of personality theories: Theoretical contexts for the five-factor model. In J. S. Wiggins (Ed.), The Five-Factor Model of Personality: Theoretical Perspectives. Guilford Press.
https://www.researchgate.net/profile/Paul-Costa/publication/242351438_Toward_a_new_generation_of_personality_theories_theoretical_contexts_for_the_Five-Factor_Model/links/54ebdbd50cf2ff89649e9f57/Toward-a-new-generation-of-personality-theories-theoretical-contexts-for-the-Five-Factor-Model.pdf

Nass, C. and Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1).
https://doi.org/10.1111/0022-4537.00153

Serapio-García, G., et al. (2023). Personality Traits in Large Language Models. arXiv preprint arXiv:2307.00184.
https://arxiv.org/abs/2307.00184

12 Jan

When the Job Listing Is Nowhere to Be Found: Why AI Needs Roles That Don’t Yet Have Names

Over the past year, I’ve found myself asking a question that feels increasingly relevant, not just to my own situation, but to how organizations approach artificial intelligence more broadly. How do you get hired when the role you are looking for does not quite exist yet?

After a year of unemployment, the question has sharpened. Is the position I am trying to describe simply unnamed, or am I already considered too old for a role that sits somewhere between established disciplines? But stepping back, this personal uncertainty also reveals something structural. We are building systems that behave more and more like social actors, while still organizing work as if those systems were only tools.

Over the years, my work has lived at the intersection of UX, service design, psychology, and AI supported collaboration. Recently, that intersection has narrowed into a very specific concern: what happens when AI systems begin to display continuity, memory, personality, and role based behavior? At that point, interaction design alone is no longer sufficient. We are no longer just shaping interfaces. We are shaping relationships.

Large language models and agentic systems are increasingly perceived by users as conversational partners, advisors, collaborators, and sometimes even authorities. Research consistently shows that people apply social and psychological expectations to systems that display human-like cues. This is not a design flaw. It is a human reflex.

One of the foundational works in this area is Nass and Moon’s research on social responses to computers, which demonstrates that humans instinctively apply social rules and expectations to interactive technologies, even when they consciously know they are machines:
Machines and Mindlessness: Social Responses to Computers by Clifford Nass and Youngme Moon — Journal of Social Issues (2000)
https://doi.org/10.1111/0022-4537.00153

From a business and organizational perspective, this creates a gap. Companies are deploying AI systems that speak, reason, remember, and adapt, yet responsibility for their behavioral coherence is often fragmented. Engineers optimize performance. Designers shape interactions. Product managers define scope. Legal teams manage risk. But no one is explicitly accountable for the personality, identity, and long-term behavioral integrity of the system as experienced by users.

This gap matters.

When an AI system behaves inconsistently, forgets its role, shifts tone unpredictably, or crosses implicit social boundaries, trust erodes quickly. Users disengage, misuse the system, or over-trust it in the wrong contexts. In regulated or high-stakes environments such as healthcare, public services, finance, or decision support, these failures are not cosmetic. They are strategic risks.

This is where the unnamed role begins to take shape.

Organizations increasingly need people who can think across psychology, system design, ethics, and AI capabilities. People who can define what an AI agent is allowed to remember, how it should behave over time, how its personality is constrained or allowed to evolve, and how this is monitored. In other words, someone responsible for the agent’s identity as a coherent, accountable construct.

This is not about making AI more human for its own sake. It is about making AI predictable, trustworthy, and aligned with human expectations. From a strategic standpoint, this directly affects adoption, brand trust, compliance, and long term value creation.

There are emerging academic signals pointing in the same direction. Research on personality modeling in AI, reinforcement learning from human feedback, and agent alignment increasingly emphasizes stability, transparency, and behavioral consistency over raw capability. For example, Serapio-García et al. examine how personality-like traits emerge and can be measured in LLM outputs, supporting the idea that personality design in AI is empirically meaningful:
Personality Traits in Large Language Models (Serapio-García et al., 2023)
https://arxiv.org/abs/2307.00184

This study presents methods for assessing and validating personality patterns in LLM behaviour and discusses implications for responsible AI design.

Seen through this lens, the question of job titles becomes secondary. What matters is recognizing the function. Someone needs to own the space between human psychology and machine behavior. Someone needs to ensure that as AI systems become more agentic, they do not become socially incoherent, ethically ambiguous, or strategically misaligned.

This is the work I am trying to describe. It may be called AI experience design, human centered AI strategy, agent behavior design, or something else entirely. The label matters less than the impact.

As AI systems continue to move from tools to collaborators, organizations that invest in this kind of competence early will have a significant advantage. Not because their models are smarter, but because their systems are easier to trust, easier to work with, and easier to integrate into real human contexts.

Sometimes the job listing is nowhere to be found not because the role is unnecessary, but because it has not yet been named. That seems to be the reality I am facing today as I search for the right position or project for the year to come.