Functional Profiling and the Design of AI Personalities

A Framework for Coherent, Trustworthy, and Purpose-Driven Artificial Agents

Author:
Gunnar Øyvin Jystad Fredrikson
Service Designer and AI Strategy Practitioner

Date:
19 January 2026

Abstract

As artificial intelligence transforms from a computational tool into an interactive partner, organisations face an emerging design and governance challenge. Users increasingly perceive AI systems as social actors, yet few organisations have roles or frameworks dedicated to the intentional design of the agent’s identity, behaviour, and long-term consistency. This paper introduces the concept of functional profiling as a method for defining the stable behavioural characteristics of AI agents. It draws on research in psychology, human-computer interaction, game design and AI ethics to propose a structured model for creating, monitoring and evolving personality-driven agents in a responsible way.

1. Introduction

Artificial intelligence is shifting from a backend capability to a frontstage participant in human workflows. Large language models and multi-agent systems now exhibit behaviours that humans interpret through familiar psychological lenses. Research consistently demonstrates that people apply social, emotional and moral expectations to interactive systems, even when they are fully aware that these systems are artificial.

Nass and Moon, building on the Media Equation, show that individuals mindlessly respond to computers using the same social rules and expectations they apply to humans (Nass and Moon 2000). As AI becomes more conversational and adaptive, this effect strengthens rather than weakens.

Yet most organisations design AI systems as though these social expectations are irrelevant. Engineers optimise performance. Designers shape prompts. Product managers define features. Legal teams review compliance. No one is explicitly responsible for the personality, behavioural integrity, or long-term consistency of the agent.

This gap introduces organisational and ethical risks. When a system changes tone abruptly, shifts roles, contradicts earlier statements, or behaves unpredictably, users lose trust. In regulated industries, inconsistent agent behaviour can have serious consequences.

To address this emerging need, this paper introduces the concept of functional profiling as a systematic and responsible method for designing the personality and behavioural structure of artificial agents.

2. Theoretical Foundations

2.1 Personality as a Functional Construct

In psychology, personality refers to a stable set of dispositions that predict behaviour across contexts. The Big Five (or OCEAN) model is a widely researched framework describing these dispositions along five dimensions: openness, conscientiousness, extraversion, agreeableness and neuroticism (McCrae and Costa 1996). When adapted for AI systems, these traits can become functional parameters rather than emotional states.

2.2 AI as Social Actor

Human-computer interaction studies show that humans consistently attribute intent, emotion and morality to machines that exhibit social cues. This is not a misunderstanding but a form of cognitive shorthand. People treat socially present AI systems as relational partners.

This is central to understanding why AI personality design matters. Predictability and coherence are not aesthetic touches but essential components of user trust.

2.3 Behavioural Consistency in Agentic AI

Emerging research on personality traits in large language models shows that these models display stable behavioural signatures that can be measured and influenced (Serapio García et al. 2023). As multi-agent systems become more complex, the need for clear behavioural boundaries increases. Without them, agents may drift, converge, or diverge in ways that are difficult to predict or explain.

3. The Concept of Functional Profiling

Functional profiling is the structured design of an AI agent’s stable behavioural characteristics, role boundaries, memory systems and interaction patterns. It does not attempt to imitate real humans. Instead, it defines artificial identity through purpose, constraints and transparency.

Working definition:
Functional profiling is the intentional design of an AI agent’s dispositional behaviour, memory scope and interaction style according to function, context and ethical boundaries.

The objective is a coherent agent that behaves according to stable internal principles and remains aligned with organisational goals and user expectations.

4. Methods for Creating AI Personalities

The following five methods integrate insights from psychology, user experience, service design, game design and AI governance.

4.1 Role-Constrained Behavioural Profiling

This method defines the agent’s role as the anchor for acceptable behaviour. The agent’s tone, risk posture and decision boundaries are derived from its purpose.

Applications: healthcare triage assistants, financial advisors, public sector agents.
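A role-anchored profile could be represented as a simple immutable record. The sketch below is purely illustrative: the class, field names and example values are assumptions chosen to show how tone, risk posture and decision boundaries might all be derived from a single declared purpose, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the role, once set, cannot be mutated at runtime
class RoleProfile:
    """Behavioural boundaries derived from an agent's role (all fields illustrative)."""
    role: str
    tone: str                   # expected register, e.g. "calm, factual"
    risk_posture: str           # how conservatively the agent should act
    allowed_actions: frozenset  # decision boundaries derived from purpose
    escalation_trigger: str     # condition for handing off to a human

triage_assistant = RoleProfile(
    role="healthcare triage assistant",
    tone="calm, factual, non-alarmist",
    risk_posture="conservative: never diagnose, only route",
    allowed_actions=frozenset({"ask_symptoms", "suggest_urgency_level", "refer"}),
    escalation_trigger="any mention of emergency symptoms",
)

# The role, not the conversation, decides what is permitted.
assert "diagnose" not in triage_assistant.allowed_actions
```

Keeping the profile as data rather than buried in prompt text makes the boundary auditable: reviewers can inspect exactly which actions the role permits.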

4.2 Trait-Based Psychometric Profiling

This method translates psychological trait models into computational parameters. For example, openness becomes an exploration bias, while agreeableness shapes conflict-management behaviour.

Applications: coaching systems, advisory tools, collaborative agents.
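The trait-to-parameter translation above can be sketched as a mapping function. The specific formulas and thresholds below are assumptions for illustration, not an established psychometric calibration.

```python
# Illustrative mapping from Big Five trait scores (0.0-1.0) to
# behavioural parameters. The linear formula and the 0.5 cutoff
# are placeholder assumptions, not validated values.

def traits_to_parameters(openness: float, agreeableness: float) -> dict:
    return {
        # Higher openness -> stronger bias toward exploring novel options.
        "exploration_bias": round(0.1 + 0.8 * openness, 2),
        # Higher agreeableness -> softer conflict-management behaviour.
        "disagreement_style": "accommodating" if agreeableness >= 0.5 else "direct",
    }

# A coaching agent tuned toward exploration and low-friction disagreement.
coach = traits_to_parameters(openness=0.8, agreeableness=0.7)
# -> {'exploration_bias': 0.74, 'disagreement_style': 'accommodating'}
```

The point is not the particular formula but that trait scores become explicit, inspectable parameters rather than implicit tendencies of a prompt.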

4.3 Character Sheet Modelling

Borrowed from game design, this method creates a transparent and auditable record of the agent’s identity, including strengths, weaknesses, locked attributes and permitted evolution paths.

Applications: multi agent systems, research environments, creative tools.
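Because the character sheet is meant to be a transparent, auditable record, it could be kept as plain serialisable data with an explicit set of locked attributes. The structure and field names below are hypothetical, sketching one way the locking and evolution-path ideas might be enforced.

```python
import json

# A hypothetical "character sheet" in the game-design sense: locked
# attributes cannot change at runtime, and evolution_paths names the
# only permitted directions of change.
character_sheet = {
    "name": "research-agent-01",
    "strengths": ["literature synthesis", "citation tracking"],
    "weaknesses": ["numerical estimation"],
    "locked": {"role": "research assistant", "refuses": ["medical advice"]},
    "evolution_paths": ["broader source coverage", "improved summarisation"],
}

def apply_update(sheet: dict, key: str, value) -> dict:
    """Return an updated copy, rejecting any change to a locked attribute."""
    if key in sheet["locked"]:
        raise ValueError(f"attribute '{key}' is locked")
    updated = dict(sheet)
    updated[key] = value
    return updated

# Plain JSON, hence inspectable and auditable outside the runtime.
record = json.dumps(character_sheet, indent=2)
```

Keeping the sheet serialisable means the same artefact can be versioned, diffed and reviewed like any other configuration file.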

4.4 Brand- and Voice-Aligned Profiling

This method aligns the agent’s behaviour with organisational values, extending beyond tone to include confidence levels, escalation paths and refusal strategies.

Applications: customer interaction systems, media platforms, commerce.

4.5 Ethically Bounded Adaptive Profiling

This method allows the agent’s behaviour to evolve in a controlled manner while respecting ethical and legal constraints. Drift monitoring and explainability requirements are central features.

Applications: long-lived agents, personal assistants, enterprise AI.
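Drift monitoring can be reduced to comparing an observed behavioural signature against a declared baseline. The toy check below assumes trait scores are produced by some measurement process elsewhere; the baseline values and threshold are illustrative, and only the control loop is the point.

```python
# Toy drift monitor: trait scores are assumed to come from some
# external measurement of the agent's recent behaviour. Baseline
# values and the tolerance are illustrative assumptions.

BASELINE = {"agreeableness": 0.7, "risk_tolerance": 0.2}
DRIFT_THRESHOLD = 0.15  # maximum tolerated deviation per trait

def check_drift(observed: dict) -> list[str]:
    """Return the traits whose observed score has moved past the bound."""
    return [
        trait
        for trait, baseline in BASELINE.items()
        if abs(observed.get(trait, baseline) - baseline) > DRIFT_THRESHOLD
    ]

# Agreeableness has moved by 0.25 (> 0.15); risk tolerance by only 0.05.
drifted = check_drift({"agreeableness": 0.45, "risk_tolerance": 0.25})
# -> ["agreeableness"]
```

A flagged trait would then trigger review or rollback rather than silent adaptation, which is what keeps evolution inside the ethical bounds the section describes.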

5. Governance and Ethical Considerations

AI personalities raise questions beyond design, including accountability, consent, privacy and transparency. The closer an agent resembles a human conversational partner, the greater the obligation to clarify its intent and limits.

Key governance questions include:

  • What does the agent remember, and for how long?
  • How is personality drift detected and managed?
  • Who is accountable for the behaviour of the agent?
  • Can users inspect or understand the agent’s identity model?

Failure in any of these areas creates risks for organisations and users alike.

6. Strategic Implications for Organisations

Organisations that invest early in functional profiling gain advantages in trust, differentiation and regulatory preparedness. As AI becomes a central part of human-facing services, behavioural integrity will matter as much as technical performance.

This creates a demand for new roles, including AI experience strategists, agent identity designers and behavioural architects. These roles are interdisciplinary by nature and require competencies that do not map cleanly onto existing job titles.

7. Conclusion

AI is crossing a threshold where it is no longer sufficient to design interfaces alone. We are now designing identities. Functional profiling offers a structured approach that draws from established academic disciplines while addressing emerging strategic needs. It supports the creation of agents that are coherent, predictable and ethically grounded.

The question is no longer whether AI should have personality, but how that personality is designed and managed.

References

McCrae R and Costa P (1996). Toward a new generation of personality theories: Theoretical contexts for the five-factor model. In Wiggins J S (ed), The Five-Factor Model of Personality: Theoretical Perspectives. Guilford Press.
https://www.researchgate.net/profile/Paul-Costa/publication/242351438_Toward_a_new_generation_of_personality_theories_theoretical_contexts_for_the_Five-Factor_Model/links/54ebdbd50cf2ff89649e9f57/Toward-a-new-generation-of-personality-theories-theoretical-contexts-for-the-Five-Factor-Model.pdf

Nass C and Moon Y (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues 56(1).
https://doi.org/10.1111/0022-4537.00153

Serapio García G et al. (2023). Personality Traits in Large Language Models. arXiv.
https://arxiv.org/abs/2307.00184