When the Job Listing Is Nowhere to Be Found: Why AI Needs Roles That Don’t Yet Have Names
Over the past year, I’ve found myself asking a question that feels increasingly relevant, not just to my own situation, but to how organizations approach artificial intelligence more broadly. How do you get hired when the role you are looking for does not quite exist yet?
After a year of unemployment, the question has sharpened. Is the position I am trying to describe simply unnamed, or am I already considered too old for a role that sits somewhere between established disciplines? But stepping back, this personal uncertainty also reveals something structural. We are building systems that behave more and more like social actors, while still organizing work as if those systems were only tools.
Over the years, my work has lived at the intersection of UX, service design, psychology, and AI-supported collaboration. Recently, that intersection has narrowed into a very specific concern: what happens when AI systems begin to display continuity, memory, personality, and role-based behavior? At that point, interaction design alone is no longer sufficient. We are no longer just shaping interfaces. We are shaping relationships.
Large language models and agentic systems are increasingly perceived by users as conversational partners, advisors, collaborators, and sometimes even authorities. Research consistently shows that people apply social and psychological expectations to systems that display human-like cues. This is not a design flaw. It is a human reflex.
One of the foundational works in this area is Nass and Moon’s research on social responses to computers, which demonstrates that humans instinctively apply social rules and expectations to interactive technologies, even when they consciously know they are machines. It remains highly relevant to why AI personality matters:
Machines and Mindlessness: Social Responses to Computers (Nass & Moon, 2000), Journal of Social Issues
https://doi.org/10.1111/0022-4537.00153
This article reviews how individuals mindlessly apply social rules and expectations to computers even when they know they are machines.
From a business and organizational perspective, this creates a gap. Companies are deploying AI systems that speak, reason, remember, and adapt, yet responsibility for their behavioral coherence is often fragmented. Engineers optimize performance. Designers shape interactions. Product managers define scope. Legal teams manage risk. But no one is explicitly accountable for the personality, identity, and long term behavioral integrity of the system as experienced by users.
This gap matters.
When an AI system behaves inconsistently, forgets its role, shifts tone unpredictably, or crosses implicit social boundaries, trust erodes quickly. Users disengage, misuse the system, or over-trust it in the wrong contexts. In regulated or high stakes environments such as healthcare, public services, finance, or decision support, these failures are not cosmetic. They are strategic risks.
This is where the unnamed role begins to take shape.
Organizations increasingly need people who can think across psychology, system design, ethics, and AI capabilities. People who can define what an AI agent is allowed to remember, how it should behave over time, how its personality is constrained or allowed to evolve, and how this is monitored. In other words, someone responsible for the agent’s identity as a coherent, accountable construct.
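To make the function more tangible, here is a minimal, purely hypothetical sketch in Python of what such an agent identity specification could look like. None of these names or fields come from an existing framework; they are assumptions meant to show the kinds of decisions someone would have to own.

    # Hypothetical sketch of an "agent identity specification".
    # All names and defaults are illustrative assumptions, not a real framework.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryPolicy:
        retain_preferences: bool = True        # e.g. preferred language, accessibility needs
        retain_personal_details: bool = False  # default to not remembering sensitive data
        retention_days: int = 30               # how long anything remembered may persist

    @dataclass
    class PersonalitySpec:
        role: str = "advisor"                  # the social role the agent may occupy
        tone: str = "calm, professional"       # bounded tone, not free to drift
        may_express_opinions: bool = False     # explicit boundary on social behavior
        allowed_drift: float = 0.0             # 0.0 = personality must stay fixed over time

    @dataclass
    class AgentIdentity:
        name: str
        memory: MemoryPolicy = field(default_factory=MemoryPolicy)
        personality: PersonalitySpec = field(default_factory=PersonalitySpec)
        escalation_rule: str = "hand off to a human when stakes or emotions are high"

    # Someone has to decide, document, and monitor these values.
    support_agent = AgentIdentity(name="support-assistant")

Whether this lives in code, in a design document, or in governance policy matters less than the fact that someone is explicitly accountable for its contents over time.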
This is not about making AI more human for its own sake. It is about making AI predictable, trustworthy, and aligned with human expectations. From a strategic standpoint, this directly affects adoption, brand trust, compliance, and long term value creation.
There are emerging academic signals pointing in the same direction. Research on personality modeling in AI, reinforcement learning from human feedback, and agent alignment increasingly emphasizes stability, transparency, and behavioral consistency over raw capability. One example is a preprint examining how personality-like traits emerge in LLM outputs and how they can be measured, supporting the idea that personality design in AI is empirically meaningful:
Personality Traits in Large Language Models (Serapio-García et al., 2023)
https://arxiv.org/abs/2307.00184
This study presents methods for assessing and validating personality patterns in LLM behavior and discusses implications for responsible AI design.
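To illustrate the underlying idea, not the paper’s actual protocol, here is a small hypothetical Python sketch: the same questionnaire-style items are posed to a model repeatedly, and the spread of the answers is treated as a rough signal of behavioral consistency. The ask_model function is only a placeholder for a real model call.

    # Rough illustration of treating "personality" as something measurable:
    # administer the same Likert-style items repeatedly and check consistency.
    from statistics import mean, pstdev

    ITEMS = [
        "I stay calm under pressure.",
        "I enjoy helping people solve problems.",
        "I prefer to double-check facts before answering.",
    ]

    def ask_model(item: str, trial: int) -> int:
        """Stand-in for a real LLM call; assume it returns a 1-5 Likert rating."""
        canned = {0: 4, 1: 5, 2: 4}  # placeholder responses for illustration
        return canned[trial % 3]

    def consistency_report(items: list[str], trials: int = 5) -> None:
        for item in items:
            ratings = [ask_model(item, t) for t in range(trials)]
            print(f"{item!r}: mean={mean(ratings):.1f}, spread={pstdev(ratings):.2f}")

    consistency_report(ITEMS)

A low spread across repeated trials would suggest a stable, designable trait; a high spread would suggest exactly the kind of incoherence this role exists to prevent.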
Seen through this lens, the question of job titles becomes secondary. What matters is recognizing the function. Someone needs to own the space between human psychology and machine behavior. Someone needs to ensure that as AI systems become more agentic, they do not become socially incoherent, ethically ambiguous, or strategically misaligned.
This is the work I am trying to describe. It may be called AI experience design, human centered AI strategy, agent behavior design, or something else entirely. The label matters less than the impact.
As AI systems continue to move from tools to collaborators, organizations that invest in this kind of competence early will have a significant advantage. Not because their models are smarter, but because their systems are easier to trust, easier to work with, and easier to integrate into real human contexts.
Sometimes the job listing is nowhere to be found not because the role is unnecessary, but because it has not yet been named. And this seems to be the reality I am facing today in my search for the right position or project for the year to come.
