Are chatbots the best way to apply large language models?
Every tech company in the world now has an AI strategy. Most of them can be summarised as “let’s build a co-pilot”, where ‘co-pilot’ is technospeak for ‘chatbot’. Many of these chatbots ingest the data and knowledge you’ve ploughed into the product simply through everyday use. Some platforms use your data for training, and some use it for live retrieval and search. But for most, the end result is still just a chatbot: helpful, but not revolutionary.
Some companies are creating ‘agents’ to perform specific tasks related to the company’s product. Atlassian has Rovo chat, an interface for chatting with out-of-the-box and custom agents that can do things like organize a backlog and craft comms. ServiceNow goes a little further with its ServiceNow AI Agent Orchestrator, which enables inter-agent communication and coordination, also through an in-product chat interface.
These are great innovations that are going to improve productivity and help people get specific stuff done better and faster. But we believe there is an even better model: the AI teammate. No, we haven’t been smoking anything while writing this.
Chatbot vs agent vs AI teammate
Chatbot
A chatbot is built around prompt-and-response. It’s always waiting—passive until you say something. Under the hood, it’s running on predefined flows or prompt templates, sometimes supported by a bit of prompt engineering to make the interactions smoother. Some will suggest options to guide you—buttons, quick replies, or canned follow-ups—but that’s less intelligence and more guardrails.
The result is a system that can be helpful if you already know what to ask and when to ask it. But the burden’s on the user. If you don’t phrase your question just right, or you don’t know what’s possible, the chatbot doesn’t help you figure it out. That limits the complexity of what you can actually achieve. It’s reactive and bounded by its script or prompt scaffolding.
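To make “prompt scaffolding” concrete, here’s a minimal sketch of that loop in Python. Everything in it (the template, the quick replies, the stubbed model call and retrieval step) is a hypothetical stand-in, not any particular product’s implementation.

```python
# Minimal sketch of the prompt-and-response pattern described above.
# call_llm is a stand-in for whichever completion API you actually use,
# and the template, quick replies, and docs are all hypothetical.
from typing import List

PROMPT_TEMPLATE = (
    "You are a support assistant for AcmeApp.\n"
    "Answer using only the documentation below.\n\n"
    "Documentation:\n{context}\n\nQuestion: {question}\nAnswer:"
)

QUICK_REPLIES: List[str] = [
    "How do I reset my password?",
    "Where can I find my invoices?",
]

def call_llm(prompt: str) -> str:
    # Swap in a real model call here; a canned reply keeps the sketch runnable.
    return "You can reset your password from Settings > Account."

def retrieve_docs(question: str) -> str:
    # Stand-in for retrieval over whatever the product already knows about you.
    return "Passwords are managed under Settings > Account."

def answer(question: str) -> str:
    # The whole system is reactive: one prompt in, one response out.
    prompt = PROMPT_TEMPLATE.format(context=retrieve_docs(question), question=question)
    return call_llm(prompt)

if __name__ == "__main__":
    print("Try one of:", QUICK_REPLIES)   # the "guardrails" mentioned above
    print(answer(input("> ")))
```

Note that nothing happens until the user types something, and nothing carries over between turns beyond whatever you stuff into the prompt. That is the ceiling of the pattern.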
Agent
An agent is a step toward autonomy. It’s still reactive, but once activated, it can take meaningful action: triggering workflows, calling APIs, updating systems, even chaining tasks together. The key difference is that it operates with a goal in mind—usually one defined by a human ahead of time. You give it a command, and it moves toward the outcome, often by reasoning step-by-step or invoking tools as needed.
There are different types of agents—task-based agents, retrieval-augmented agents, planning agents. The more advanced ones use frameworks like ReAct or AutoGPT-style planning loops to decide what to do next. State of the art in this space involves agents that can operate across tools, manage memory, and adjust mid-task if something changes. But even at their best, they’re still executing someone else’s priorities. They need prompting. They don’t understand broader context unless you explicitly feed it to them.
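As a rough sketch of what “reasoning step-by-step or invoking tools as needed” looks like in code, here is a stripped-down ReAct-style loop. The tool names and the stubbed model call are hypothetical; real frameworks add planning, memory, and error handling on top of this.

```python
# Stripped-down ReAct-style loop: ask the model for the next action, run the
# chosen tool, feed the observation back in, and repeat until it says "finish".
# Tool names and the stubbed call_llm are hypothetical placeholders.
import json

def call_llm(transcript: str) -> str:
    # Swap in a real model call; the stub simply declares the goal done.
    return json.dumps({"action": "finish", "input": "Backlog triaged."})

TOOLS = {
    "search_tickets": lambda q: f"3 open tickets match '{q}'",
    "update_ticket": lambda args: f"updated ticket: {args}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        decision = json.loads(call_llm(transcript))
        if decision["action"] == "finish":
            return decision["input"]
        # Invoke the chosen tool and add the observation to the transcript.
        observation = TOOLS[decision["action"]](decision["input"])
        transcript += f"Action: {decision}\nObservation: {observation}\n"
    return "Stopped after max_steps without finishing."

print(run_agent("Triage the new bug reports"))
```

The important detail is the first line of `run_agent`: the goal comes from outside. The agent can be clever about how it gets there, but someone still has to hand it the destination.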
AI teammate
An AI teammate is modeled after a human collaborator—not just someone who completes tasks, but someone who participates meaningfully in the work. It doesn’t sit idle waiting for prompts. It pays attention to what’s happening across tools, conversations, and systems, and figures out when to jump in and how to add value.
What makes it different from an agent is that it doesn’t rely on predefined goals or priorities. Instead, it builds its own understanding of what the team is trying to achieve by interpreting the surrounding context. It can raise issues, propose next steps, or shift attention—all without needing to be told what to do.
Under the hood, an AI teammate is likely a multi-agent system: one agent might track team objectives, another might handle planning, another might handle specific domains like code, content, or project management. What you see is often a chatbot-like interface, but what’s behind it is a collection of coordinated agents working across the same systems a human would—Slack, Notion, GitHub, Jira, email, you name it.
The result is something that doesn’t just execute instructions—it collaborates. It reasons about trade-offs, fills in gaps, and pushes work forward. It’s a system with autonomy and awareness—still early, still imperfect, but fundamentally a different way of thinking about AI in the workplace.
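A deliberately simplified sketch of that multi-agent shape, assuming an event-driven design: one coordinating layer watches events from the team’s tools and routes them to specialised agents. Every class, event, and agent name here is hypothetical, not a description of any shipping system.

```python
# Simplified sketch of the multi-agent shape described above: a coordinator
# watches events from the team's tools and routes them to specialised agents.
# All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    source: str    # e.g. "slack", "github", "jira"
    kind: str      # e.g. "pr_opened", "ticket_blocked"
    payload: dict = field(default_factory=dict)

class Coordinator:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[Event], None]]] = {}

    def register(self, kind: str, handler: Callable[[Event], None]) -> None:
        self._handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event: Event) -> None:
        # A real teammate also decides *whether* to act, not just which agent acts.
        for handler in self._handlers.get(event.kind, []):
            handler(event)

def planning_agent(event: Event) -> None:
    print(f"[planning] re-checking sprint scope after {event.kind}")

def code_agent(event: Event) -> None:
    print(f"[code] reviewing {event.payload.get('pr', 'a new PR')}")

coordinator = Coordinator()
coordinator.register("ticket_blocked", planning_agent)
coordinator.register("pr_opened", code_agent)
coordinator.dispatch(Event(source="github", kind="pr_opened", payload={"pr": "#42"}))
```

The dispatch step is where the “teammate” part lives: instead of waiting for a prompt, the system reacts to what is already happening in the team’s tools.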
Humanize everything
If you want to build an AI teammate, not just another chatbot, you have to humanize it—relentlessly. Not just at the surface level, but all the way through the design and decision-making. Otherwise, you default back to chatbot thinking: prompt-response loops, rigid flows, robotic behavior.
Humanizing is THE core design principle. It forces you to ask better questions.
Give it a name. Give it a face. Give it a point of view. Not because branding is cute, but because teammates have identities. They’re not interchangeable. They bring context, memory, and tone. If you want people to relate to the system like a teammate, you have to give them something to relate to.
Give it a gender—if that helps you be intentional about how it shows up in a team. Or don’t. But make a choice; don’t leave it in the uncanny valley of neutrality. Teammates have personalities. Chatbots have templates.
Run it through performance reviews. Seriously. What’s it doing well? Where is it falling short? What feedback should it act on? That’s how you start thinking about growth, responsibility, ownership—not just metrics.
Talk about skills, not integrations. Does it know how to manage a sprint? Review a pull request? Facilitate a discussion? That mindset shift unlocks actual capability development. You stop wiring up tools and start building behavior.
Talk about tasks, not workflows. Workflows are for systems. Tasks are for people. If you say “it should own the handover from design to engineering,” suddenly you’re imagining how a person would do that. That’s the right bar.
Some examples from our internal conversations:
How do you charge for it?
Someone said “maybe per question.” That’s chatbot thinking. Teammates aren’t paid per sentence—they’re paid for contribution, for presence, for impact.
How should it greet you?
It used to ask, “How can I help?”
But no human teammate says that every time they walk into a room. It sounds like a hotel concierge, not a collaborator. We changed it. Teammates find ways to help; they don’t just sit there asking how.
When you start humanizing, things click. People on the team start saying things like, “Can we loop it in here?” or “Let’s hand that to it and focus on this.” You hear them giving feedback, trusting it, challenging it. They stop treating it like a tool—and start treating it like a peer. That’s when you know it’s working.
Because in the end, you’re not just building software. You’re introducing a new kind of team member. And that raises a better question:
What actually makes someone a great teammate—especially a senior one?
Characteristics that make a great (AI) teammate
If you’re serious about building an AI teammate—not just a clever tool—you need to hold it to the same standard you’d expect from a great human teammate. That doesn’t mean pretending it’s human. It means designing for the qualities that actually make someone useful, reliable, and enjoyable to work with. Here’s what that looks like:
Personality
A great teammate brings their own flavor to the team. Not just functional, but human in tone and behavior. An AI with a clear voice is easier to trust, easier to collaborate with, and more likely to be used consistently.
Example: When it summarizes a meeting, does it sound like a court reporter—or like someone who gets what matters and can cut through the noise?
The best ones have a tone that matches the culture they’re in—direct, warm, dry, analytical—whatever fits. Personality isn’t fluff. It’s interface.
Autonomous
It shouldn’t wait around for permission. A good teammate knows what they’re responsible for and acts on it.
Example: When a new task is dropped onto the project board, the AI should recognize whether it belongs to its domain and, if it does, pick it up and update the team on progress—without being asked.
Autonomy doesn’t mean doing everything alone—it means not needing hand-holding.
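Purely as an illustration, here is a tiny sketch of that behaviour, assuming a board exposed as plain Python objects. The labels and the “ai-teammate” assignee are made up for the example.

```python
# Tiny sketch of picking up relevant work without being asked. The board,
# labels, and "ai-teammate" assignee are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Set

DOMAIN_LABELS: Set[str] = {"security", "cloud", "iam"}   # what this teammate owns

@dataclass
class BoardTask:
    title: str
    labels: List[str]
    assignee: Optional[str] = None

def claim_relevant(board: List[BoardTask]) -> List[BoardTask]:
    claimed = []
    for task in board:
        # Unassigned and inside our domain: pick it up.
        if task.assignee is None and DOMAIN_LABELS & set(task.labels):
            task.assignee = "ai-teammate"
            claimed.append(task)
    return claimed

board = [
    BoardTask("Rotate stale IAM keys", ["security", "iam"]),
    BoardTask("Update onboarding docs", ["docs"]),
]
for task in claim_relevant(board):
    print(f"Picked up '{task.title}'; will post progress in the project channel.")
```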
Proactive
Great teammates don’t just wait to be told what to do—they notice, anticipate, and jump in.
Example: If two projects are drifting out of sync, the AI should flag the dependency and suggest a sync. If someone’s blocked, it should offer help or escalate. If something’s missing, it should say so.
Being proactive is what makes the AI feel like part of the team, not just another checkbox in the tooling.
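One way to picture the “drifting out of sync” check, as a sketch with made-up tasks and a made-up tolerance:

```python
# Sketch of one proactive check: flag dependent work whose dates have drifted
# apart, instead of waiting to be asked. Tasks and tolerance are made up.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Task:
    name: str
    due: date
    depends_on: Optional["Task"] = None

def drift_warnings(tasks: List[Task], tolerance: timedelta = timedelta(days=3)) -> List[str]:
    warnings = []
    for task in tasks:
        dep = task.depends_on
        # If the work we depend on lands well after our own due date, speak up.
        if dep and dep.due > task.due + tolerance:
            warnings.append(
                f"'{task.name}' is due {task.due} but depends on '{dep.name}', "
                f"which lands {dep.due}. Suggest a sync between the two teams."
            )
    return warnings

api = Task("Ship payments API", due=date(2025, 7, 1))
ui = Task("Ship payments UI", due=date(2025, 6, 20), depends_on=api)
print(drift_warnings([ui, api]))
```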
Determined
Resilience matters. A teammate doesn’t drop the ball when something fails—they try again, ask for help, find another way.
Example: If it fails to access a system or API, it doesn’t just crash silently. It tells you what went wrong, tries a fallback, or sets a reminder to retry.
Determination means it’s built to follow through, not flake out.
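A small sketch of that “explain, fall back, retry” behaviour; the failing security API and every function name here are hypothetical stand-ins.

```python
# Sketch of "don't fail silently": retry with backoff, fall back to cached data,
# and keep the team informed throughout. All function names are hypothetical.
import time
from typing import List

def fetch_live_findings() -> List[str]:
    raise ConnectionError("security API unreachable")   # simulate an outage

def fetch_cached_findings() -> List[str]:
    return ["(cached) open finding: public S3 bucket"]  # stale, but not nothing

def notify_team(message: str) -> None:
    print(f"[to #cloud-security] {message}")

def get_findings(retries: int = 3, delay: float = 1.0) -> List[str]:
    for attempt in range(1, retries + 1):
        try:
            return fetch_live_findings()
        except ConnectionError as exc:
            # Say what went wrong instead of swallowing it.
            notify_team(f"Attempt {attempt}/{retries} failed: {exc}. Retrying...")
            time.sleep(delay * attempt)
    notify_team("Live data unavailable; using cached findings and will retry later.")
    return fetch_cached_findings()

print(get_findings())
```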
Communicative
It should speak up when it has something to say—and stay out of the way when it doesn’t. It should be transparent about what it’s doing, and clear when it needs input.
Example: If it makes a change, it explains why. If it’s waiting on someone, it lets them know. If it’s not sure, it asks.
No silent errors. No cryptic actions. Communication is clarity.
Works where you work
A great teammate doesn’t drag you into its world—it shows up in yours. That means embedding in the tools and workflows your team already uses.
Example: It comments on your GitHub PRs. It joins Slack threads. It updates Notion or Jira without needing a special UI. It doesn’t require you to switch contexts to collaborate.
This isn’t about integrations—it’s about presence. If it’s not there when the work is happening, it’s not a teammate.
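As a sketch of what “presence” looks like in practice, here is commenting on a pull request and posting to Slack using GitHub’s REST API and a Slack incoming webhook. The repo name, PR number, token, and webhook URL are placeholders you would supply yourself.

```python
# Sketch of showing up where the work happens: comment on a PR and post to
# Slack rather than asking anyone to open a separate UI. The repo name,
# PR number, token, and webhook URL are placeholders.
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def comment_on_pr(repo: str, pr_number: int, body: str) -> None:
    # Pull request comments go through the issues endpoint of GitHub's REST API.
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": body},
        timeout=10,
    )
    resp.raise_for_status()

def post_to_slack(text: str) -> None:
    # Incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10).raise_for_status()

comment_on_pr("acme/payments", 128, "Missing null check in the retry path; details below.")
post_to_slack("Left review notes on acme/payments#128 and flagged one blocker.")
```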
Meet Pleri
We’re not just talking theory—we’re building this thing for real. Her name is Pleri, and she’s our first AI teammate: a cloud security engineer who never gets tired, never misses context, and never forgets what matters.
She’s an active participant in your cloud engineering team—tracking posture, spotting issues, investigating root causes, recommending fixes, opening pull requests, and following through until the job is done. She works across the tools your team already lives in—Slack, GitHub, Jira, Confluence—and speaks in a tone that’s clear, concise, and never condescending.
She’s designed to be the kind of teammate your staff engineer would trust—and the one your overwhelmed security team would actually lean on. The kind who takes ownership, surfaces what matters, and never drops the ball.
We’re ridiculously excited about what she can already do—and even more excited about what’s next.
If you’re curious, skeptical, excited, or just want to throw some feedback our way—get in touch.
We’re looking for design partners, test pilots, friendly critics, and even loud skeptics.
Pleri’s growing fast. We’d love your help shaping who she becomes.