Are you 'Forming', 'Storming', 'Norming', or 'Performing' when it comes to AI usage in your team?
Chirag Harendra
Published on 27 February 2026
16 min read
An illustration of the different stages of Tuckman's model in the Upscale style.

How is your team really using AI? Explore Forming, Storming, Norming and Performing to understand your team’s AI adoption journey.

'Oh no, not yet another blog about AI’, I hear you mutter to yourself. Yes, guilty as charged. I still strongly believe that blog writing should be done, in what now feels like the old-school way, by humans. For me personally, the process is therapeutic, and helps me to think out loud as I'm writing. That's probably worth another blog in itself, but I'll save that for another day.
Now, onto why I'm actually writing this.
When I first started experimenting with ChatGPT, I had one of those quiet 'oh' moments. 'The cat is fully out of the bag now', I thought to myself. Up until then, AI had felt interesting but slightly abstract to me until, suddenly, it wasn't. The question stopped being whether this stuff would show up in our day‑to‑day work; that much seemed obvious. Instead, I began to wonder, how do we use this in our team in a way that encourages experimentation without creating unnecessary friction or confusion?
That's when Bruce Tuckman's Forming, Storming, Norming, Performing model came back to me. If you've been in my team, you'll know that I'm a big advocate of it. Tuckman's tool is a fantastic way to help team members understand and prepare for change in the workplace and within their team.
If you've come across Tuckman's model before, this will be familiar. If not, here's a quick overview:
  1. Forming: the team is newly assembled, or new members have just joined.
  2. Storming: the niceties are over, and the metaphorical elbow-shoving begins.
  3. Norming: strengths and weaknesses become clear and, over time, the team finds a way to work together.
  4. Performing: trust is at its peak, everyone's contributions are clear, and great results become possible.
  5. Adjourning: the project is complete, and team members move on.
So, as I reflected on those early AI experiments, I started to wonder: what if we treated an LLM like a new addition to the team?
After all, it brings the same mix of anxiety, excitement, nervousness, and dynamism as any new joiner. The difference is there's no interview process. No formal introduction. One day it's a curiosity; the next, it's sitting in the middle of your workflow.
Seeing AI through that lens changed everything for me. Instead of a technical rollout, it became a team development journey.
Let's explore what this looks like when the 'new joiner' is AI.

Forming: getting to know AI as your new team member

In the early days, everything still feels new, and confidence levels vary. People are figuring things out quietly.
That's exactly how it felt when AI began showing up in our workflows.
Fortunately, in my team, there were already a few early adopters who were well-versed in working with LLMs. The next stage was to make sure the rest of the group was comfortable and getting hands-on experience using them in their everyday marketing tasks too.
I sought a training programme that offered a no-frills guide to using AI tools for marketing, and ended up taking a practical, interactive course from MMC Learning (for anyone who's interested in learning more for themselves).
Before asking my team to do the training, it was important to me that I did it first, for my own knowledge and development as well. I didn't want to ask the team to spend time on something I hadn't taken seriously myself, and I wanted to understand the limitations as well as the potential before encouraging wider use. Doing it first also meant I could answer questions honestly, based on my own experiences and learnings, rather than theory.
I was excited to see how a marketing-focused course would teach me to make the most of the AI tools at our disposal. Before doing the course, I thought I knew what AI and LLMs were capable of. How wrong I was: suddenly, a plethora of new possibilities was revealed before my very eyes. What surprised me most wasn't just how much they could do, but how easily misunderstandings could creep in if you didn't pause, question, and test what was being produced. It made me realise how much context, nuance, and judgement still matter.
And we're not alone in that gap between usage and guidance. In a recent workplace survey, roughly three-quarters of people reported using AI regularly at work, but only about a third had ever had any formal training on how to use it well. That mismatch feels familiar: lots of activity, but not much clarity on how to apply it confidently and responsibly.
When AI enters the picture, it doesn't really feel any different. There's curiosity, hesitation, and a lot of quiet experimentation happening in the background. At this stage, the goal isn't speed or output; it's familiarity: giving people the space to explore, make mistakes, and build a basic understanding of how AI might support their work, without anyone feeling rushed or judged.

Storming: when new tech shakes things up

As our team moved past the initial Forming stage with AI, what came next felt familiar in the oddest way: it felt a lot like Storming. Once the novelty had worn off and people got their hands dirty with real tasks, the differences in how we saw AI started to show.
Some folks loved it straight away and were trying to see how it could fit into every process they could think of. Others asked thoughtful questions about when it made sense to use it and when it didn't. There were some who felt uncomfortable leaning on something that occasionally gave confidently wrong answers. It reminded me of those early days in a new team when people are all figuring out how much they trust each other, what the unwritten rules are, and where responsibilities really lie.
For myself, I found it incredibly difficult to create social media posts that felt authentic and weren't too 'same same'. Yes, you know the ones on LinkedIn: the usual culprits that start with a shocking question or one-liner to grab your attention, then follow with an emoji-bulleted list telling you why you're wrong. For content production, I strongly believe in a human-first approach, as opposed to AI-first. After battling multiple times with AI to 'make this post sound more human', I gave up.
There was this moment, early in a group session, where someone jokingly said, "Who's the AI whisperer now?". At first it was funny, but it actually flagged something important: a shift in dynamics. For some, AI was already a tool they reached for first; for others, it was something to be questioned and checked before anything else. That's classic Storming energy: different approaches bumping up against each other, old habits meeting new tech.
In that stage, leadership isn't about telling everyone what to do. It's about listening, acknowledging concerns, and helping people make sense of the experience. With AI, that meant talking openly about what it is good at, and just as importantly, what it isn't. Sometimes that was about accuracy or nuance. Other times, it was about context or ethics, like deciding when an idea generated by a model needs human thought layered on top.
What mattered most to me though, was creating space for curiosity, for mistakes, and for the little moments where a team member would pause, question, and discover something new about the tools they were using (and perhaps, more importantly, about themselves).
One real-world illustration from this phase came from some recent website improvements our team had been working on. Rather than describing a proposed module change in words, or asking a designer or developer to spend time creating a full mock-up, they used AI tools to quickly generate realistic, responsive prototypes in minutes. This helped validate whether an idea was worth progressing and gave the team something tangible to respond to. Conversations shifted from abstract feedback to practical iteration, making small website updates faster and more productive.

Conversations shifted from abstract feedback to practical iteration, making small website updates faster and more productive.
Georges Petrequin
Content Marketing Manager

Norming: finding our groove

Gradually, that awkwardness gave way to something a bit steadier: Norming. People began to figure out when AI genuinely saved time and when it needed rigorous review. We started developing an internal language around it: how we'd prompt it, how we'd check its outputs, and how we'd blend its suggestions with our own judgement.
Another practical example from our wider team illustrates this perfectly. Instead of manually creating detailed demo projects in Jira to mimic real-world use cases (complete with pre-populated work items), AI was used to generate a realistic project foundation in a fraction of the time. From there, the team could layer in a bolt-on app and demonstrate capability in a meaningful context. This saved hours of setup and let the team focus on applying and testing AI in ways that actually added value.
This really felt like how a team settles into roles when a new person joins. At first, you're polite, then you might argue a bit, then you figure out who's good at what.
That shift didn't happen overnight, and it wasn't always smooth, but the change was visible in how conversations started to sound less like "Is this right?" and more like "What's the best way to use this here?".
We reached a point where we could say, "This is where it helps, but this bit still needs our thinking", and other people would nod because they'd seen the same thing. That felt like the team genuinely learning with the tool, not against it.
My own 'aha, this is actually quite good' moment came when I was conducting some market research on app marketplace ecosystems. I found a tool called Manus incredibly mind-blowing. I fed it some instructions to research the HubSpot app marketplace, and it pretty much started talking back to me as if I were speaking to an enthusiastic graduate looking to make a stellar impression. After answering some of its follow-up questions, I let it conduct the research for 45 minutes, and it produced several documents with detailed statistics, graphs, and explanations about the HubSpot app marketplace, also comparing it to other app marketplaces such as Atlassian's and monday.com's. I was astounded by how detailed some of the deliverables were; a human would have taken weeks, if not months, to produce the same set of deliverables.

We saved hours of setup and let the team focus on applying and testing AI in ways that actually added value.

Jay Prakash
Senior Product Marketing Manager
In practice, this looked like people sharing examples of where AI-generated work landed well and where it needed reworking, not as bragging rights but as shared learning. One day, someone said, "It nailed the structure, but I needed to refine the logic." Another person replied, "I had the opposite: great insight, but the framing was off." These conversations weren't about defending a position; they were about collaborating on how we work. That's what Norming feels like.
One member of our team, whom I'd label the 'AI Whisperer' myself, created a tool that lets us generate UTM hyperlinks based on our internal campaign naming scheme. We no longer have to manually trawl through an Excel file to build them, saving our team a good 15-20 minutes each time a new UTM hyperlink is needed.
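For anyone curious what a tool like that boils down to, here's a minimal sketch of a UTM link builder in Python. The `utm_*` parameter names are the standard analytics convention; the campaign naming scheme in the example is hypothetical, not our actual internal one.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def build_utm_url(base_url, source, medium, campaign, content=None, term=None):
    """Append UTM parameters to a URL, preserving any existing query string."""
    params = {
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    }
    if content:
        params["utm_content"] = content
    if term:
        params["utm_term"] = term
    scheme, netloc, path, query, fragment = urlsplit(base_url)
    # Keep any query parameters already on the URL, then append the UTM ones
    query = "&".join(filter(None, [query, urlencode(params)]))
    return urlunsplit((scheme, netloc, path, query, fragment))

# Example with a hypothetical campaign name following a year-quarter_topic scheme
print(build_utm_url(
    "https://example.com/blog/ai-adoption",
    source="linkedin", medium="social",
    campaign="2026-q1_ai-adoption",
))
```

The real tool enforces a naming scheme on top of this (so every campaign label is consistent), but the mechanics are just that: assemble the parameters once, correctly, instead of hand-editing a spreadsheet.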

Performing: confident with the AI mix

Eventually, you start to see signs of Performing. This isn't about AI doing everything for you, because that's missing the whole point. It's about people using it with confidence, not because it's perfect, but because they know how to make it better. They know when to challenge it, when to lean on it, and when to walk away and do something themselves.
Again, think of that new team member who, after a few months, isn't someone you're figuring out any more, they're someone you rely on. You know their strengths, you know their quirks, and you know what they need from you to do their best work.
Research suggests that building trust with a new colleague can take several months of shared work, consistent interactions, and proving reliability. With AI, the same principle applies: you need time to experiment, test, and iterate before you can confidently integrate a tool into your workflow. It's not about letting the tool take over; it's about letting people use it intentionally, so their individual (and more importantly, human) skills can amplify what the technology can do.
That's when good things start to happen. Brainstorming might start with an AI‑generated draft, but the ideas, insights, and strategic direction always come from the team. A marketing brief might get a first pass from a model, but the real value comes when humans apply judgment and context to make it sing.
I find myself regularly using AI to help with more personal matters, such as personal finance: comparing rates, packages, and providers. It gives me hints and tips I hadn't come across before, and where I once had to fact-check everything, I now trust it more often than not (I'd still do some fact-checking just to be sure, as nobody wants to make personal finance decisions based solely on AI recommendations now, do we?).
For work, where I see real value is in helping us create templates and streamlined processes so my team can get their work done more efficiently and effectively. It's not perfect, and I don't think it ever will be, because AI tools are evolving so swiftly while we humans try to keep up with the latest and greatest. There's always another new feature or tool introduced every week, which can be overwhelming even for the most AI-literate of us, let alone beginners.

Reflections on the journey

So if your team feels like it's still figuring things out, that's okay. It doesn't mean you're behind; it probably means you're in the middle of the journey, and that's where real learning tends to happen. The goal isn't to race to some neat "finished" state. It's to get comfortable in the messy middle long enough to understand where the tools genuinely help, and where your own judgement and experience matter most.
And here's the thing: with AI, the landscape doesn't stand still. Just when you feel like you've reached Performing with one tool, something new comes along. A new model. A new internal bot. A new way of working. Suddenly, you're Forming and Storming again.
I don't think we'll ever sit permanently in a Performing phase when it comes to AI. And maybe that's not the point. Maybe the point is building a team that's confident enough to move between those stages without losing momentum or trust.
At the end of the day, the most successful teams I've seen aren't the ones who adopted AI first or fastest. They're the ones who treated it like any other evolving part of the team: with curiosity, respect, and enough flexibility to keep adapting as the tools change.
Written by
Chirag Harendra
Head of Marketing
A self-professed geek, Chirag oversees all touchpoints where we communicate and interact with our customers. A believer in silver linings, he's all about positivity—you'll always find his glass half full!