It’s essential to build the right foundation of trust, transparency, and well-being when implementing new technologies.
By Marisa Pereira
Artificial intelligence (AI) has moved far faster than most people expected. In just a few short years, it’s gone from emerging technology to embedded infrastructure across almost every department. From marketing and finance to operations and HR, AI is now a tool most employees use or interact with on a daily basis. But while the rollout of AI has been relentless, the same can’t always be said for the support structures around it.
McKinsey’s 2024 State of AI report found that over half of businesses are now using AI in at least one core function. Yet less than 40% have taken clear steps to prepare or support their teams through that shift. That disconnect has real consequences. If the rollout of AI isn’t backed by clear guidance and support, it can quickly add to confusion, workload pressure, and unease among teams.
Well-being at Risk
The link between technology and mental health isn’t new, but AI introduces a fresh set of challenges. Unlike previous digital tools, AI can often feel less transparent. Decisions made by algorithms aren’t always easy to trace. For employees, that can create anxiety around fairness, job security, and purpose.
According to Qualtrics’ 2025 workplace trends data, nearly half of employees say AI tools have changed the nature of their roles. A quarter say they feel more isolated at work because of how tasks are now distributed or managed. These aren’t just surface-level frustrations. They’re signs that the way AI is implemented is affecting the emotional fabric of the workplace.
It’s not only about job security either. As AI takes on more decision-making, people may feel their judgement or creativity is being undervalued. In some cases, it’s even led to disengagement where employees stop offering ideas or feel reluctant to challenge flawed AI-generated outcomes. This dynamic has real implications for team cohesion, innovation, and overall well-being.
Leadership Has a Responsibility
The answer isn’t to halt AI adoption but to rethink how it’s introduced and embedded. Organizations need to approach AI with the same care and clarity they would apply to any other major organizational change. That means communication, involvement, and a clear sense of purpose from the outset.
Leadership plays a crucial role in setting the tone. Employees take cues from how their managers and senior leaders talk about AI. If it’s presented as a way to cut costs or reduce headcount, it’s likely to trigger anxiety. If it’s framed as a support tool, one that enhances rather than replaces human skills, it’s far more likely to be embraced. That framing has to be backed by action. For Storyblok, integrating AI effectively requires more than a good tech stack. It means making sure people understand the “why,” not just the “how.”
Effective Ways to Approach It
Here are some of the key practices organizations can put in place to support people while adopting AI.
- Lead with clarity. For Storyblok, tools aren’t selected just because they’re new or trending. HR assesses whether they solve a real problem or reduce friction in someone’s role. Only then is the tool considered for rollout and implementation.
- Involve people early. Before introducing a new AI tool, Storyblok brings together employees from different teams to test it, challenge it, and flag issues. This provides practical feedback and helps build a sense of ownership.
- Respect the limits of automation. Not everything should be automated. The HR team makes deliberate choices about when human judgement is essential, especially in areas like hiring, performance reviews, or anything that touches the employee experience.
- Create space for learning. AI can be intimidating, especially if it changes the tools or processes people have relied on for years. Be sure to offer low-pressure training sessions, peer-to-peer learning opportunities, and time for people to experiment and get comfortable.
- Keep an eye out for overload. Productivity doesn’t always mean well-being. If AI systems are producing more work than they’re saving, or if they’re generating noise instead of clarity, step back and reassess. Digital efficiency should never come at the expense of mental clarity.
The Culture Behind the Tools
It’s easy to focus on the technology, but what matters most is the environment it operates in. A high-trust culture, where people feel heard, valued, and empowered to speak up, will always handle change better than one where communication is top-down or transactional.
Employees need to know that it’s okay to question AI-generated outputs. They should be encouraged to raise concerns, suggest improvements, and stay involved in decisions that affect their day-to-day work. Creating space for open feedback helps both the technology and the people using it develop together, and that’s what ultimately makes AI sustainable in a team setting.
Moving Forward, Together
AI is here to stay. It’s already changing the way people work, and its influence will only grow. But how it affects people depends entirely on how it’s implemented. Organizations that treat AI adoption as a purely technical project are missing the bigger picture. It’s a human transition as much as a digital one.
The companies that get it right will be those that keep their people involved, invest in upskilling, and make space for both adaptation and emotional resilience. AI can be a powerful ally, but only if the right foundations are in place and trust, transparency, and well-being are more than just talking points.
Marisa Pereira is VP of people and organization for Storyblok.