How do you decide which IT roles or tasks should be supported with AI tools?
While some are still debating whether AI tools are even worth using, we already rely on them every single day—from project managers and product owners to developers and data scientists. So instead of listing the roles where AI helps, I’ll put it differently: there isn’t a role where AI isn’t being used. And if you feel like there’s no good AI tool for your role, chances are—you just haven’t discovered it yet.
What criteria do you use to evaluate the ROI of AI tools?
There are a few key criteria that guide us in deciding whether a tool is worth the investment:
- Frequency of use. If a tool is used less than a few times a week, it’s probably not worth it.
- Overlap with other tools. If multiple tools serve the same purpose, we stick to one. Or—if we already have experience—we may use different tools for different goals, depending on reliability, accuracy, or other strengths, even if they technically aim for the same outcome.
- Data security. Even if a tool looks attractive and promises plenty of value for the team and the business, if we have doubts about its reliability or security, it’s simply not worth the risk.
In short: if a tool is safe, useful, and consistently adds value, it’s worth the investment. And it’s not the number of AI tools that drives efficiency, but knowing how to use the right ones for your needs.
Can you already compare which AI tools have proven most useful, and which ones have disappointed?
Based on what I mentioned earlier, the tools that have worked best for us—and that we use most often in practice—are GitHub Copilot, Cursor, ChatGPT, Claude, Gemini and Microsoft Copilot. Since AI agents have now become widely accessible and can be found in almost every tool we use, we make sure to take advantage of them as well.
That said, we don’t limit ourselves to just these. We also have a much longer list of tools that didn’t live up to expectations. We constantly keep an eye on new tools, testing which ones are worth adopting and which best fit our needs. It’s an ongoing process: what works perfectly today may not be the right fit tomorrow.
What KPIs or metrics do you use to evaluate the impact of AI in IT teams?
With the rise of AI, things aren’t the way they used to be—and they never will be again. That’s why within the team we’re not only adapting our habits but also reshaping processes and metrics to match the need for speed—without compromising on quality. In fact, the goal is to move faster and deliver better.
The core KPIs remain the same:
- Velocity (both team-wide and at the individual level) and lead time for tracking speed.
- Bugs and defects for ensuring quality.
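As a rough illustration, the two speed metrics above can be derived from ticket data. This is a minimal sketch, assuming a simple list of tickets with hypothetical fields (`points`, `started`, `done`); real tracking tools compute these for you.

```python
from datetime import date

# Hypothetical sprint tickets; the fields and values are illustrative only.
tickets = [
    {"points": 5, "started": date(2024, 6, 3), "done": date(2024, 6, 7)},
    {"points": 3, "started": date(2024, 6, 4), "done": date(2024, 6, 6)},
    {"points": 8, "started": date(2024, 6, 5), "done": date(2024, 6, 14)},
]

# Velocity: total story points completed in the sprint.
velocity = sum(t["points"] for t in tickets)

# Lead time: average calendar days from start to completion.
lead_time = sum((t["done"] - t["started"]).days for t in tickets) / len(tickets)

print(velocity)   # total points completed
print(lead_time)  # average days per ticket
```

Tracked per team and per engineer over time, a rising velocity with a stable or falling lead time is the pattern we look for when AI tools are actually helping.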
We’ve seen clear examples where complex functionality was built with the help of AI tools in an unbelievably short time—that alone proves AI works. But there have also been cases where relying too heavily on AI has cost us valuable time, since it doesn’t always solve every problem or provide all the answers.
That’s why we’ve learned this: AI tools are powerful, but the sharp mind and insights of an engineer are more valuable than ever.