
May 19, 2025 9:42:30 AM | 12 min read
The automotive industry is no stranger to artificial intelligence (AI). Now that AI is accelerating at a breathtaking pace, the possibilities are even more compelling. The automotive AI market is expected to reach $35.71 billion by 2033 (Data Forest), transforming the sector with its potential in decision-making, real-time diagnostics, and more.
Volvo Cars is at the forefront of this revolution, integrating AI to redefine automotive excellence. Michael Wallgren Fjellander, Advanced Engineering Lead at Volvo Cars, shares how the organization strategically identified and implemented AI solutions, overcame challenges, and drove innovation in vehicle design and manufacturing, customer experience, and business strategies.
Want more insights? Watch the virtual insights session, AI-Driven Innovation: How Volvo is Shaping the Future of the Automotive Industry.
A lot of things are called AI, so it spans many domains at different levels. We’re working with everything from simple machine learning and clever algorithms up to generative AI and building foundation models. I would say the major focus at Volvo currently is on the process and development side of things, like trying to leverage AI for automated engineering or finding ways to bring down costs and unlock new potential.
Large language models in particular present a scope previously unheard of. For example, we have tons of logs and repair texts, as well as different notes by mechanics or lab technicians, that have been sitting in a hidden archive somewhere. Someone may read them at some point, but now we can unlock all our internal logs and documents at scale and use the insights derived to condense knowledge on an organizational level.
I’m surprised at how fast things are improving. I remember in mid-2021, we started researching generative AI and LLMs out of curiosity. Quite quickly, we started using these technologies to augment certain things. Now across the company, we see generative AI being used in almost every domain. It’s impressive and surprising how much you can do.
On the other hand, there’s a whole science behind applying new technologies and capabilities to processes and workflows. It takes a long time to really unlock the usefulness. Sometimes it does not take off because perhaps it was the correct tool but applied in the wrong way. We’re all still humans working with a new tool, and it’s a process both on the organizational and individual level to learn how to work with this and adapt to it.
I often think of the digitalization revolution of computers. In the early 70s and 80s, companies thought computers were amazing and proceeded to buy a massive number of them. But they didn’t see any immediate benefits. People still operated on the old paradigm of papers and were using computers as a replacement for papers instead of leveraging the full capabilities of the digital systems.
I think a similar thing is happening here.
Many processes are still stuck in a pre-AI paradigm and we’re still thinking that way too. We don’t really know how to use generative AI, and we haven’t adapted the way we work with it.
It’s shifting towards being a frontrunner with AI. It’s about becoming the best in having AI capabilities and leveraging AI. Still, I think there’s also something to be said about using it sensibly and responsibly. Safety is, of course, the bread and butter of Volvo. It’s what we do, it’s what we breathe, it’s what we talk about. All these AI products we see being launched can sometimes be gimmicky. They may not provide actual value to the end consumer in the car or even improve car safety. When we’re launching anything, it’s important that it’s not AI for AI’s sake but that it’s human-centered and pro-safety.
It’s probably why we see most of the immediate gains and values of AI being in the organization and development side of things, leveraging it internally to produce better products and cars. It’s about creating a safe environment.
In Volvo, we have a structure built around supporting and accelerating AI or generative AI, but also reviewing it from a safety standpoint, both for the products themselves and legally. Last I checked, we have around 200 to 250 ongoing initiatives relating to generative AI. It’s popping up across the organization.
I think the fastest gain we’ve unlocked is RAG systems. RAG is a way to ground your LLM in your domain knowledge. For example, you may have a manual for manufacturing on standards to adhere to. We can build systems that grant an AI agent access to these documents and serve as an interface between humans and this vast information landscape, helping them find correct information quickly and understand whether they are doing things the right way.
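The RAG pattern described here can be sketched in a few lines. This is a minimal, dependency-free illustration, not Volvo’s implementation: the document store, document names, and the word-overlap retrieval (standing in for real vector-embedding search) are all assumptions for the example.

```python
# Minimal RAG sketch: retrieve the most relevant document, then build a
# prompt that pins the LLM's answer to that retrieved passage.
from collections import Counter

# Hypothetical in-house document store (real systems index thousands of docs).
DOCS = {
    "welding-standard": "Weld seams on the chassis must be inspected per standard VCS-1234.",
    "paint-process": "Paint booth humidity must stay between 40 and 60 percent.",
}

def retrieve(query: str, docs: dict) -> str:
    """Return the id of the document sharing the most words with the query.
    A production system would use vector embeddings instead of word overlap."""
    q = Counter(query.lower().split())
    def overlap(text):
        return sum((q & Counter(text.lower().split())).values())
    return max(docs, key=lambda d: overlap(docs[d]))

def grounded_prompt(query: str, docs: dict) -> str:
    """Build a prompt that grounds the model in the retrieved excerpt."""
    doc_id = retrieve(query, docs)
    return (f"Answer using ONLY this excerpt from {doc_id}:\n"
            f"{docs[doc_id]}\n\nQuestion: {query}")

print(retrieve("which standard covers weld seam inspection", DOCS))  # → welding-standard
```

The key design choice is the last step: instead of letting the model answer from its general training data, the prompt restricts it to the retrieved excerpt, which is what grounds the answer in domain knowledge.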
Another area is translation. It’s an important field within Volvo Cars as we serve many different markets. We spend a lot of time and money translating web pages, diagnostic manuals, and various documentation into individual market languages. It’s costly and time-consuming. We’ve started to see a lot of value to be gained in leveraging AI to improve the speed and quality of translations.
At Volvo, it’s a pendulum swing. We have periods where we try to centralize development and initiatives. Other times, we spread it out. At the moment, we’re in a more distributed innovation approach.
We do have a central group or committee made up of AI experts as well as experts in regulatory affairs, legal, IP, and enterprise architecture who collect knowledge and understand the pitfalls and risks of AI. People from across the organization can reach out to them for help with reviewing a use case, suppliers, and more.
We see that even with the old paradigm of deep learning, we haven’t realized its full potential. Most of the time, you need to be very close to the domain and problems to best see developments and opportunities. Most of the innovation is happening outside and on the edges of the organization.
So, we have a centralized support structure to provide guidance on how best to leverage AI. It’s also a network where you can meet others who have done similar things, connect with them, and share knowledge.
It’s difficult to point out a single area as the biggest challenge. From an ethical perspective, we have a great governance structure on what is ethical and safe from the pre-AI age. So even if it’s a big challenge, we have good support structures in place. Even if the tech is new, the problem is ancient.
I would say there is a skill issue because this is so new, and even if we work day and night to upskill, it’s difficult to stay on top of it. Also, things that used to be impossible are suddenly possible from one month to the next. We’re still trying to figure out the best way to scale.
It’s difficult to give a single answer to fit all scenarios. Volvo traditionally looks at physical safety. More and more with digital systems and AI, we have to also look at cyber safety. Is the consumer’s data handled with care? It’s important that we never breach their trust. That’s a contract between the user and us, that they are safe in our hands – not just physically but in terms of privacy and their digital persona.
Of course, there are also regulations being rolled out such as the EU AI Act. We’re still waiting to see how that pans out but it’s good guidance on what are high and low risk scenarios. Using AI for decision-making, that’s an incredibly high-risk area. It’s important that when these capabilities arrive, you still retain human control in decision-making and keep humans in the loop both on the individual level and at a larger scale. Ultimately, it’s always humans making the decisions.
On the manufacturing side, we’re using AI to analyze materials to figure out what future materials are lighter, more sustainable, safer, and stronger.
We also use AI in the manufacturing process itself. When we make new cars or components, we are integrating AI more and more into the verification step to analyze if there are any defects in the material that could be unsafe and to test at a larger scale. One dream would be to be able to test as much as possible. With AI, you can leverage that or scale it up.
The second area we use modern LLMs is in analyzing data and unlocking previously hidden knowledge or data in our internal documentation or data that we have collected. One example is in diagnostics where we use AI to figure out even better ways to model the diagnostics data we get to generate better predictions on the state of the car. In this area, we’ve used AI for a long time but now we can use AI to augment the AI itself to unlock better data or test on a bigger scale which information models are best.
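The diagnostics modeling mentioned above can be illustrated with the simplest possible version of the idea: scoring incoming sensor readings against the expected distribution to flag a likely fault. This is a hedged sketch only; the signal name, values, and z-score threshold are invented for illustration, and the real models Volvo describes are far richer learned models.

```python
# Toy diagnostic check: flag readings that deviate strongly from the rest.
from statistics import mean, stdev

def anomaly_flags(readings, threshold=2.0):
    """Mark readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [abs(x - mu) / sigma > threshold for x in readings]

# Hypothetical coolant-temperature samples with one faulty spike.
temps = [24.1, 24.3, 23.9, 24.0, 24.2, 61.5, 24.1]
print(anomaly_flags(temps))  # only the spike is flagged
```

The point of the sketch is the shape of the problem: diagnostics data comes in as streams of readings, and the modeling question is which representation of "normal" gives the best predictions about the state of the car.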
We recently did a pilot in the UK market with a learn-and-shop bot. Customers arriving on Volvo.com looking to configure their cars get an LLM-powered assistant that can answer their questions and help them figure out which car best suits their needs, providing a more personalized configuration experience.
It was a very successful pilot. That’s probably the most customer-close feature in the customer area. But again, in this area, we see it as very high risk. Having generative AI directly interfacing with customers, we want to make sure that the AI behaves in a Volvo-appropriate manner, that it says the correct things, and doesn’t hallucinate. I would say this is an area where we need the most rigor in designing suitable systems.
On the back end, we’re also using AI to analyze feedback that comes in via the website and different surveys. We have many data points from customers, and when they call in, a human can handle it. But with the surveys, it takes hours to comb through. It’s also quite easy for a person reading these comments day in and day out to make errors. Here, we’re leveraging AI at scale to condense the complete picture of what people are talking about, features they want to see, and pain points.
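The condensing step described here has a simple two-stage shape: tag each comment with topics, then aggregate the tags into an overall picture. In this sketch, a hand-written keyword map stands in for the LLM tagger so the flow stays runnable; the topic taxonomy and comments are invented for the example.

```python
# Sketch of condensing free-text survey feedback at scale:
# stage 1 tags each comment, stage 2 aggregates into topic counts.
from collections import Counter

TOPIC_KEYWORDS = {  # hypothetical topic taxonomy; an LLM would do this step
    "charging": ["charge", "charging", "battery"],
    "infotainment": ["screen", "app", "infotainment"],
    "comfort": ["seat", "noise", "ride"],
}

def tag(comment):
    """Return every topic whose keywords appear in the comment."""
    text = comment.lower()
    return [topic for topic, kws in TOPIC_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def summarize(comments):
    """Aggregate per-comment tags into a picture of what customers mention."""
    counts = Counter()
    for c in comments:
        counts.update(tag(c))
    return counts

feedback = [
    "Charging takes too long at home",
    "Love the ride, but the screen lags",
    "The app lost my charging schedule",
]
print(summarize(feedback))
```

Swapping the keyword tagger for an LLM call is what makes this work on 50,000 comments with messy, free-form language; the aggregation stage stays the same.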
The biggest thing we’ve learned so far is that it’s easier than we thought to make these systems behave appropriately and safely. When we tested internally and in the pilots, it turned out that they were quite compliant. If you design them correctly, ground them in Volvo data using RAG, for example, and have proper guardrails to the model, these AI models are safer than we first assumed.
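One cheap guardrail of the kind mentioned above is a post-hoc check that an answer is actually backed by the grounding context before it reaches a customer. The sketch below checks only numeric claims; the context string, fallback message, and regex are illustrative assumptions, not Volvo’s actual guardrail design.

```python
# Guardrail sketch: reject answers whose numbers are not present in the
# retrieved grounding context, and fall back to a safe reply instead.
import re

FALLBACK = "I'm not sure - let me connect you with a human specialist."

def guard(answer, context):
    """Pass the answer through only if every number in it appears in the context."""
    numbers = re.findall(r"\d+(?:\.\d+)?", answer)
    if all(n in context for n in numbers):
        return answer
    return FALLBACK

# Hypothetical retrieved context for the check.
ctx = "This model has a WLTP range of up to 476 km."
print(guard("The range is up to 476 km.", ctx))  # backed by context -> passes
print(guard("The range is up to 600 km.", ctx))  # unbacked claim -> fallback
```

Real guardrail stacks layer several such checks (topic filters, tone checks, citation verification), but the pattern is the same: validate the model’s output against ground truth before showing it.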
The most obvious gains are both cost and time savings, especially when it comes to analyzing vast amounts of data. For example, if we have over 50,000 survey comments, it’s not feasible for a person to analyze all of them every month. With AI, we can. This would have taken years to analyze otherwise, but we can reduce it to a couple of days.
That’s a difficult metric to measure and I think most of them are intangible. What is the metric for unlocking things that were previously impossible, such as analyzing full survey results or finding a use for technicians’ logs to derive data into a knowledge graph?
It’s a fuzzy problem. I find that especially with LLMs, we often run into these fuzzy metrics. The biggest value gain of these LLMs is maybe not quantitative but rather qualitative. We can do more and do better. We can increase compliance. We’re slowly getting to commercial and financial benefits as well, but since these innovations mainly happen in smaller pilots within the organizations themselves, they have local benefits and impact.
I’ll be careful about giving hard numbers on a company scale, but I know that for some translation tasks, we’ve reduced the cost by a factor of 1,000. It goes from a cost we must budget for to a negligible cost that barely registers. But then again, in those cases, we still want to have quality control. We need to increase quality verification. The full sum, I would say, still points to this being a good area for improving and reducing costs.
I would say the old paradigms have been upended by the recent developments in LLMs. The biggest things on the horizon are agents. These are LLM systems that are put in a dynamic time-continuous context where they can work on problems either with humans or autonomously. We’re not there yet but we’re moving very fast in that direction.
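The agent pattern described here is a loop: the model plans the next step, acts through a tool, observes the result, and repeats until done. In this dependency-free sketch, a stub stands in for the LLM’s planning step, and the single tool and its data are invented for illustration.

```python
# Agent-loop sketch: plan -> act -> observe, repeated until the model
# decides the goal is satisfied. `stub_model` stands in for the LLM.

def lookup_torque(part):
    """Hypothetical engineering tool the agent can call."""
    return {"wheel bolt": "140 Nm"}.get(part, "unknown")

TOOLS = {"lookup_torque": lookup_torque}

def stub_model(goal, observations):
    """Stand-in for the LLM planner: return (tool, argument) or None to stop."""
    if not observations:
        return ("lookup_torque", "wheel bolt")
    return None  # one observation is enough for this toy goal

def run_agent(goal):
    observations = []
    step = stub_model(goal, observations)
    while step is not None:
        tool, arg = step
        observations.append(TOOLS[tool](arg))  # act, then observe the result
        step = stub_model(goal, observations)
    return observations

print(run_agent("What torque for the wheel bolts?"))  # → ['140 Nm']
```

What makes this "agentic" rather than a single prompt is the time-continuous loop: the model sees each observation and decides the next action, with a human optionally reviewing steps in between.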
With OpenAI’s recent model, o1, which has been specifically trained in logical reasoning and problem-solving, we are arriving at the point where a lot of engineering work can be supported by LLMs. The trend is toward more autonomous problem-solving, where engineers have AI or copilot assistants.
*The interview answers have been edited for clarity and length.