
Calculating the world

How quantum computing is ushering in a new era


Prof. Sabina Jeschke describes how quantum computers will soon take the computation of solutions to a whole new level.

Prof. Sabina Jeschke © Deutsche Bahn AG/Max Lautenschläger


Quantum computers move beyond the binary computing of bits: they work with quantum states (so-called superpositions) and the entanglement of qubits, which means that complex calculations can explore all solution variants simultaneously. This new way of computing will find many applications: in medical research, in materials development, and also in mobility and logistics. Where do you see the most significant development corridors for the application of quantum computing in the field of mobility?

Currently, for complex systems such as DNA folding, models of the universe or climate models, we have to simplify the models considerably, because otherwise we would end up with computation times on the order of 20,000 years. Quantum computers allow us to run these simulations properly. Instead of accepting all kinds of linearizations or the omission of effects, we can model systems in the full complexity in which they appear in nature. "Digital twin" is the buzzword, and the results of such modeling can be used directly in further applications.
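To make the superposition and entanglement mentioned in the question a little more tangible, here is a minimal editorial sketch (not part of the interview) using the open-source Qiskit library: a Hadamard gate puts one qubit into superposition, a CNOT gate entangles a second qubit with it, and measuring the resulting Bell state yields 00 or 11 with equal probability.

```python
# Minimal sketch of superposition and entanglement (a Bell state) using the
# open-source Qiskit library. Editorial illustration only, not from the interview.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard: qubit 0 enters a superposition of |0> and |1>
qc.cx(0, 1)                  # CNOT: qubit 1 becomes entangled with qubit 0
qc.measure([0, 1], [0, 1])   # measurement yields 00 or 11, each with probability 1/2

print(qc.draw())             # text drawing of the circuit
```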

In addition, real-time adaptations and optimizations of systems will be possible ...

Exactly, that would be the second point. Take a system like Deutsche Bahn, with the goal of simply creating an automatic timetable - for all trains, all tracks and all stations - just these three subsystems, without taking staff scheduling or maintenance into account, so a very simplified problem. If we want to do optimal traffic planning for this, then for the German rail network we arrive at computation times that may still allow us to create the plan for the coming year, but spontaneous influences such as disruptions overwhelm the calculation. How do you have to control the system to get back on plan as quickly as possible? There are roughly one to two minutes available for that decision, whereas the corresponding computing time today would be somewhere around 20 or 30 days. That is why such decisions currently have to be made by people who are very well trained, but who can only optimize locally. And this is exactly where such optimization and simulation environments could be used.

This also includes looking at the entire solution space: one can determine the best solution in real time instead of merely a good-enough one.

Exactly. Today, when we run an optimization procedure, we usually know how much time we have to produce the result, or what budget we have for the corresponding computation in a data center. So we give the algorithm a fixed amount of computing time. It may find the global optimum within that time, but usually it delivers a solution far from the optimum, because it was not possible to search the entire solution space. The consequence of that gap between the delivered solution and the optimum is waste: it can be the non-optimal use of human resources, or wasted diesel if I do not find the shortest routes. That is why you have to try to find the best solutions. Algorithms today often cover only five, ten or twenty percent of the solution space, if that; so there is enormous optimization potential here.
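The effect of such fixed computing budgets can be illustrated with a small hypothetical sketch (an editorial example, not from the interview): a random-search optimizer for a toy routing problem is stopped when its time budget runs out and returns the best solution found so far, which is generally not the global optimum.

```python
# Hypothetical sketch: optimization under a fixed compute-time budget.
# A random search over a toy 12-city round trip returns the best tour found
# when the budget is exhausted - usually far from the true optimum.
import random
import time

cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    # total length of the closed tour visiting the cities in the given order
    return sum(
        ((cities[a][0] - cities[b][0]) ** 2 + (cities[a][1] - cities[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:] + order[:1])
    )

def optimize(budget_seconds):
    best_order, best_length = None, float("inf")
    deadline = time.time() + budget_seconds
    while time.time() < deadline:                    # stop when the budget is spent
        candidate = random.sample(range(len(cities)), len(cities))
        length = tour_length(candidate)
        if length < best_length:
            best_order, best_length = candidate, length
    return best_order, best_length                   # best-so-far, not guaranteed optimal

print(optimize(budget_seconds=0.5))
```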

Isn't the ability to optimize in real time a basic requirement for controlling autonomous vehicles, in order to ensure the safety and functionality of the system? How will the development of quantum computing affect the field of autonomous driving?

We will be able to compute test scenarios at much greater breadth and speed in simulation environments that incorporate real as well as synthetic data. Situations, worlds and cities can thus be constructed in order to confront the autonomous vehicle with ever new decisions. This allows us to expand the training scenarios enormously.

In (subsymbolic) AI, there are two major strands: the data-driven methods (supervised and unsupervised learning) and reinforcement learning, which works a bit like blind man's buff: you feel your way towards the solution, which tends to be very slow at the beginning. If I allow such a complex system to try everything, then it will try everything - and thus also a lot of nonsense. That is why it takes a long time to achieve a good result, and why reinforcement methods are used today only in special contexts. But if I could massively shorten this early learning phase, simply because the system computes much faster, then these methods would gain enormously in importance. And that is a very important point, because reinforcement learning needs no data. All the issues such as privacy, data availability and anonymization then no longer apply.
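The trial-and-error character described here can be shown with a tiny hypothetical example (an editorial illustration, not from the interview): an epsilon-greedy agent on a five-armed bandit learns entirely without training data, but spends many early trials on poor actions before its value estimates become useful - exactly the slow early phase that faster computation would shorten.

```python
# Hypothetical toy example: epsilon-greedy reinforcement learning on a
# 5-armed bandit. No training data is required; the agent learns purely by
# trial and error, which makes the early phase slow and wasteful.
import random

true_rewards = [0.1, 0.3, 0.5, 0.7, 0.9]    # unknown to the agent
estimates = [0.0] * 5                        # the agent's learned value estimates
counts = [0] * 5
epsilon = 0.1                                # fraction of purely exploratory moves

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(5)                                # explore at random
    else:
        arm = max(range(5), key=lambda a: estimates[a])          # exploit best estimate
    reward = random.gauss(true_rewards[arm], 0.1)                # noisy feedback
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]    # running average

print("learned best action:", max(range(5), key=lambda a: estimates[a]))
```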

I assume that a network of quantum computers will be built up, with cloud access to their resources. What is the current status, and what can we expect future quantum resources to look like?

We are likely to see a large plurality of different classical and quantum computers, and we assume that certain quantum computers will be particularly well suited to certain applications. Therefore, we will need dynamic data center orchestration: incoming tasks are analyzed and routed to the computers on which they can be performed best. This will happen either in classical data centers upgraded with quantum computers or via virtual data centers operating with licenses from the providers. The majority of these computers will be integrated into clouds, because in the next few years low-temperature quantum computing will predominate; these computers have to be cooled to around -273 °C. Further out, towards the end of the decade, we will also see room-temperature quantum computing. We anticipate that cryogenic systems will require about one-tenth of the energy of today's HPCs (high-performance computers), and that room-temperature systems will consume only one to five percent of what today's computers do. This means that quantum computers will also play a major role in "Green IT".
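What such dynamic orchestration could look like in its simplest form is sketched below (backend names and the routing rule are invented for illustration; this is not an existing product): incoming tasks are classified and dispatched to the backend considered best suited, with a classical data center as the fallback.

```python
# Hypothetical sketch of dynamic data center orchestration: incoming tasks are
# analyzed and routed to the computer best suited for them. Backend names and
# routing criteria are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str      # e.g. "combinatorial_optimization", "quantum_simulation", "generic"
    size: int      # rough problem size (variables, qubits, ...)

BACKENDS = {
    "quantum_annealer": {"kinds": {"combinatorial_optimization"}, "max_size": 5_000},
    "gate_based_qpu":   {"kinds": {"quantum_simulation"},         "max_size": 100},
    "classical_hpc":    {"kinds": {"combinatorial_optimization",
                                   "quantum_simulation", "generic"}, "max_size": 10**9},
}

def route(task: Task) -> str:
    # Prefer a quantum backend if task type and size fit; otherwise fall back
    # to the classical data center.
    for name, spec in BACKENDS.items():
        if name != "classical_hpc" and task.kind in spec["kinds"] and task.size <= spec["max_size"]:
            return name
    return "classical_hpc"

print(route(Task("timetable rescheduling", "combinatorial_optimization", 2_000)))  # quantum_annealer
print(route(Task("payroll batch", "generic", 10_000)))                             # classical_hpc
```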

This will require industry-specific quantum applications and special algorithms. How is the development of these codes proceeding?

Quantum computing is based on the principles of quantum physics, and the hardware is realized in a completely different way; this means that the old programming languages will not work and the old codes will not run. Few realize that they cannot simply take their existing simulation environment and run it on a quantum computer. In principle, there are two options: either the greenfield approach, where you create a completely new application and phase out the outdated one, or the approach we are pursuing at my company, Quantagonia. We think it is an unnecessary, huge loss of intellectual property to rewrite everything. It is not feasible in terms of time and money, and old codes also have advantages - they have been put through their paces. If you rewrite the codes, you first have all the bugs in the world again. For this reason, we are developing solutions that use artificial intelligence as a kind of translator that transforms existing codes into quantum codes. In quantum computing, the modeling is different; therefore it is not the syntax that has to be exchanged word for word, but the way of modeling.
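How different that way of modeling is can be hinted at with a small editorial sketch (this is not Quantagonia's translator, just an illustration): a classical constraint such as "choose exactly one of three options, preferring the cheapest" is re-expressed as a QUBO (quadratic unconstrained binary optimization) matrix, the form that annealers and many quantum optimization algorithms expect, and is then solved here by brute force for checking.

```python
# Hypothetical illustration of re-modeling for quantum hardware: the constraint
# "choose exactly one of three options, preferring the cheapest" is rewritten as
# a QUBO (quadratic unconstrained binary optimization) and solved by brute force.
# Editorial sketch only; it is not Quantagonia's translation technology.
import itertools
import numpy as np

costs = np.array([3.0, 1.0, 2.0])   # cost of choosing option 0, 1 or 2
penalty = 10.0                      # weight enforcing the "exactly one" constraint

# Penalty (sum(x) - 1)^2 expanded into quadratic form (constant term dropped),
# merged with the linear costs on the diagonal:
n = len(costs)
Q = penalty * (np.ones((n, n)) - 2 * np.eye(n)) + np.diag(costs)

def energy(x):
    x = np.array(x)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=n), key=energy)
print("selected options:", best)    # expected: (0, 1, 0), i.e. the cheapest option
```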

Are there already pilot projects working with quantum computing?

Pilot projects, for example at IBM or D-Wave, are mostly aimed at optimization applications, for example routing ships for the distribution of liquefied gas. On a larger scale, this already exists in particular with annealers, a kind of intermediate stage on the way to universal quantum computers, because these machines are already available. In addition, there are of course smaller use cases on the superconducting computers from IBM and others, where you become more familiar with the overall system and learn how to work with it.

IBM quantum scientist Dr. Maika Takita in the lab © IBM


How can companies familiarize themselves with this new technology?

In a nutshell, you can really say that everyone will use quantum computing in the future, but many have not yet realized it. We assume that low-temperature quantum computing will be available by 2025. That is not far away, if you realize what a big change the technology represents and what preparatory activities are appropriate. My advice is to take a strategic approach that addresses the key questions: Which principal use cases do I see in my organization? Which ecosystems should I join in order to talk openly with partners about strategies and change processes? Which universities should I partner with? Do I have a team with which I can pursue this topic consistently, so that I build up people who are fluent in quantum computing? Do my governance processes fit?

Companies that pursue a good cloudification and digital strategy are generally better prepared, because via the cloud they can quickly access such early machines, calculate initial use cases, run proofs of concept and familiarize themselves with the topic.

What advice would you give companies or boards of directors on how to prepare for these developments?

With clean strategy development, and starting now. You should be aware of the competitive situation that will arise: there will be companies that are born-quantum. They will never have to navigate certain shoals, because from the outset they have placed a data silo at the center and hang their services on it. We will also see companies providing, for example, simulation environments for the production of new components and new materials. And if we add to this the fact that the whole issue of sustainability is creating an enormous need for innovation anyway, the pressure to act becomes obvious. It is very important to build up technology competence on the board, to carry out a technology scan and to ask which technologies are being developed and will become established. Keeping that radar open as a board, constantly watching for new developments, assessing the timeline and asking what this means for the company and its potential competitors - that is a board's main task.

Wonderful. I think that rounds off this appeal nicely. Thank you very much for the stimulating conversation.

PROF. DR. SABINA JESCHKE

Prof. Sabina Jeschke © Deutsche Bahn AG/Max Lautenschläger


is a manager, founder and scientist. She joined Arthur D. Little in January 2023 as a Senior Executive Advisor in technology consulting. After more than three years as Head of "Digitalization and Technology" at Deutsche Bahn AG and a twelve-year career as a university professor (Berlin, Stuttgart, Aachen), she founded the start-up Quantagonia GmbH in December 2021. Since October 2021, she has been chairwoman of the board of the start-up accelerator KI Park e.V. in Berlin. In parallel, she founded the start-up Arctic Brains AB in Jämtland/Sweden in April 2021, focusing on AI consulting and development. Sabina Jeschke is an experienced supervisory board member, most recently accepting a mandate at Vitesco (a carve-out from Continental) in fall 2021. She holds an honorary professorship at TU Berlin and is a member of the CxO Council of Friedrich Alexander University Erlangen-Nuremberg. Sabina Jeschke studied physics, computer science and mathematics, earned her doctorate at TU Berlin, and spent research stays at NASA's Ames Research Center in California and at Georgia Tech in Atlanta.

Author

Michael Müller