
AI in the laboratory – game changer or risk?
Application examples, future prospects and hurdles
Every now and then, a technological advancement changes the world for the long term. The invention of plastics at the start of the 20th century shaped the entire planet within a few decades and created countless new possibilities in materials development. In 1941, Konrad Zuse laid the foundation for the digital age with the Z3, the first programmable computer. And with the transition from ARPANET to the publicly available Internet, another milestone was reached at the beginning of the 1990s.
Now we are in the midst of the next major technological revolution, one that can be summed up in just two letters: AI.
Since the success of the ChatGPT language model, artificial intelligence has become ubiquitous and is developing at a rapid pace. And like plastics, computers and the Internet, AI will have a lasting impact on our world.
What is AI?
Although there is no single, universally accepted definition, artificial intelligence usually refers to a special kind of algorithm that processes inputs through a complex network of calculations and ultimately delivers a result. What makes it special is that, unlike a conventional algorithm, AI is not static: it can also handle completely new, previously unseen data. AI systems learn and can improve over time [1].
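The difference can be illustrated with a small sketch in Python, assuming scikit-learn is installed. The glucose values, the 7.0 mmol/L threshold and the labels below are invented toy numbers for illustration only and have no clinical meaning.

```python
# Minimal illustration: a static, hand-written rule vs. a model that learns from examples.
# All values are toy numbers for illustration only.
from sklearn.linear_model import LogisticRegression

# Conventional algorithm: a fixed decision rule that never changes.
def static_rule(glucose_mmol_l: float) -> str:
    return "elevated" if glucose_mmol_l > 7.0 else "normal"

# Machine learning: the decision boundary is derived from labelled examples
# and can be updated simply by retraining on new data.
X_train = [[4.8], [5.2], [5.6], [7.4], [8.1], [9.0]]            # toy measurements
y_train = ["normal", "normal", "normal", "elevated", "elevated", "elevated"]
model = LogisticRegression().fit(X_train, y_train)

print(static_rule(6.9))        # the rule always gives the same answer
print(model.predict([[6.9]]))  # the answer depends on what the model has learned
```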
The benefits are easy to see – a well-trained AI application is an invaluable aid in processing and evaluating large amounts of data. Accordingly, manufacturers of analytical, bio- and laboratory technology are enthusiastic about its use. In a Spectaris survey carried out in 2024, 82 percent of participants stated that the advantages, opportunities and potential outweighed the disadvantages [2].
AI in laboratory diagnostics
There are countless examples of the use of AI, especially in the medical context, where it can make predictions about diseases and assist in diagnostics. One study with AI-supported data analysis showed that “six clearly distinguishable subtypes can be identified in people in the preliminary stage of type 2 diabetes (prediabetes)”. This is an important finding for patient care that could make therapies significantly more precise and effective in the future [3]. Such insights are currently among the greatest advantages of artificial intelligence in laboratory medicine.
However, AI can not only structure and analyse existing data, it can also create new data – or at least calculate the blueprints for it. The AI application AlphaFold2, for example, can predict the three-dimensional folded structure of a protein solely from its amino acid sequence. David Baker, Demis Hassabis and John Jumper were awarded the 2024 Nobel Prize in Chemistry for their work on computer-aided protein design and the prediction of complex protein structures [4].
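The underlying workflow – amino-acid sequence in, predicted 3D coordinates out – can be sketched without AlphaFold2 itself. The snippet below uses the public ESMFold/ESM Atlas web service as a stand-in; the endpoint URL, its plain-text input format and the short sample sequence are assumptions to verify before use.

```python
# Sketch of sequence-to-structure prediction using a public web service as a
# stand-in for AlphaFold2. Endpoint and response format are assumptions to verify.
import requests

# Short example amino-acid sequence (one-letter codes); not a real study target.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"

response = requests.post(
    "https://api.esmatlas.com/foldSequence/v1/pdb/",  # assumed ESM Atlas endpoint
    data=sequence,
    timeout=120,
)
response.raise_for_status()

# The service returns the predicted structure as PDB-format text.
with open("predicted_structure.pdb", "w") as handle:
    handle.write(response.text)
```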
Better than most medical students
The rapid development of AI language models (large language models, LLMs) is highlighted in a study by Prof. Thomas Streichert, Director of the Institute for Clinical Chemistry at University Hospital Cologne [5]. In it, the researchers compared the ChatGPT language model in versions 3.5 and 4.0, which were released only about three and a half months apart. Both versions had to take a typical medical exam and compete with real medical students. Streichert describes the results in an interview with Sysmex Deutschland: “While the GPT 3.5 variant performed slightly worse than the students, the 4.0 variant often outperformed humans and was on par with the top eight percent of students.” And that was at the end of 2022. If the study were repeated today, the results would probably be even better for the language model [1].
Will every laboratory soon have its own AI assistant?
And indeed, AI is already well on its way to permanently changing the traditional analytical laboratory. In December 2023, an American research team published an impressive case study. The scientists had programmed a system based on GPT-4 that could be used as a virtual laboratory assistant. In the experiment, the Coscientist system performed various liquid-handling tasks that it had received as natural-language instructions from the researchers. From “draw a red cross in a 96-well plate using food colouring”, the algorithm created a corresponding protocol for the connected liquid-handling robot, which then carried out the command.
What’s more, the AI system also planned complex chemical syntheses independently and carried them out. In the study, Coscientist optimised palladium-catalysed cross-coupling reactions, which are used in pharmaceutical research to develop new active ingredients. Only the carrier plates had to be swapped by humans; otherwise, the experiment ran without human intervention, as the researchers report [6, 7].
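How such a natural-language-to-protocol pipeline might look in code is sketched below. This is not the published Coscientist implementation: `call_llm` and `LiquidHandler` are hypothetical placeholders for a real LLM API and a real robot SDK, and the canned response merely stands in for an actual model call.

```python
# Sketch of an LLM-driven liquid-handling workflow (not the Coscientist code).
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a call to a GPT-4-class model; returns a canned answer so the
    # sketch runs without any external service.
    return json.dumps([
        {"well": "A6", "reagent": "red food colouring", "volume_ul": 50},
        {"well": "B6", "reagent": "red food colouring", "volume_ul": 50},
    ])

class LiquidHandler:
    """Hypothetical driver for a connected liquid-handling robot."""
    def dispense(self, well: str, reagent: str, volume_ul: float) -> None:
        print(f"Dispensing {volume_ul} µL of {reagent} into well {well}")

instruction = "Draw a red cross in a 96-well plate using food colouring."
prompt = (
    "Convert the following instruction into a JSON list of steps, each with the "
    f"keys 'well', 'reagent' and 'volume_ul':\n{instruction}"
)

steps = json.loads(call_llm(prompt))
robot = LiquidHandler()
for step in steps:
    robot.dispense(step["well"], step["reagent"], step["volume_ul"])
```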
Risks and side effects
The prospects for digital, AI-powered laboratory assistants are enticing. They could establish themselves as sparring partners in the laboratory, as tireless helpers for complex tasks – ultimately as catalysts for modern science.
But as diverse as their potential is, systems with artificial intelligence raise just as many questions and risks. Concerns range from the loss of thousands of jobs, to the risk of “hallucinated” data going unnoticed because the model would rather please its users with a plausible-sounding answer than admit uncertainty, to the erosion of our own critical and analytical thinking skills. And what actually happens if someone hacks the institute’s computer system?
It is, of course, still impossible to predict how artificial intelligence will affect the laboratory world – and human life in general – in the coming decades. A major source of potential error lies in the quality of the training data used to teach new AI systems. If this data is not sufficiently diverse, for example, it can lead to undesirable results [8]. In 2017, an automatic soap dispenser made headlines when it simply ignored dark-skinned hands. As it turned out, the infrared sensor was linked to an AI whose training data consisted exclusively of hands of light-skinned people. Because dark skin reflects infrared light differently, the system did not recognise the hand as such and refused to dispense soap [9].
What in this case – without wanting to play it down – is “only” a lesson in discrimination can have serious consequences in the context of medicine and clinical research. This is particularly evident in diagnostic AI used for skin cancer screening. Here too, AI often delivers incorrect analyses because its training data consists almost exclusively of images of light skin [10] – with disastrous consequences for cancer prevention.
Solutions in sight or navigating in the fog?
Recognising such AI bias is a challenge in itself, because no one knows how a trained system actually arrives at its results. Only the input data and the output are known. In current machine learning algorithms, the decisions that lead an AI to a particular result remain a mystery – they are a black box.
Here, the development of Explainable AI (XAI) could help, i.e. artificial intelligence that provides comprehensible explanations for its results.
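One family of XAI techniques estimates how strongly each input feature influences a model’s predictions. The sketch below uses permutation importance from scikit-learn on randomly generated toy data; the parameter names and the synthetic outcome are invented for illustration, not taken from a real diagnostic dataset.

```python
# Sketch of one explainability technique: permutation importance.
# Shuffle each input feature and measure how much the model's score drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three toy "lab parameters"
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome driven mainly by parameter 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["param_0", "param_1", "param_2"], result.importances_mean):
    print(f"{name}: {importance:.3f}")  # parameter 0 should score highest
```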
Another hurdle that could slow down artificial intelligence is government regulation, which the EU AI Act has begun to address but which is still in its infancy and raises many unanswered questions [2]. Questions regarding data protection, dependence on AI applications and the long-term quality of training data also remain open. As more and more AI-generated content is produced, a feedback loop could emerge in which AI learns from content created by other AI systems – with luck based on true facts, otherwise on endlessly recycled false ones – which is unlikely to benefit the quality of the results.
Conclusion
Artificial intelligence is not yet as intelligent as the name suggests. It is more like super-fast calculation software. Provided it has good training data, it can solve the tasks it has been taught at high speed and with great precision, but it is often unable to perceive connections that go beyond its training. For now. Given the rapid pace of technological development, it is probably only a matter of time before AI systems actually live up to the intelligence promised in their name.
And just as the Internet has ushered science into a new (information) age, AI is also likely to bring about a technological revolution for laboratories and people’s lives. How companies and individuals will take advantage of these new opportunities remains to be seen.
Or, as ChatGPT puts it: “The laboratory world is on the threshold of a new era shaped by AI, but it can only be successful if we combine innovation with responsibility and human intuition.”
We can only hope for the best.

You can find more exciting articles on how AI is being used in laboratories in carl 03:
From the Turing machine to the chatbot: the historical development of AI
The history of AI goes back a long way and has gone through several phases of euphoria and disillusionment.
Theoretical bases
- 1943: Warren McCulloch and Walter Pitts developed the first mathematical model of an artificial neuron and thus laid the foundation for artificial neural networks [11].
- 1950: Alan Turing published his paper “Computing Machinery and Intelligence” and introduced the Turing test for checking machine intelligence [12].
- 1956: At the Dartmouth Conference, John McCarthy and his colleagues coined the term “artificial intelligence” [12].
Initial successes and applications
- 1966: Joseph Weizenbaum created ELIZA, the first chatbot that demonstrated natural language processing and simulated a psychotherapist [12].
- 1972: MYCIN, an expert system for medical diagnosis, was developed at Stanford University – an early example of a knowledge-based system in the laboratory [13].
AI winter and resurgence
- 1970s: Initial disappointments due to overly high expectations led to what is known as the AI winter – a lack of computing power and methodological limitations prevented major breakthroughs [12].
- 1986: The backpropagation method enabled the training of multilayer neural networks, which gave new impetus to research in laboratories worldwide [14].
- 1997: IBM’s Deep Blue defeated world chess champion Garry Kasparov [12].
- 1997: Sepp Hochreiter and Jürgen Schmidhuber developed Long Short-Term Memory (LSTM), a neural network architecture that later became essential for speech recognition and many laboratory applications [12].
Modern AI revolution
- 2011: IBM Watson won the quiz show Jeopardy!, demonstrating a high level of natural language understanding and processing [12].
- 2016: AlphaGo defeated Go world champion Lee Sedol, mastering a game that was considered “unsolvable by computers” – a milestone in reinforcement learning [12].
- 2020: OpenAI released GPT-3, a language model that writes compelling texts, solves tasks and is widely used as a versatile, generative model [12].
- 2023/24: Increased focus on regulation and ethics. The EU and other jurisdictions have created legal frameworks and ethical guidelines for AI applications, such as the EU AI Act [15].
List of sources:
[1] Sysmex Germany, “AI in the lab: Where do we stand?”, https://www.sysmex.de/akademie/wissenszentrum/literatur/xtra-unser-kundenmagazin/ki-im-labor/ki-im-labor-wo-stehen-wir/
[2] Spectaris, “Artificial intelligence in the laboratory – Study 2024”, https://www.spectaris.de/fileadmin/Infothek/Analysen-Bio-und-Labortechnik/Zahlen-Fakten-und-Publikationen/2024_K%C3%BCnstliche_Intelligenz_im_Labor_Studie.pdf
[3] Wagner, R. et al. (2021), “Identification and characterization of distinct subphenotypes of prediabetes”, PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC8649314/
[4] Laborpraxis, “Chemistry Nobel Prize 2023: AI model for protein structure prediction”, https://www.laborpraxis.vogel.de/chemie-nobelpreis-2021-ki-modell-proteinstruktur-vorhersage-a-80456fdfb2839a85d3b3d8b9061c2769/
[5] Streichert, T. et al. (2024), “Evaluating ChatGPT Performance on Medical Student Exams: Comparative Case Study”, JMIR Med Edu, https://mededu.jmir.org/2024/1/e50965
[6] The Decoder, “Coscientist uses GPT-4 for automated lab experiments in chemistry”, https://the-decoder.de/coscientist-nutzt-gpt-4-fuer-automatisierte-laborexperimente-in-der-chemie/
[7] Boiko, D.A., MacKnight, R., & Gomes, G.N. (2023), “AI-driven automation of chemical research”, PubMed, https://pubmed.ncbi.nlm.nih.gov/38123806/
[8] Activemind, “Recognizing and avoiding bias in AI”, https://www.activemind.legal/de/guides/bias-ki/
[9] Technology Journal, “If there is no soap for you”, https://technikjournal.de/digital/wenn-es-fuer-dich-keine-seife-gibt/
[10] Deutschlandfunk, “Skin cancer detection – Wrong view of dark skin types”, https://www.deutschlandfunk.de/hautkrebserkennung-falscher-blick-auf-dunkle-hauttypen-100.html
[11] McCulloch, W., & Pitts, W. (1943), “A logical calculus of the ideas immanent in nervous activity”, The Bulletin of Mathematical Biophysics, 5, 115-133, https://link.springer.com/article/10.1007/BF02478259
[12] Agorate, “The history of AI”, https://www.agorate.de/KI/die-geschichte-der-ki
[13] B.J. Copeland, “MYCIN artificial intelligence program”, Encyclopedia Britannica, https://www.britannica.com/technology/MYCIN
[14] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986), “Learning representations by back-propagating errors”, Nature, 323, 533-536, https://mebis.bycs.de/beitrag/ki-geschichte-der-ki
[15] EU Commission (2023), “Artificial Intelligence Act”, https://www.copetri.com/knowledgehub/ki-veraenderung-arbeitsmarkt-eu-usa/
[16] Mundorf, A. K. (2024), “Artificial intelligence in the medical laboratory: AI – current status and future prospects”, Trillium Diagnostik, DOI: 10.47184/td.2024.01.08, https://www.trillium.de/zeitschriften/trillium-diagnostik/trillium-diagnostik-ausgaben-2024/td-heft-1/2024-kuenstliche-intelligenz/schwerpunkt-kuenstliche-intelligenz/kuenstliche-intelligenz-im-medizinischen-labor-ki-aktueller-stand-und-zukunftsperspektiven.html