The Electron Microscope, Quantum Puzzles, and AI-Made Lesson Plans: What to Read Alongside Books
See with electrons, think in quantum terms, but stay human in education. Three articles that sparked my interest recently.
Visible light is made of photons, which are almost perfect particles: they have no rest mass (because they cannot exist at rest) and travel at the maximum speed possible in our Universe, 300,000 kilometers per second. Everything we see is made possible by light that is reflected, refracted, transmitted, or scattered by the surfaces of objects. But we can also see using electrons. All we need is an electron microscope, a device whose story starts in 1883, with Heinrich Hertz.
Electrons are elementary subatomic particles: they cannot be torn apart, and their behavior is described by the fundamental equations of quantum mechanics, such as Schrödinger’s equation. Physicists and mathematicians began studying quantum phenomena rigorously exactly a century ago, and for some years now, researchers and engineers at companies such as IBM, Google, and Microsoft have been building computers based on quantum principles.
Their processors are no longer made of semiconductors such as silicon but rely on superconductivity and other complex quantum phenomena that are hard to control ― which is why you’ll probably not have one in your living room anytime soon. But a team of Romanians from Miercurea Ciuc has developed a video game that simulates the behavior of quantum processors. The game offers multiple puzzles which require a thought process that is fundamentally different from the usual algorithmic behavior of regular computers: you must think in quantum terms.
Quantum physics and all its peculiarities are studied in Romanian high schools in the twelfth grade, but it is not examinable for the baccalaureate. In the final year of high school, students focus almost exclusively on examinable topics, and teachers understand that quantum mechanics would require mathematical foundations and efforts of imagination that are not for everyone. So maybe they could just ask an AI system to make a lesson plan.
However, studies show that if teachers delegate making lesson plans to artificial intelligence assistants, students will not be stimulated to learn. Even at the planning level, such lessons don’t encourage critical thinking and don’t stimulate students to discover, to ask questions and understand phenomena. Instead, they serve information ready to be memorized, in a confident voice that very few question even if they know that AI sometimes hallucinates.
Here are three articles to read more on these topics. This post continues the series I started last month.
Electronic Vision
One of the most surprising things about the quantum world is that subatomic “particles” can behave simultaneously as small bodies and as waves. Wave-particle duality, as it is formally known, was one of the discoveries that contributed to Albert Einstein’s Nobel Prize, awarded in 1921 for his 1905 work on the photoelectric effect. Ever since, physicists and mathematicians have understood that at subatomic scales and at speeds comparable to the speed of light, electrons, protons, photons, and similar particles must be studied both with wave theory (the same used for electromagnetism or sound) and with particle mechanics, in an updated version of Newton’s theories.
Electrons are arranged around atomic nuclei and can be removed relatively easily. Chemistry teaches that bonds in molecules can be formed by sharing electrons, and you may remember ionization, in which an atom gains or loses electrons, becoming a negative or positive ion, respectively. Electrons are also responsible for electricity, which made them widely used in experiments long before their quantum behavior was discovered.
Heinrich Hertz, one of the founding fathers of wave theory (whose name became the unit of frequency), showed in 1883 that a beam of electrons could be directed using electricity. In 1926, the German physicist Hans Busch built the first device that behaved like a lens, only it worked with electrons instead of light (photons): it focused a beam of electrons just as your glasses bend the light coming to your eyes.
The groundwork was thus laid, and in 1931 Max Knoll and Ernst Ruska built the first device that magnified images like a microscope, but working with electrons. Ruska was awarded the Nobel Prize in 1986 (Knoll had died in 1969), by which time the construction and use of electron microscopes was already widespread.

Electron microscopes have many technical advantages beyond producing and focusing electron beams with devices not unlike those we had long ago in our TVs and computer monitors: cathode-ray tubes (remember CRT, Cathode Ray Tube, technology?). Visible light is just a small fraction of the entire electromagnetic spectrum, which is why a microscope that works with photons is limited to a relatively narrow range of frequencies: those of the visible spectrum.
Electrons don’t share this limitation: you can’t see them anyway, so the image they produce in a microscope is projected onto a screen. This means that electron microscopes can use electrons whose energies, and therefore wavelengths, span many orders of magnitude, so it is common to use them for images down to 0.1 nanometers (10⁻¹⁰ meters), whereas optical microscopes usually fail below 100 nanometers (10⁻⁷ meters).
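To get a feel for the numbers, here is a rough, non-relativistic back-of-the-envelope sketch in Python. The 100 kV accelerating voltage is just an illustrative figure, and the formula is the textbook de Broglie relation, not a model of any particular microscope:

```python
import math

# Physical constants (SI units)
h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
q_e = 1.602e-19  # elementary charge, C

def de_broglie_wavelength(voltage):
    """Non-relativistic de Broglie wavelength of an electron accelerated
    through `voltage` volts: lambda = h / sqrt(2 * m * e * V)."""
    momentum = math.sqrt(2 * m_e * q_e * voltage)
    return h / momentum

lambda_electron = de_broglie_wavelength(100_000)  # illustrative 100 kV beam
lambda_visible = 500e-9                           # green light, roughly 500 nm

print(f"electron wavelength ~ {lambda_electron:.2e} m")  # ~3.9e-12 m, i.e. ~0.004 nm
print(f"visible light       ~ {lambda_visible:.2e} m")   # 5.0e-07 m, i.e. 500 nm
```

Even this crude estimate puts the electron’s wavelength tens of thousands of times below that of visible light, which is why images at 0.1 nm are routine for electron microscopes while optical ones stall around 100 nm.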
Asimov is a biotechnology company that publishes a newsletter titled “Asimov Press”. In their articles you can read a history of the electron microscope that is both detailed and accessible. Other articles I recommend cover liver transplants, the history of cataract surgery, and how to weigh a cell.
Quantum Thinking
The logic our computers use relies on binary electronics: 0 and 1. Processors, memories, and the other components that “think” in our everyday devices work with electrical signals compared against threshold values of current or voltage. If there’s a 1-volt potential difference in a circuit, then the components (diodes, transistors, and other wonders of digital electronics) let the current pass; any smaller value is rejected, as if there were no current at all. This is the key to understanding the binary behavior of digital electronics: 0 or 1, “no” or “yes”, indicate whether or not the potential difference in the circuit reaches 1 V.
After passing through such a “checkpoint gate”, the current can face another one, then another, and so on, so that when it reaches its destination you know it got “yes” answers all the way. Logically speaking, it got a “yes” and “yes” and “yes” and…, so the circuit works as a logical conjunction.
Mathematically, the proposition “P and Q” is true only when individually, P and Q are both true. The proposition “It’s the year 2053 and you’re reading this online” is false, because only one of the two components is true.
Physicists and engineers used such logical gates to express other types of composite propositions: disjunction (“or”), implication (“if…then”), negation, and more. Thus, by simply regulating the current, which is checked at various points in the circuit, the device works as if it “knew” logical thinking.
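To make the analogy concrete, here is a minimal Python sketch of my own (the 1 V threshold and the function names are just stand-ins, not how real hardware is specified) in which a voltage counts as “yes” only if it reaches the threshold, and chaining such checkpoints gives the familiar logic gates:

```python
THRESHOLD = 1.0  # illustrative threshold: 1 volt counts as "yes"

def as_bit(voltage):
    """Interpret a voltage as a binary value: 1 if it reaches the threshold, else 0."""
    return 1 if voltage >= THRESHOLD else 0

def and_gate(v1, v2):
    """Logical conjunction: 'yes' only if both checkpoints say 'yes'."""
    return as_bit(v1) & as_bit(v2)

def or_gate(v1, v2):
    """Logical disjunction: 'yes' if at least one checkpoint says 'yes'."""
    return as_bit(v1) | as_bit(v2)

def not_gate(v):
    """Negation: flips 'yes' to 'no' and vice versa."""
    return 1 - as_bit(v)

print(and_gate(1.0, 1.0))  # 1 -> "yes" answers all the way through
print(and_gate(1.0, 0.3))  # 0 -> one checkpoint rejected the current
print(or_gate(0.0, 1.2))   # 1
print(not_gate(0.0))       # 1
```

Chaining `and_gate` calls is exactly the “yes and yes and yes…” described above.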
Digital electronics has worked like this for decades, and everything relies on binary logic, governed by the law of the excluded middle: a proposition can be either true or false. I’m referring to propositions that can be expressed mathematically, of course, not those in common parlance. For example, for any number x, the proposition “x > 3” is either true or false.
But once the quantum revolution set in, with subatomic particles behaving ambiguously, being waves at the same time, it became obvious that nature can express ambiguity. Therefore, instead of logic gates that open only for values of precisely 1 V, circuits could work with gates that are only halfway or a quarter open.
This is, admittedly, a massive oversimplification of quantum computers, but the key point is this: instead of using binary logic (0 or 1, “no” or “yes”, that is, bits), quantum devices use qubits, short for quantum bits. They are no longer restricted to values of 0 or 1, but can be found in intermediate, ambiguous states.
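As a minimal mathematical sketch of what such an “intermediate state” means (this is just the standard textbook formalism, not how Quantum Odyssey or real quantum hardware is implemented), a qubit can be written as a pair of complex amplitudes, and a Hadamard gate turns a definite 0 into an equal mix of 0 and 1:

```python
import numpy as np

# Basis states |0> and |1> as vectors of complex amplitudes
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate: the standard single-qubit gate that creates superposition
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                    # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2  # Born rule: |amplitude|^2 gives outcome probabilities

print(state)          # roughly [0.707+0j, 0.707+0j]
print(probabilities)  # [0.5, 0.5] -> neither a definite 0 nor a definite 1
```

Until it is measured, the qubit genuinely sits in this in-between state, which is what the game’s puzzles force you to reason about.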
Quarks Interactive is a Romanian company, founded in Miercurea Ciuc, that has set itself the goal of educating the public about quantum computing with the help of a video game. At the beginning of 2025, they launched Quantum Odyssey, a puzzle-rich game in which you have to think like a quantum processor. The game has a background story in which you encounter fundamental physics and mathematical structures such as Hilbert spaces, eigenvalues, eigenvectors, and complex functions, as well as researchers such as Albert Einstein, Richard Feynman, and Niels Bohr.

But Quarks Interactive is not a game company. Their CEO, Laurențiu Niță, is a physicist at the University of Durham in the UK who has published a research paper on “quantum literacy” for wider audiences, namely a friendly approach to quantum phenomena in general and quantum computing in particular. For this mission, Quarks Interactive has received sponsorships and partnerships from Microsoft, IBM, and the European Union, among others.
AI Teaches You a Lesson
Teachers’ work happens mostly outside the classroom. I’m not referring to private tutoring, but to lesson preparation, test grading and analysis, and lesson planning. Just as artists rehearse for tens of hours for a one-hour show, teachers plan for hours for a lesson they deliver in 30 or 45 minutes. When the rigid curriculum enters the discussion, creativity and flexibility seem to vanish. But an experienced teacher knows that how they deliver a lesson matters just as much, even if its contents are mostly fixed.
Lesson plans include precisely such moments, when the teacher can significantly change how a subject in the curriculum is presented. For example, a geometry lesson must teach students to recognize regular polygons and to compute their area or perimeter. But the same objectives can be reached either by drawing abstract, perfect figures on the blackboard and decreeing “this is an equilateral triangle with side l = 12 cm”, or by using examples from nature, art, architecture, and origami.
Teachers face many deadlines, and time is the resource they lack the most, so any time-saving method can be crucial. It is no wonder, then, that a Gallup study found that over 60% of K-12 teachers use AI in their teaching, including for lesson planning.
However, anyone who’s ever asked a chatbot a homework question has seen how the AI responds: fragmented summaries, full of emojis, some equations or formulas if you’re lucky, and maybe a follow-up question at the end that proposes a different turn to the discussion, usually away from what you had in mind. The tone is authoritative, with no hesitation: the AI knows; even when it errs or hallucinates, it does so with confidence.
It’s no surprise, then, that a lesson plan made by AI does not encourage critical thinking but instead promotes memorization and rote learning of formulas. A study covered by Ars Technica analyzed 311 lesson plans generated by various AI models for civics education. The educational content was analyzed in terms of Bloom’s taxonomy, developed by Benjamin Bloom, an educational psychologist who classified learning objectives across levels of cognitive complexity.
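To illustrate the kind of analysis involved (a toy sketch of my own, not the study’s methodology; the verb lists are abbreviated and the lesson-plan text is invented), one can tag the activity verbs in a plan against Bloom’s levels and see which end of the taxonomy dominates:

```python
from collections import Counter

# Abbreviated, illustrative verb lists for Bloom's levels (not the study's instrument)
BLOOM_VERBS = {
    "remember":   {"list", "define", "memorize", "recall", "name"},
    "understand": {"explain", "summarize", "describe", "classify"},
    "apply":      {"use", "solve", "demonstrate", "implement"},
    "analyze":    {"compare", "contrast", "examine", "question"},
    "evaluate":   {"judge", "critique", "argue", "defend"},
    "create":     {"design", "invent", "compose", "propose"},
}

def bloom_profile(lesson_plan: str) -> Counter:
    """Count how many activity verbs in a lesson plan fall at each Bloom level."""
    words = [w.strip(".,:;") for w in lesson_plan.lower().split()]
    counts = Counter()
    for level, verbs in BLOOM_VERBS.items():
        counts[level] = sum(word in verbs for word in words)
    return counts

# A made-up, memorization-heavy plan of the kind the study describes
plan = "List the branches of government. Define separation of powers. Memorize the key dates."
print(bloom_profile(plan))  # all hits land on "remember", none on "analyze" or "create"
```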

The results were clear: AI-generated lesson plans are boring, traditional in the sense of encouraging memorization instead of understanding, and lacking any initiative to actively involve students. In other words, AI would prepare students to behave like itself: speaking confidently even when knowledge is superficial, not starting real conversations, asking plain “yes” or “no” questions, and not encouraging curiosity. Answers come fully baked, and the so-called “guided study” of features like ChatGPT’s Study Mode is just a tightly constrained obstacle course, not one guided by an instructor.
However, the study does mention that using AI in education, including in lesson planning, is not something to be avoided entirely. Among the AI’s suggestions one can find valuable ideas, but it is up to the teacher’s experience to adapt them to their classroom.
As a teacher myself, I’ve been working with students daily for more than fifteen years, and I think such a conclusion is apt for the technological era, all the more so in the age of the Internet and AI. Using AI in education should be encouraged, like using technology in general. But a tool should never become a replacement.
Just as students who copy their homework from the Internet, a classmate, or ChatGPT contribute nothing to their own education by handing in outsourced “work”, teachers who use AI-generated educational resources without scrutinizing them do not educate. They encourage memorization (sometimes of false information), with no critical thinking and no context that would make the information understandable, rather than merely recorded in the brain and forgotten within days.
Thank you for reading The Gradient! The articles here will always be free. If you've read something useful or enjoy what we publish, please consider a small contribution: share on social media, start a paid subscription, or leave a tip.