The misunderstood role of the physical world. Why AI still can’t master math or code

TL;DR: Arguably the most damaging myth in AI is the idea that abstract thinking and reasoning are detached from physical reality. The difference between the concepts involved in cooking and those used in math and coding isn’t as big as you might think! Going from simple numbers to extreme mathematical concepts, I show why even the most abstract fields cannot be grasped without sensory experience.

SHORT VERSION: written on Medium

Table of Contents (I flag the longer sections!)

CHAPTER 1: Link between abstractions and the physical world

Section 1 – Intro and presentation of the thesis

Section 2 – Why abstract fields depend on the physical world (main argument)

Section 3 – What transpositions and analogies tell us about cognition

Section 4 – Intellectual fields aren’t objective. They come from subjective experience [long!]

Section 5 – A closer look at extreme and unintuitive abstractions

Section 6 – Creativity is a property of reality

Section 7 – The crucial role of mental imagery for reading and writing

CHAPTER 2: Counterarguments and weird cases

Section 8 – If humans rely on visual imagery to understand math, what about blind people? [long!]

Section 9 – The role of physical interaction in the human experience

Section 10 – Non-intellectual abstractions (intentionality, emotions, selfhood, etc.)

Section 11 – Isn’t AI already smarter than us on certain tasks? [very long!]

Section 12 – Is it really EVERYTHING that comes from the physical world?

CHAPTER 3: Implications for AI

Section 13 – What is the role of language in intelligence?

Section 14 – We don't need to build the Matrix: AGI ≠ 4K VR physics engine [long!]

Section 15 – Where are we at? Isn't multimodality baked into modern AI?

Section 16 – How to build AGI?

Section 17 – Closing thoughts

CHAPTER 0 (before reading)

I came up with the idea of writing this text while managing a subreddit called “r/newAIParadigms”. It is a subreddit where other redditors and I passionately talk about intelligence and how to reproduce it in AI. Over there, we discuss ambitious ideas on how to achieve AGI. None of us are researchers or experts in anything, but we are curious enough to have meaningful conversations!

I’d like to take this opportunity to make a brief disclaimer about the text as a whole. This text was not written to convince or to prove anything. Even though I’ve done my best to back up my claims with sources, my goal was by no means to produce a piece as rigorous as one might expect from, say, an academic researcher. Rather, my motivation was to explore the topic enough to give someone reasonably open to my thesis a solid intuition of what I mean and where I come from. My arguments are meant to make my thesis sound credible, not to establish some irrefutable theory (in fact, calling this a theory would be vastly overestimating my knowledge and expertise). Unlike what this essay might suggest, I am not an expert in programming, math, or AI. I just happen to have taken a few courses at university and listened to bazillions of podcasts about these subjects in my free time. 

The point of view I develop in this text is one I have held for almost a year now. It is behind all my reasoning about what is needed to achieve AGI and I hope it's both clear and enjoyable to read. Given how long the text is, I separated it into 3 chapters, each with a few sections for a better reading experience. I also added subtitles here and there for skimmers and speed readers.

Despite the length, this text is written mostly in casual language. I designed it to be accessible to anybody. Since the thesis is quite ambitious, I sometimes have to get a bit more technical or use very specific examples. But it’s okay if you aren’t familiar with what I am saying: skipping these parts won’t affect your understanding in any meaningful way.

Quick note on sourcing: I have included references wherever I felt they were needed using the marker [X], with X denoting the reference number which can be found (along with the corresponding source or link) at the tail end of the essay.

CHAPTER 1 – LINK BETWEEN ABSTRACTIONS AND THE PHYSICAL WORLD

Section 1 – Intro and presentation of the thesis

In the difficult pursuit of AGI, it is crucial that our biases and preconceptions do not lead us down false paths or, worse, blind us to fundamental elements of intelligence. This essay takes on what may well be the greatest of these preconceptions. It seems so self-evident, so unquestionable, that most experts rarely mention it, let alone challenge it explicitly. Ironically, this belief is as widespread as it is damaging, because it distracts researchers from arguably the single most difficult problem on the path to AGI, one that might end up requiring the combined brainpower of nearly the entire field to solve.

This assumption can be stated as follows: abstract reasoning, as it is used in mathematics and coding, is detached from the physical world. Abstract thought, according to most AI enthusiasts, has little to do with what we experience through vision, hearing, or touch. After all, animals cannot do mathematics!

Put in layman’s terms, “It’s okay if AI can’t navigate a 3D space and serve me a coffee, as long as it can solve complex math problems and cure diseases.” 

Subtitle: Thesis

I’ll try to make the bold case that intellectual fields like math, science, or even coding are deeply tied to the physical world and simply cannot be understood without it. The only path for AI to code or engage in mathematical reasoning at a human level is to grasp the world on which these abstractions are based.

Subtitle: Clarifications of the thesis

The emphasis on “human-level” is deliberate here. AI today can undeniably write code and solve various math problems. Very recently, a few systems even snatched gold at one of the most prestigious math competitions on the planet (look up “IMO”). By human-level, I don’t mean topping competitions (which often depend on other factors I explain throughout this text). I mean being capable of adaptability, invention, and handling novelty. For instance, adapting to messy coding environments like real codebases is a major challenge for AI today. I still can’t trust the SOTA versions of ChatGPT to do my homework, even when the problems have had known solutions for decades. A few rewordings by my teacher, barely noticeable to us students, are often enough to throw these systems off, leading them to produce nonsensical answers and reasoning steps.

Anecdote: A few months ago, I asked an AI to explain the concept of "Macros" in programming and Lisp to me, as I was encountering the concept for the first time. It gave a solid explanation, so in the same conversation I requested help with the first exercise of the manual and … it couldn’t do it! Since the exercise wasn’t framed exactly like the examples, it just couldn’t adapt (granted, it was just GPT-4o). I managed to solve it myself using its own explanations, when I had literally just learned how the concept works!

A common confusion with my thesis is the assumption that it implies a need for a physical body and refined sensory capabilities. In my opinion, exposure to the real world can be achieved through simple video-watching (watching and analyzing YouTube videos, for example). Throughout this text, you’ll notice I am very vision-oriented despite obsessing over the physical world. This emphasis will make sense to you as you read.

Subtitle: Caveat

Before starting my presentation, I would like to introduce a small caveat. I am not suggesting that understanding intuitive physics will automatically lead to understanding math. I do believe that abstract reasoning may require additional structures or modules for these abstractions to develop. Humans likely possess something in their brains that enables deep abstractions. Something that animals do not possess, despite also understanding the real world. 

That said, I think we are fundamentally a species whose entire cognition is based on the physical world. Our ability to handle novelty, make discoveries, and invent new solutions hinges entirely on how much we understand the 3D world around us. Simply put: sensory input, while not sufficient, is necessary to achieve human-level AI.

Clarification: By “physical world”, I am not referring to abstract physics concepts (like F = ma, or atomic models). Rather, I mean in a broader sense everything we can observe with our eyes or feel with our hands. It includes objects, natural phenomena, animals, interactions between people, cultural events, intuitive physics, etc. It’s reality overall. If you can capture it with a camera, it’s part of the physical world. I also refer to it as "the real world", “the sensory world”, “reality”, “concrete experience” or similar terms.

Section 2 – Why abstract fields depend on the physical world (main argument)

Let’s take programming. When we code, we mentally manipulate concepts such as memory cells and data structures (like stacks and queues). These concepts aren’t abstractions detached from reality, as many believe. Storing data in a memory cell is analogous to placing physical objects inside drawers. Stacks and queues are analogous to piles of objects (like dirty dishes) and waiting lines. We’ve just transposed concepts we’ve formed about the physical world to something we can’t see. Something “abstract” (data is abstract because we can’t see or touch it). I call these concepts “transpositions”.
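To make the stack and queue transpositions concrete, here is a minimal Python sketch of my own (the plate and customer labels are invented purely for illustration):

```python
from collections import deque

# A stack behaves like a pile of dirty dishes: the last plate put on the pile
# is the first one you take back off (last in, first out).
dishes = []
dishes.append("plate 1")      # put a plate on the pile
dishes.append("plate 2")
top = dishes.pop()            # take the plate on top -> "plate 2"

# A queue behaves like a waiting line: the first person to arrive
# is the first one served (first in, first out).
line = deque()
line.append("customer 1")     # someone joins the line
line.append("customer 2")
served = line.popleft()       # the first arrival gets served -> "customer 1"
```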

It goes even further! Whenever we want to teach abstract knowledge and concepts, we almost always turn back to the real world to explain them. Almost every single course teaching the concept of “Object Oriented Programming” will use real-life objects, like engines, to define it. The course will point out that an engine:

  • has attributes. Ex: power, number of cylinders, type (electric, diesel...)
  • has built-in functions (“methods”). Ex: “start”, “stop”
  • can be part of a larger object (a car).

An object in programming works the same way: it has attributes, it has methods, and it can be used inside another object.
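For readers who like code, here is a minimal sketch of that engine analogy in Python (the attribute names and values are invented for illustration):

```python
class Engine:
    """An 'object' modeled directly on a real-world engine."""
    def __init__(self, power, cylinders, engine_type):
        # attributes, like the characteristics of a real engine
        self.power = power
        self.cylinders = cylinders
        self.engine_type = engine_type

    # built-in functions ("methods"), like the actions a real engine can perform
    def start(self):
        return "vroom"

    def stop(self):
        return "silence"


class Car:
    """A larger object that contains an Engine, just as a car contains an engine."""
    def __init__(self, engine):
        self.engine = engine


car = Car(Engine(power=150, cylinders=4, engine_type="diesel"))
print(car.engine.start())   # -> "vroom"
```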

This isn’t just about convenience. We don’t use the real world in our cognitive processes just to make things easier for us (see section 3). I am arguing that, at the deepest level of our psyche, we always involve the physical world, consciously or not. The very way we think and conceptualize things comes almost entirely from there, even if we aren’t always aware of it. We always have these visual metaphors floating somewhere in our minds while dealing with abstractions.

To sum up this introductory section: every abstraction produced by humans originated, at some point, from the real world. Every abstraction (math, economics, psychology...) describes an aspect of observed reality: people’s behavior, the mathematical structures and patterns discovered in our world, the dynamics that push people to buy things, etc.

I know that linking things like coding to the physical world is a very counterintuitive proposition, so I’ll try to develop my thoughts as rigorously as possible in the following sections.

Section 3 – What transpositions and analogies tell us about cognition

I have introduced the idea that almost all intellectual abstractions originate from the real world. To do so, I mentioned 2 ways the real world gets involved in our thought processes: transpositions and analogies. It is very important to distinguish between these two notions, because a transposition and an analogy are two completely different things.

Let’s clarify this distinction.

Subtitle: What is a transposition?

In the second section, only 3 transpositions were presented: storing, queues and stacks. A transposition can be defined as the direct transfer of an embodied, real-world phenomenon to a more “abstract” context. It is almost a one-to-one mapping. Storing, for instance, is an action. An action that can be done using different types of resources. It may be done with physical objects (clothes) or “imaginary” objects (a sequence of numbers, a string of characters, etc.). It's the same action but carried out in two different situations: a physical one and an abstract one. Likewise, a queue is a sequence of items that needs to be processed in a certain order (first-come, first-served). The items can be either physical items (people) or again, imaginary objects. It’s the same idea transposed to different situations. It is very clear to me that a transposition comes from the physical world. “Storing” and “queue” were discovered in the real world first before being applied to abstract contexts. In section 6, I will show why knowing the origin of a transposed concept is extremely important to be able to manipulate it effectively.  

Here are other transpositions outside of coding to cement what that is:

Math

Distances: mathematical distance ⟷ physical distance

Set: abstract set ⟷ box/bag

Physics

Magnetic fields: curvy geometric lines (“field lines”) ⟷ iron filing patterns around a magnet

Note: The concept of “magnetic field lines” came from an initial experiment where a physicist, Michael Faraday, sprinkled iron filings on a sheet of paper and placed a magnet under the paper. This created visible patterns (fuzzy “lines”) revealing the shape and direction of the invisible magnetic field. The lines we see in physics graphs showing magnetic fields are an exact transposition of a real-world shape (fuzzy iron filing lines) to a mathematical context (perfect geometrical field lines) [1].

Subtitle: What is an analogy?

Okay, but what about analogies? An analogy is a comparison made between two completely different things that feature a specific resemblance. It’s a way to highlight a similarity between 2 phenomena, 2 events, or 2 abstract structures that in general aren’t the same at all. Unlike a transposition, an analogy is NOT a 1:1 mapping. In the second section, there was only one analogy. I made a comparison between car engines and abstract objects in “Object Oriented Programming” to explain what is considered an object in OOP. It was a completely exaggerated comparison because obviously, car engines and imaginary computer objects have almost nothing in common. I had to make it up from scratch. 

Subtitle: Why analogies are the engine of intelligence

However, analogies tell us something important about human cognition, so bear with me. We use analogies in two contexts: when we want to explain something to others or … to ourselves! Many detractors would object that analogies are just tools to help us understand complicated notions. They would say, “Math and coding don’t really depend on the real world. We just use analogies because that’s how humans understand things better”. But this is just the first usage of analogies. The other one is much more interesting. In many cases, analogies precede abstractions. The analogy is how the abstract notion itself was initially apprehended and developed. The smart people who contributed to math, physics and other intellectual domains relied on analogical thinking to make their discoveries. It’s the very way humans in general internally think in many situations. We make analogies for ourselves all the time to try to process the world and make sense of unfamiliar phenomena. Even if the two things we are comparing are actually completely different, if we can find an angle from which they present a superficial similarity, it unlocks everything for us. Whatever we are trying to understand starts to make sense. Again, analogies aren’t just a tool for education. Making analogies is an internal process used by all humans.

Here are other analogies outside of coding to solidify this notion (as you can see, it’s a lot easier to find analogies than transpositions! That’s because parallels can be drawn with almost anything if we stretch it out enough)

Math

vectors ≈ arrows, matrices ≈ grids/shelves, functions ≈ machines that take an input and produce something (sometimes also compared with a pipe with an opening on both ends), space transformations using matrices ≈ stretching or bending a fabric, cartesian plane ≈ a real-world map

Physics

particles ≈ tiny balls bouncing around, gravity ≈ fabric, forces ≈ arrows

Formal logic

logical connectors (AND, OR, NOT) ≈ gates, models (in predicate logic) ≈ parallel worlds or universes

✦✦✦✦

We have now covered two ways in which human cognition is deeply tied to the sensory world and unconsciously involves it all the time: through transpositions and through analogies.

Section 4 – Intellectual fields aren’t objective. They come from subjective experience

In this section, I would like to shed light on another angle from which we can see how important the physical world is for intellectual fields. Consider this simple yet powerful observation: most intellectual abstractions (math, coding and science in general) are human abstractions. They are constructs of the human mind. They represent reality as seen through a human lens. 

As such, 2 conclusions can be drawn: 

1- These intellectual, so-called “objective” fields are actually very subjective and open-ended. There could potentially exist as many versions of each of these fields as there are human beings!

2- Since these fields are the product of a human interpretation of the world, true understanding can only be achieved through familiarity with the physical world, the main ingredient and only shared substrate of all these otherwise subjective and open-ended fields.

Subtitle: Math and coding are the perfect examples of this

Let’s turn to math and coding to illustrate this, as they are by far the most popular intellectual abstractions in AI. I’ll start by showing how subjective and culture-dependent they are. Then at the tail end of the section, the 2nd conclusion should become apparent for both of these fields.

Unfortunately, this section is a bit more technical than what I had hoped. I want this entire essay to be as accessible as possible because separating the physical world from the intellectual sphere is the most prevalent mistake in the AI field (a mistake I estimate might be made by an astonishing 80% of this field). However, if I want to demonstrate how subjective intellectual fields are, I have no choice but to dig a little bit and find concrete examples. That said, don’t worry: I’ll stick to relatable ones accessible to anybody with even the slightest bit of experience with these fields. 

Subtitle: What is math really?

The term “Mathematics” encompasses all the tools humans use to interpret the recurring patterns and structures of the universe. It is the field of tools and symbolic systems humans created to describe and reason about abstract structures, quantities, shapes (geometry), space and relationships (functions). It's a language that relies on symbols (numbers, variables, operators) to model and solve abstract scenarios in the world.  

Subtitle: To each human their own math concepts

Since it’s a language, it's a very subjective field because again, each human has their own way to represent the world. We do not perceive the world in exactly the same way, and therefore the concepts we derive from it are not identical, even if they often overlap.

The concept of zero, for example, didn't exist in many civilizations. The Romans didn't have any symbol to represent the concept of "nothing", as the idea of representing nothingness was seen as philosophically troubling. It first emerged in India, before being adopted by the Islamic world and eventually making its way to Europe.

The concept of randomness still isn't universally accepted by all mathematicians. Many people simply don't believe in chance. They see uncertainty as a product of ignorance, not pure chance. As such, probability is a relatively recent mathematical field, because attempting to quantify randomness just didn't match many people's conceptualization of the world.

These are just two simple examples to show how the concepts we use in math aren’t shared by all humans. I could have also mentioned the concept of infinity, which is far from universal either. 

Subtitle: To each human their own math systems

We also don’t use the same systems to count, represent abstract structures (sets, axioms, matrices...), and solve problems.  

There are countless ways to count: Roman numerals, base 2 (binary), base 3, ..., base 10 (decimal), base 16 (hexadecimal), Mayan numbers, Babylonian base-60, Chinese rod numerals. 
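As a tiny illustration (a Python sketch of my own, relying only on built-in conversions), the same quantity can be written in completely different ways depending on the system we pick:

```python
n = 2025
print(bin(n))   # base 2:  0b11111101001
print(oct(n))   # base 8:  0o3751
print(n)        # base 10: 2025
print(hex(n))   # base 16: 0x7e9
# The quantity itself never changes; only the human convention for writing it down does.
```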

Likewise, there are countless ways to represent a problem. Put two people with no formal math training in front of the same problem, and they might come up with totally different methods to both model and solve it. They might use different symbols and invent their own system, shaped by how they perceive and interpret the world. 

Heck, I have seen colleagues invent their own system on the fly to try to make sense of an abstract situation, not knowing that there is already an established way to do it. You’ll see original diagrams in their notebooks with arrows all over the place and unusual notations. For instance, the array of numbers [2, 8, 32, 128...] is usually officially written as [2^1, 2^3, 2^5, 2^7,...] or better [2^n | n ∈ ℕ, n odd]. But someone could very well write [2^1, 3, 5, 7...] and still understand what they are doing.

Of course, improvised systems are generally not as battle-tested as the formal systems we learn in school. There might be situations they don't account for. But as I’ll explain in a bit, there is no such thing as a mathematical system that can perfectly deal with and represent any situation because the real world is just too complex.  

Subtitle: Math is arbitrary

To give a better idea of how arbitrary and “human-specific” mathematics are, think about natural numbers. There is no such thing as the number “1”. We are the ones who arbitrarily decided to isolate entities in the world. Outside of our lenses, the concept of “1 room” or “2 mountains” makes no sense. How do we know where a mountain ends exactly? Where do we draw the line between a mountain and a plateau? Natural numbers are a concept we’ve arbitrarily designed. 

An astute reader might contend that there definitely exist individual objects in the world that can objectively be defined and counted. Atoms, for example! (Or whatever fundamental particle physicists have landed on today.) This is where my point gets stronger! The very notion of a "particle" is blurry today. The idea of a particle as a tiny ball is simplistic. What we see as particles can also be seen as excitations of quantum fields, closer to "ripples" than defined spheres. Again, the world is extremely complex. Seeing particles as "tiny spheres" or "ripples" is just a human way to describe things. Isolating elements is something the brain does to make things easier for us. We constantly make arbitrary hypotheses and simplifications about reality because otherwise it would be hopeless for us to grasp it.

Disclaimer: By the way, my intent isn't to spark a debate about whether natural numbers truly exist "out there" in the world. Maybe they do. But if the objectivity of such a relatively simple concept is already debatable and difficult to establish in absolute terms, then that doesn't bode well for the much more complex mathematical constructs humanity has devised.

Back to the initial point about the arbitrariness of mathematics, I see math as a functional system built on assumptions (axioms and premises) that have proved useful, consistent and robust so far. But we could have designed the system in a completely different way. In fact, there exist mathematical statements considered true by the majority of mathematicians (because of real-world experience), which remain unprovable within the current formal system (unless we modify it, say by introducing new axioms) [2]. This clearly demonstrates that mathematics is a human construct which, despite its complexity, still falls short of fully capturing truths we recognize in the real world. 

Subtitle: “Solving math”?

I hope this rambling was enough to convey how subjective, arbitrary and culturally shaped mathematics is. If you think about it, the word “math” itself is a very loose umbrella term. It encompasses a large number of techniques and formal systems that have barely anything in common. The concepts included under this term are so miscellaneous that an expression like “solving math” (something we often hear in AI) barely has any meaning. The only common ground between all mathematical concepts? The real world!  

Subtitle: Math vs Physics

Before analyzing coding and how it’s also a subjective field, I would like to clarify one final aspect of my definition of mathematics. I said “it’s the tools humans use to interpret (and manipulate) the recurring patterns and structures of the universe”. But if we tie maths to the real world, what’s the difference between math and physics then? 

The field of mathematics emerged through the following process: the brain observes the world, detects patterns and rules, and then extrapolates from those rules. This leads to the creation of an entire system that seems disconnected from the real world, but only because we've taken a basic logic and pushed it to its extremes. We started with a grounded system and stretched it far beyond. 

The difference with physics is that math is used in an endless variety of situations: a farmer might use it to keep count of his animals, an economist to make sense of the market, an engineer to design electronic systems that fit some constraints. Your mother uses it unconsciously while splitting a pizza into fair shares.

It’s a very general tool applicable in both casual and formal situations. On the other hand, physics is different in two ways: 

i) It is used explicitly to describe the physical world and its dynamics (not just everyday situations) 

ii) While math often stretches a grounded concept to such a degree that it ends up having barely anything to do with the real world, physicists always need to check if their extrapolations are still consistent with reality (through experiments). That’s why extreme theories like String theory are very controversial. They are closer to maths than physics! 

Subtitle: What is coding/programming really?

Now let’s switch our attention to coding.

Programming is the field of human protocols used to communicate sets of instructions to machines. It encompasses all the languages and symbolic systems humans use to instruct machines and automate the execution of calculations and various computer-related tasks. We provide a set of structured instructions to a machine (in a formal format) along with some data, then the machine executes the specified operations on said data. The set of instructions is called a program. 

Similar to math, programming is also a language. Only, instead of encoding arbitrary real-life scenarios, this language’s purpose is to translate human intent into actionable sequences interpretable by machines.

Subtitle: Programming is subjective and arbitrary

Because it’s a language, it's unequivocally a very subjective field. There are thousands of programming languages. Many of them don’t just differ in terms of the keywords and symbols they use, but actually use a completely different philosophy to communicate instructions. Programmers either lean toward one of these popular philosophies or simply create their own! 

Let’s cite a few to illustrate (if you aren’t familiar with them, it’s okay. It’s just examples). 

----

Object-oriented programming (OOP): programmers using OOP like to see programs as a set of interacting entities called “objects”. An object encapsulates some data and a predefined behavior. This approach is loved for its reusability (new programs can be built by reusing existing objects).

Procedural programming: In PP, the program is simply a sequence of step-by-step instructions, like following a recipe. Everything happens in order, and nothing is stored unless explicitly specified (no reusable objects).

Functional programming: In FP, programs are made of functions with a simple “input -> output” format. Instead of performing hidden actions (side effects), these functions generally just return information. This approach emphasizes predictable behavior.

Declarative programming: In DP, programmers describe the desired result rather than the process to achieve it, leaving the "how" to the machine (the underlying engine or interpreter).

----

The exact same problem can be solved differently through these coding paradigms. They offer fundamentally different ways to decompose problems and build an appropriate solution, not just a different syntax. None of them is “better” than the others. Choosing one over another is mostly a matter of taste (though some tasks are usually better solved with a specific paradigm). It’s an arbitrary decision.
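To make this tangible, here is the same toy problem (summing the even numbers in a list) written in two of these styles. This is a rough Python sketch of my own, not a claim about how any particular language must be used:

```python
numbers = [1, 2, 3, 4, 5, 6]

# Procedural style: a step-by-step recipe that updates a running total in place.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n

# Functional style: a pipeline of functions (filter, then sum) with no hidden steps.
total_fp = sum(filter(lambda n: n % 2 == 0, numbers))

assert total == total_fp == 12   # same answer, two different philosophies
```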

Subtitle: To each programmer their own concepts

I think it’s equally fascinating to realize how much programming is filled with human concepts. For instance, bytes (aka the data) encode entirely human-defined constructs like integers, floating-point numbers, and letters. 

Different programmers often define programming concepts in different ways, and sometimes some concepts don't even exist in certain languages or systems!

For instance, in Unix/Linux, everything is considered as a "file", including peripherals (like printers, hard drives, etc.). But in Windows, a clear distinction is made between files (like Word documents) and peripherals. A program can't just decide to "open" a printer as if it were a mere Word document.

The notion of a string (aka a sequence of textual symbols, aka text) exists in almost all modern languages, but not in C! (At least not in the traditional sense.) In C, text is represented as an array of individual characters, not as a cohesive whole.

To copy text in C, we have to rely on low-level functions which painstakingly copy all the individual characters one by one (whereas other languages would just copy the whole text at once). To determine the length of a text, we have to use a function like “strlen”, which walks through the characters and counts them one by one. In other languages, the length is available automatically because the system registers the text as a whole, along with its related meta-information.
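Since I don’t want to assume any familiarity with C syntax, here is the contrast mimicked in Python (a sketch of my own): the first version counts characters one by one, roughly the mindset C imposes, while the second lets the language treat the text as a cohesive whole.

```python
text = "hello world"

# C-like mindset: text is just a series of individual characters,
# so we walk through it and count them one by one (roughly what strlen does).
count = 0
for _ in text:
    count += 1

# Higher-level mindset: the language already registers the text as a whole,
# so its length is available directly.
assert count == len(text) == 11
```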

I hope this was enough to show how subjective the programming field is. Different programmers approach the task of “communicating with a computer” differently (through different programming philosophies). Two experts can solve the same problem completely differently, using different languages, architectures and computers. There are endless ways to design protocols to communicate with computers.

Subtitle: The only objective aspect of math and coding

Now that we’ve established that the two most popular intellectual fields in AI are subjective, open-ended and culture-dependent, we can finally reach the 2nd conclusion I introduced earlier: the necessary familiarity with the real world.

Since these intellectual abstractions are shaped by the human experience, they depend not only on our biases and ways to perceive the world but more importantly on the real world, the very substrate upon which we built those abstractions. We may all have different points of view on it, but ultimately all our ideas are drawn from the same world. That’s why there are so many overlaps and shared concepts between humans. That’s why there is still this recurring base of mathematical and programming concepts between different cultures. Our definitions may vary somewhat but this shared reality guarantees the recurrence of certain concepts among humans. Likewise, this same common ground allows us, despite at times drastic divergences in conceptions, to still grasp to some extent what others are referring to and why they see things the way they do. 

It's what enables mathematicians and programmers to understand the systems designed by their peers (aka other mathematicians or other programmers) even when those are completely different from their own. 

In programming, the link with concrete experience is even more apparent, as the commands performed on data are actions we do all the time in the real world (“printing”, “reading data”, “storing”, “fetching”, “sending”, “receiving”, “saving”). 

It is thus reasonable to conclude that having a solid grasp of the real world, which was critical in the process of designing our mathematical systems and programming languages, is likely mandatory. Otherwise, an AI would be limited to superficial manipulations within these two domains (see section 6). I think we severely underestimate how much the real world is involved in every abstract system we create.

*Note: My goal with this section wasn’t to propose a cynical relativist view of intellectuality. I am not saying “math and science are pure fiction invented by humans”. I have a more positive view of science: we get closer to an objective view, but never fully quite get there. Furthermore, despite being “arbitrary”, math and science propose useful models of the world. Planes fly, vaccines save lives and we can have conversations with a computer! These abstractions may not tell the full truth, but the gigantic advances our civilization has produced come entirely from them, so they do capture interesting features of reality. 

Section 5 – A closer look at extreme and unintuitive abstractions

So far, we’ve looked at fairly intuitive and simple abstractions: natural numbers, simple analogies, the concepts of “storing”, “queues”, etc. But what about extreme abstractions? Hasn't humanity invented concepts, structures, and theories that have nothing to do with the real world? Some are so extreme that they actually go against the intuition we’ve formed from concrete experience! Imaginary numbers, some of Einstein’s theories, the notion of infinity (especially in sets), Hilbert space, Information Theory, Turing machines, 4th, 5th or nth dimensions in math and physics, etc., are very good examples of concepts that seem completely detached from reality. We like to qualify them as mere abstract tools that we use to help us deal with more concrete phenomena.

Subtitle: Why extreme concepts are grounded

However, I believe even the most extreme concepts and abstractions have strong ties with the real world. There are two types of such ties: 

1) Extrapolation

Some extreme concepts are nothing more than extreme extrapolations of real-world phenomena. We form models of the world based on our understanding of it, then we stress-test those models by asking ourselves, “if my conceptualization of this thing is correct, what would happen in this extreme scenario that I would never be able to actually observe for real?”. For example, the concept of a “4th dimension” is an extrapolation of the concept of 3D. The extrapolation only exists because of our model of the real world.

2) Chain of abstractions

Sometimes, these abstract concepts are built upon layers of other concepts that are themselves grounded in the physical world. These concepts are the result of a chain of abstractions, starting from a tangible phenomenon, to an abstraction of it, to an abstraction of that abstraction, until we get to a final concept that doesn’t seem linked to observable reality at all. For example: addition (counting real-life objects) → multiplication (multiple additions in a row) → exponentiation (repeated multiplication) → roots (the “inverse” of exponentiation, depending on your definition of “inverse”) → imaginary numbers (square roots applied to negative numbers). Without the more concrete layers (like counting real-life objects), the abstract concept (“imaginary numbers”) not only can’t exist but also has no meaning.
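Here is that chain made literal in a small Python sketch of my own, where each layer is built out of the one below it (the chain continues conceptually toward roots and imaginary numbers, which I won’t code here):

```python
def add(a, b):
    # layer 0: adding, i.e. counting real-life objects together
    return a + b

def multiply(a, n):
    # layer 1: multiplication defined as repeated addition
    result = 0
    for _ in range(n):
        result = add(result, a)
    return result

def power(a, n):
    # layer 2: exponentiation defined as repeated multiplication
    result = 1
    for _ in range(n):
        result = multiply(result, a)
    return result

assert power(2, 5) == 32   # ultimately nothing more than repeated counting
```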

Subtitle: Counter-intuitive abstractions

The case of abstractions so extreme that they defy intuition is fascinating and deserves an appropriate focus. One of the main reasons why we sometimes see fields like math as detached from the real world is precisely these unintuitive mathematical concepts. To look into this case, I will make use of the concept of infinite sets. If you aren’t familiar with this example, this is probably the most technical part of the entire essay (though if you ask ChatGPT about it, you’ll realize it’s actually easy to understand). In that case, feel free to jump to the conclusion of this section.

DEMONSTRATION.

In math, we have sets of numbers. The set of natural numbers (called N) is the one we all know: [1, 2, 3, 4...]. The set of even numbers (let’s unofficially call it E) contains only the even numbers starting from 0 (we’ll ignore negative numbers for this explanation), namely: [0, 2, 4, 6, 8, ...].

By intuition, most people tend to think that the set of natural numbers is larger than the set of even numbers, since even numbers are included within natural numbers. Usually, in real life, when a set A contains another set B, that means A is necessarily bigger than B. However, this logic doesn't hold when it comes to infinite sets like N and E. 

Since N and E are both infinite, we can't determine which one is larger just by naively counting the elements they contain. We need to use a more appropriate method: trying to link each element of one set to an element of the other. For example, if I have two sets A = [1, 2, 3] and B = [0, 5, 7, 9], then to determine which one is bigger, I can try to link each element of A to one element in B. If after making these pairings, one set still has leftover elements that aren't linked to anything, then that set is larger. Here we can clearly see that one element of B is left out: [1 (A) → 0 (B), 2 (A) → 5 (B), 3 (A) → 7 (B), ?? → 9 (B)]. Obviously, this is simply due to B having more elements than A.

This method is better suited for infinity because it doesn't rely on direct counting (which is impossible when infinity is involved) but instead on a 1:1 comparison between elements of two sets. When applying it to sets N and E, we realize that we can pair every natural number n with the even number 2n: every natural number gets a partner, every even number gets used, and nothing is left over on either side. So we must conclude that, against our initial intuition, the set of natural numbers is the same size as the set of even numbers.
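Here is the start of that pairing, sketched in Python purely for illustration (a computer can of course only show the beginning of an infinite pattern):

```python
# Pair each natural number n with the even number 2*n.
# Every natural number gets exactly one partner, every even number gets used,
# and nothing is ever left over on either side.
for n in range(1, 6):
    print(n, "<->", 2 * n)
# 1 <-> 2
# 2 <-> 4
# 3 <-> 6
# 4 <-> 8
# 5 <-> 10
```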

To reconcile our view of the world with this strange fact, we just need to realize one thing: we’re talking about infinity. The very notion of "size" or "being bigger" doesn’t really make sense in this context. If we take the strict definition of "size", then in fact, N and E are the same size: they both contain an infinite number of elements. What we really mean here is that N and E have the same density.

Okay, but if a mathematical fact goes against our intuition, how can we claim that math depends on the physical world? Didn’t the brilliant mathematicians who discovered these facts have to go beyond intuition i.e., beyond what the real world told them? Well… not really! In fact, it’s still the intuition of these mathematicians that allowed them to arrive at these findings. They simply had to confront what their intuition told them in several contexts and decide (perhaps using the majority principle) when their intuition was accurate and when it was misleading.

Here, when comparing the size of N and E, our intuition tells us three things:

i) Normally, a set A that contains another set B, is bigger than B.

ii) If we can make a one-to-one pairing between the elements of two sets such that each element of the first set is matched with exactly one element of the other set and vice versa (no leftovers on either side), then the two sets are the same size. We could also say they have the same density.

iii) When dealing with infinity, the notion of "which set contains which" no longer makes sense. We can’t really visualize an infinitely large bag, much less determine which one contains the other. Focusing on density to determine "size" seems more appropriate.

By combining ii) with iii), we conclude that the size of N = the size of E. 

END OF DEMONSTRATION.

The observations that led to this fact ALL came from concrete experience. We simply compared what our intuition told us across different contexts. Given that our intuition seems to alternate between supporting one claim and supporting its negation, we choose to trust the version that's more consistently supported, i.e., the one that leads to fewer inconsistencies.

We often define intuition as “what comes to mind first”. I’d rather define it as the knowledge we acquire from the real world. And again, here, all three observations come from the real world, regardless of which one comes to mind first.

If you think about it, mathematicians and scientists usually make their discoveries through intuition, not through rigorous reasoning. They feel like something is off. A result or a finding doesn’t quite match their model of the world. The same intuition that sometimes misleads us is also the one that gets us to realize something is off and guides us toward deeper truths!   

Subtitle: Concluding thoughts on counterintuitive abstractions

In conclusion, even the strange and supposedly "counterintuitive" mathematical facts... are actually supported by our intuition, because it’s still our understanding of the real world that allows us (or rather allows mathematicians) to establish these facts. We would never consider something true if it went against EVERYTHING we know about the world. The word counterintuitive means “an idea that doesn’t come to mind first or quickly”, not “an idea that magically pops into the brain, contradicting everything we know from the real world.”

Section 6 – Creativity is a property of reality

In the previous section, I showed how even extreme abstractions are tied to the physical world through a hierarchy of more grounded conceptual layers (e.g., counting real-life objects → multiplication → ... → imaginary numbers). This is an opportunity to make a bigger point: only a grasp of the lower layers allows one to manipulate the extreme abstractions effectively. As such, these foundational layers are the only path to achieving creative freedom. AI skeptics often point out how current AI is unable to adapt to novelty and incapable of genuine creativity. This is where I dive into why!

Using very abstract concepts correctly (like knowing when it makes sense to apply them or not) and creating new ones is only possible if one has a deep intuitive grasp of where they come from and why they were created. We are able to use these abstractions even in unusual situations because we also have access to the lower levels of reality they are based on. That’s why teachers like to point out that merely using a memorized formula (in physics or math) doesn’t really mean you know what you’re doing. Only when you understand the *why* behind the formula can you use it and adapt it for unfamiliar situations. 

Let me give you a few simple examples.

----

As humans, we've all learned the "mean" formula ((first quantity + second quantity) / 2). But common sense is generally what tells us that you can't add two quantities with different units. It's because we have experienced the notion of distance that we intuitively know one can't just take the average of 5 km and 8 m without first making a conversion: adding kilometers and meters together has no meaning in the real world.

Most of us have also learned algebra in school and how to isolate a variable to find its value. We've learned that to do this, we must apply the same operations to both sides of the equation until we've canceled everything around the unknown variable. However, common sense is what allows us to realize that the equation 0x = 5 has no solution, because in the real world I can't multiply "nothing" (0) to a big enough degree that I would get "something" (5) out of it. A basket with 1 orange times 5 gives 5 oranges, but I could stack up empty baskets forever and not a single orange would ever spontaneously pop out of thin air. Therefore, the isolation trick cannot work.

Finally, let's take matrix inversion for one final, more complicated example. Many of us have learned that a matrix represents a deformation of space. Thus, inverting a matrix is essentially undoing said deformation. We've learned a few ready-made methods to invert a matrix. However, it's common sense that led mathematicians to realize that not every matrix transformation can be inverted, because some transformations lead to a loss of information (for instance, flattening a 3D space onto a 2D plane). As a consequence, these ready-made methods can't be applied to all matrices, because we can’t recover information that doesn’t exist anymore (see the short sketch after these examples).

----
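For the matrix example, here is a quick sketch (in Python with NumPy, purely illustrative) of a transformation that flattens 3D space onto a plane and therefore cannot be undone:

```python
import numpy as np

# This matrix throws away the third coordinate of every vector,
# flattening 3D space onto the z = 0 plane. The lost information is gone for good.
flatten = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0]])

print(np.linalg.det(flatten))        # 0.0 -> the transformation is not invertible
try:
    np.linalg.inv(flatten)
except np.linalg.LinAlgError:
    print("No ready-made method can recover the missing third dimension.")
```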

My goal here is just to take simple examples to show that intuition about the physical world is often necessary to apply premade abstractions correctly to new contexts. Otherwise, the risk of basic common-sense errors becomes considerably higher. Without even realizing it, we often go back and forth between high-level abstractions and the real world they represent, to make sure what we’re doing makes sense. And while it’s not always necessary to keep the physical world in mind when applying an abstraction, it is absolutely essential for true mathematical creativity and the discovery of significant rules and theorems.

Current AI systems are able to use human-made abstractions (math formulas, human-provided heuristics to play Go, etc.) only in controlled and restricted cases where humans made sure the AI will never even have the opportunity to make illegal moves. They cannot go outside these human-provided boundaries to test anything new. They can only make minor discoveries that are possible within these frameworks, and thus true creativity is completely out of their reach. 

Nobel Prize winner Demis Hassabis admitted that what would be truly impressive isn’t AI beating world champions at Go or solving theorems (with heavy guidance) but inventing a new game as elegant as Go or coming up with interesting theorems worthy of being solved (famously, “asking the right question is harder than finding the answer”). Exposure to the real world is how AI will get there.

Section 7 – The crucial role of mental imagery for reading and writing

As I said, there are no cognitive processes that don’t involve the physical world, and reading/writing is no exception. Here is why: whether we are reading a novel or an academic paper, we ALWAYS form mental pictures during the process. It’s often blurry imagery, abstract visual symbols, and absurd little scenes that we use to understand what we are reading. 

If it’s a novel then we picture the characters, their interactions, and the locations as we read. If it’s something more abstract (like lecture notes from a math class, a scientific article, a technical paper…), we may not rely on clear mental pictures like we would for a novel, but we still rely on a ton of visual mental clues like arrows, grids, geometric shapes, lines and boxes. These mental images are blurrier and more abstract than those we use for novels, but they are just as vital. 

Let’s take my favourite example. 

----

When reading abstract math rules like “3 vectors can’t all be linearly independent in a 2D space”, almost every single student relies on visual reasoning to understand them. We picture the vectors as arrows in a 2D plane and realize that, according to our understanding of space, no matter how we try to position the 3rd vector, it will always lie in the 2D plane formed by the other two, making the three of them linearly dependent. The abstract rule comes after the spatial and visual reasoning (see the short sketch right after this example).

----
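The same spatial intuition can be checked mechanically. In this NumPy sketch (my own illustration), whatever third vector you pick, the family of three can never reach rank 3 in a 2D space, which is exactly what “not all linearly independent” means:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([3.7, -2.2])    # pick any third 2D vector you like

# The rank counts the genuinely independent directions in the family.
# In a 2D space it can never exceed 2, so three vectors are always dependent.
print(np.linalg.matrix_rank(np.stack([v1, v2, v3])))   # -> 2
```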

To that point, many would argue that an arrow, just like a letter or a number, is an abstract symbol with little to no connection to the physical world. I don’t see it that way. To me, it seems very clear that many visual symbols, such as arrows or geometric shapes, are just simplifications of forms observed in real life. An abstract image is still a snapshot of reality: it’s just a simplified, compressed, and distorted version of it. It’s a visual metaphor.

Back to my initial point about the role of mental imagery, reading and writing are inherently visual processes engaging spatial reasoning and, more broadly, reasoning about the physical world in general. We don’t just manipulate symbols on paper. We picture them to make sure our text makes sense and is consistent with reality. The words and textual symbols themselves are full of imagery: “he stormed in, she cracked under pressure, he froze in place, her words hit me hard”. All of these words trigger specific mental imagery unique to each of us. Vivid and figurative words are also abundantly used in scientific literature to create these same mental effects: information flows, memory leak, energy landscape, “a function behaves like” (as if it had a personality).

Even people with aphantasia often retain *some* capacity to mentally represent scenarios (although it’s sometimes very limited). Mental representation is essential for recalling memories, spatial awareness (being able to find one’s way to work or home), planning, reading novels, etc. I will admit that I am not familiar with this condition, so I won’t make any definitive judgment. I suppose that for extreme cases of aphantasia, those with the condition rely on other modalities like touch, which I … touch on later! (no pun intended).

CHAPTER 2 – COUNTERARGUMENTS AND WEIRD CASES

Section 8 – If humans rely on visual imagery to understand math, what about blind people?

If you noticed, in most of the examples I used so far, I always insist on our visual capabilities. I explain how mathematicians rely on mental imagery to make sense of a math problem and how we always use some type of mental visualization in all our cognitive processes. The obvious counterargument is the existence of people who have never even experienced visual perception (blind people), yet who in some cases still managed to become high-level mathematicians or coders [3]. How did they do that?

Subtitle: Touch is the 2nd most powerful modality

Vision is the most efficient modality through which humans understand both the concrete physical world and more abstract data (math, code, etc.). However, touch also provides a ton of information about the structure of our world. Specifically, we can understand the world through vision and touch, or each of them taken separately. As for audio, while it can be a rich source of information, it’s a lot more limited (more on that in a moment). 

Disclaimer: Before attempting to explain how one can understand math, code and other abstract domains without vision, I have to admit that whenever we involve people who are born without a significant sensory input, it becomes a speculative exercise since most of us have access to all our senses to understand the physical world. So before I try to convince you how someone can get to understand math through audio or touch, keep in mind that ultimately the honest answer is: I don’t know. I am guessing based on my own behavior, what I’ve heard from scientists, etc.

I think we severely underestimate touch as a modality. People who managed to become proficient mathematicians or coders while being born blind all had access to touch, with no exception I know of. Let’s compare vision and touch. Vision is powerful because through a simple image, we can perceive tons of information at the same time and in parallel: shape, distance, position, texture, movement, color, perspective (what is in front or behind in a scene), and even temporal changes (provided we are fed at least a small number of visual frames). Touch is a little less rich. It’s a lot more localized and more sequential (one can’t touch everything at once in parallel). However, touch still provides all the information necessary to mentally make sense of a rich 3D scene. 

Through touch we can feel shape, angles, curvature, and direction. If one has the patience to grope around for long periods, it is also possible to feel the distance between objects, perceive movement, and even detect temporal changes. People who rely on touch understand the physical world very, very deeply. It’s just a lot less efficient than vision. I have seen blind children riding two-wheeled bicycles. Many animals born blind can perform very complex actions and navigate the world with ease.

Subtitle: Vision and Touch are continuous

Touch, like vision, is a continuous modality, meaning it offers an uncountable and uninterrupted stream of information over time. This continuity of data, shared by both senses, enables them to provide (in some sense) a near-infinite amount of information about the real world. The number of photons that can theoretically reach our eyes is innumerable. The same goes for the atoms that constitute the objects we touch. Unlike discrete sources of information like text, we can’t really count photons or atoms. The reason why touch provides less information is that raw quantity of data ≠ useful data. The photons that reach our eyes tell us a lot more about the structure of the world than the “atoms” our hands can perceive.

Subtitle: How touch can be used to reason in math

To be more concrete and show how touch can help someone understand math, I’ll try to show how everything I mentioned in the writing section can be done through this modality: the reasoning processes we carry out while processing text or math (like spatial reasoning) can also be done through touch.

To make sense of abstract math rules like “3 vectors can’t all be linearly independent in a 2D space”, instead of visualizing the vectors in space, it’s also possible to “feel them” in our mind. Instead of seeing vague, blurry and stylized vectors interacting in space, we can also vaguely feel their interaction and spatial relationships. We can almost mentally “touch” them, though it’s obviously not as precise and clear as visual imagery. 

The 

I mentally picture two non-parallel arrows extending across a surface, and see that any third arrow I can imagine will fall in the same plane as the first two

thought process becomes 

I mentally feel the directions of two rigid sticks lying flat under my fingers, and realize that my fingers just can't find a third direction that doesn’t already lie in the same flat plane formed by the first two.

All the mental imagery we typically employ while reading, coding, or engaging in mathematical reasoning, is replaced by more deliberate touch-based sensations. Touch-only people reason through sensations instead of visuals. This illustrates the extraordinary adaptability of the human (and even animal) brain, and the extent to which it is a true marvel of nature. We can form deep intuitions about how the world works just by feeling around. We can understand 3D, intuitive physics, the relationships between physical entities (objects, animals, humans), structure, texture, how nature works, and even how people behave simply by hyperfocusing on the other signals the world provides us (touch and audio).

It is also worth noting that, just as we have created tools and visual symbols to do math more efficiently, the blind community has also developed many tools to reason through touch: Braille, raised diagrams, and haptic interfaces.

Subtitle: A short aside on audio as a modality

But what about audio? Is it possible to understand the physical world through auditory input alone? I don’t think so. Audio does provide valuable feedback about the physical world, but it’s a lot blurrier. It can be used to estimate distance and have a rough idea of how things behave (since objects tend to make sounds when moved or while interacting together), but that’s about it. One can’t really understand the shape of objects at any deep level just because of the sound they make. Much less understand texture, color, perspective, direction, etc. That’s why it’s possible to find examples of “touch-only” individuals who reached high levels of achievement in abstract domains, whereas no such cases exist for “audio-only” individuals. 

Nevertheless, audio is a much richer source of information than, say, text, simply because, like vision and touch, it’s a continuous type of data (unlike text, which is discrete). It just serves better as a complement than as the main source of information.

Section 9 – The role of physical interaction in the human experience

Thus far, this essay has only focused on passive modalities such as vision, touch and hearing. But make no mistake: physical interaction is also a very good source of information. By physical interaction, I don’t include touch, since it’s still pretty passive (it’s just tactile perception). I mean moving objects around, pushing, lifting, applying pressure, etc. Babies, for instance, learn about gravity not just through passive observation but also through active experimentation. That’s why they so often like to throw objects on the ground. It’s not necessarily to annoy us, but rather a way to verify that gravity indeed applies to every single object. Physical interaction isn’t strictly required to understand the fundamentals of the physical world. After all, infants acquire many concepts long before they can use their limbs in any meaningful way. Nevertheless, it significantly speeds up the learning process. Many things can never be fully understood through passive observation or mere tactile perception alone.

Subtitle: Interactions are not indispensable for intellectuality

My personal opinion, however, is that physical interaction isn’t essential for the development of intellectual abstractions specifically. Since most intellectual abstractions only feature general concepts about the world, they can in principle be grasped through vision (or touch) alone. Active and precise physical manipulation is non-negotiable for tasks requiring fine motor skills, but it isn’t necessary for developing concepts such as spatiality, quantity, sets and order (for math) or events, loops, categories and causality (for coding), because these notions are general enough not to require any interaction to be learned.

Section 10- Non-intellectual abstractions (intentionality, emotions, selfhood, etc.)

I’ve shown how a lot of intellectual tasks are deeply tied to the physical world. But the same is also true of other abstract concepts that aren’t as “intellectual”. Let’s look at a few of them.

Subtitle: Social abstractions

Sometimes, social abstractions seem so abstract that it doesn’t feel like they are “physical” concepts. They seem like relatively groundless constructs of the human mind. Intentions and emotions are two good examples of this.

Intentionality is an abstraction for people’s goals and motives. Since we can’t read other people’s minds, we learn to guess their intentions mostly through physical cues. We develop a mental model of their behavior. Animals are very capable of this abstraction. They are very good at figuring out what other animals are up to based on their body language. There are videos of animals observing predators from several meters away and understanding that an attack is imminent despite no obvious sign of that. 

Emotions are even easier to grasp through the physical world than intentionality. Humans are very good at telling the emotions of loved ones through years of familiarity and observation. A nervous quirk can reveal anxiety, silence can reveal hidden anger, etc. 

For blind people, those abstractions are harder to form, but they do exist. To read others’ emotions and intentions, blind people tend to develop a hypersensitivity to tone of voice, speech rhythm and hesitations, all audio-based cues and thus part of the sensory world. In more intimate relationships, they may pick up on physical cues like muscle tension and face touching.

Subtitle: When a “metaphysical” abstraction… is actually physical!

But what about concepts often thought by many to be borderline mystical, like the notion of self? I am referring to the ability to know what the word “I” or “you” means. As surprising as it may be, even such a concept can be developed almost entirely through simple observation, i.e., vision. Many animals don’t seem to be conscious of themselves. They don’t recognize themselves as a distinct agent in this world with a will and desires. Yet… they still come to us when we call them by their names. They may not know that the name refers to them (no concept of selfhood), but they react as if they did. For instance, through experience they might understand that whenever they hear that name, something good follows (petting sessions, treats, etc.).

My point is that AI doesn’t need to have consciousness, whatever that is, to understand abstract concepts like intention or selfhood. It just needs to observe the world and observe people and it will understand these indirectly.

Consider the following thought experiment. 

----

Let’s imagine that every morning, around 9 a.m. at sunrise, a mysterious song is heard on an island. When the inhabitants head toward the approximate source of the sound, they always find some kind of surprise: sometimes food, other times money, diamonds, or even old but useful objects. No one will think, "This is meant for ME. I am a special and unique being". However, everyone still goes to the exact same location each time the song is heard, because a correlation has been established between the song and these mysterious surprises.

Likewise, an AI doesn’t need to feel that it exists or experience emotions to understand those concepts. It only needs to observe reality and people, and it will indirectly figure everything out. When someone refers to the AI as “you”, that word will trigger specific behavioral patterns similar to those of a human who truly has a sense of self. Understanding the word “I” ≠ truly feeling what that word means (just like understanding others’ emotions doesn’t imply an ability to feel them).

Section 11- Isn’t AI already smarter than us on certain tasks? (playing Go, solving equations)

Earlier in this text, I made the point that because many abstractions come from the physical world, it is often necessary to grasp that underlying substrate to use these abstractions effectively (see section 6). However, a common observation seems to challenge this claim: tons of AI systems are superhuman at various tasks. 

Equation solvers and theorem provers perform better than most math experts. AlphaGo and AlphaZero can crush any human Go player. AlphaFold predicts protein structures with greater accuracy than entire teams of biologists. More recently, LLMs have consistently ranked among the top 1% of participants in coding and math competitions.

While I certainly don’t master all the technical details behind those systems, I have collected enough insights on how they work at a conceptual level (I even interacted with some of them!) to make an informed analysis here.

To provide more relevant insights, I’ll frame my analysis around the concept of “understanding” or “performance” instead of “intelligence”. It keeps the spirit of the question (human performance vs machine performance, which is really what the question is about) while avoiding arguing about the ill-defined concept of intelligence.

I’ll examine 4 cases where AIs outperform humans, as I think it’ll be enough to see why the existence of superhuman AIs doesn’t undermine my overall thesis.

Subtitle: (1) AIs specialized in maths

This group includes systems like equation solvers and theorem provers. Since math is an open-ended field, these systems rely on simple heuristics (strategies) to fulfill their goal. This allows them to avoid having to capture the full complexity of math and still outperform humans.

Let’s dive into these two examples. I’ll start with a quick reminder of how they work to better understand their limitations.

Equation solvers and theorem provers operate on very similar principles. Once we’ve expressed the task in a formal language understandable by the computer, the machine will usually attempt a few standard algorithms and manipulations either completely randomly or by following a defined order (starting with the algorithms that are usually the most effective). 

For equation solvers, once the problem has been formulated, there is an identification step to determine the type of the equation (linear, polynomial, exponential, logarithmic, trigonometric, etc.). Depending on the type, the solver will test the appropriate manipulations to isolate the variable and find its value. For linear equations, simple arithmetic operations applied to both sides are usually enough. For quadratic equations, completing the square and factorization might be necessary. For exponential equations, solutions typically involve a change of base, use of logarithms and linearization.

Equation solvers are usually relatively easy to design. The process is fairly straightforward: it’s purely algorithmic and works via trial and error.
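To make that dispatch idea concrete, here is a minimal toy sketch of my own in Python (not how any production solver is actually written): classify the equation, then apply the recipe associated with its type.

```python
def solve(equation):
    """Toy equation solver: identify the type, then apply the matching recipe.
    Equations are passed as (kind, coefficients); only two types are handled."""
    kind, coeffs = equation
    if kind == "linear":                      # a*x + b = 0  ->  x = -b/a
        a, b = coeffs
        return [-b / a]
    if kind == "quadratic":                   # a*x^2 + b*x + c = 0 -> discriminant
        a, b, c = coeffs
        disc = b * b - 4 * a * c
        if disc < 0:
            return []                         # no real solutions
        return [(-b - disc ** 0.5) / (2 * a), (-b + disc ** 0.5) / (2 * a)]
    raise NotImplementedError(f"no recipe for {kind} equations")

print(solve(("linear", (2, -6))))         # [3.0]
print(solve(("quadratic", (1, -3, 2))))   # [1.0, 2.0]
```

A real solver has hundreds of such recipes and a much smarter identification step, but the overall structure is the same: no understanding, just dispatch.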

Theorem provers, however, are considerably more interesting. On the surface, they follow a similar approach to equation solvers but with a lot more complexity involved. 

First, designing a formal language in which any theorem can be readily expressed is much, much more difficult than for equations. As with any other language (English, French), ambiguity is a major challenge. If the formal language isn’t expressive enough (and it never is), expressing theorems through it can slightly change their meaning. Thus, the system might attempt to prove something completely different from what was intended.

Second, the strategies used by theorem provers have multiple layers to account for. As with equation solvers, there are basic operations like substitution, applying a lemma, or using definitions and axioms, but there are also “meta-strategies” to decide on beforehand: direct proof, proof by contraposition, proof by contradiction, proof by cases, induction, forward chaining, etc. So the system has to decide on a plausible meta-strategy before attempting all the possible manipulations allowed within that meta-strategy, and there are tons of them. It’s a lot more difficult to brute-force theorem proofs than equations.
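As a toy illustration of one of those meta-strategies (forward chaining), here is a minimal sketch. The “facts” and “rules” below are made-up strings; a real prover manipulates formal logical expressions, but the blind search loop is the same idea.

```python
def forward_chain(facts, rules, goal, max_steps=1000):
    """Toy forward chaining: fire every rule whose premises are already known,
    and repeat until the goal appears or nothing new can be derived."""
    known = set(facts)
    for _ in range(max_steps):
        if goal in known:
            return True
        new = {conclusion for premises, conclusion in rules
               if set(premises) <= known and conclusion not in known}
        if not new:              # stuck: no rule produces anything new
            return False
        known |= new
    return goal in known

rules = [
    ({"n is even"}, "n^2 is even"),
    ({"n^2 is even"}, "n^2 is divisible by 4"),
]
print(forward_chain({"n is even"}, rules, "n^2 is divisible by 4"))  # True
```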

This should suffice as a quick overview of these systems.

Subtitle: Limitations of math-specialized AIs

Equation solvers and theorem provers are, in some sense, superhuman because they have a larger knowledge bank than most humans. The sheer number of axioms, lemmas, rules, formulas, and shortcuts they know vastly surpasses the knowledge of most experts. So if you give them a theorem to prove or an equation to solve, they are more likely to get it done than your average math enthusiast.

However, I wouldn’t say they understand math in any deep sense. They are far from being able to deal with the true complexity of mathematics. For one thing, even the most developed formal language system can’t express every possible theorem and equation. Why? Because math isn’t just an abstract domain floating in our heads and removed from any reality. It’s deeply tied to the physical world, which is extremely complex. The concepts and ideas featured in theorems come from the real world, and we can’t capture all of them in a unified symbolic system. No matter how elaborate such a system is, there will always be statements that fall outside its reach. What makes humans special is that our understanding of the world allows us not only to deal with any mathematical theorem and equation, but also to learn and adapt to any symbolic mathematical representations.

Moreover, these systems do not have the intuition that humans have, which comes from the real world. 

First, they usually try their strategies randomly or in a human-provided order. But humans are guided by a much more effective process: intuition. This intuition pushes us toward the most effective approach very quickly. When we force a machine to try mathematical strategies in a specific order, what we are really doing is hardcoding our mental processes into the machine. We reflect on our own thinking process, try to recall the order in which we attempted different strategies, then design the machine to follow that same order.

But here’s the issue: we can never really capture how our brain thinks inside a machine. We do not understand our own intelligence! Our brain is always two steps ahead of whatever our hand writes on paper or types on a computer. Whenever humans formalize something on a computer, it’s always a very simplified version of what we truly understood. 

Second and more crucially, these systems obviously can’t invent new strategies on the fly to tackle mathematical tasks since, for humans, this ability comes from intuition and analogy with the real world. 

✦✦✦✦

To sum up, do math-specialized AIs outperform most humans? Sure. Are they really “better” at math? I don’t think so, or at least only on known problems. If humans had the same memory and speed as AI to learn and recall hundreds of mathematical strategies and simplification rules, the performance gap would look a lot less impressive. 

Subtitle: (2) AIs specialized in games

What I highlighted in math-specialized AIs is also true of game bots. While some AIs are superhuman at certain games, they completely fail to capture the complexity of most games, whether video games or board games.

Just as theorem provers apply human-defined rules, formulas and strategies without any common sense, game bots learn pre-defined behaviors without the *why* behind them. Games are filled with concepts from the real world (“move”, “attack”, “board”, “rules”, “objective”), which are the bridge to the “why” of the game.

Without having access to this bridge, AI is limited to learning specific behaviors that have proved effective against other humans, irrespective of their relevance in the current situation. They can learn general strategies, gameplay patterns, typical timings used by experts, what to do in X situation, how to react if the opponent does Y… But if the situation is even slightly different or if the opponent adopts absurd, unconventional, or illogical behavior, they can’t reason through it from first principles like a human would. This leads to amusing scenes where human players make use of irrational decisions and movements to confuse bots that are otherwise highly capable. 

Spamming random moves, jumping for no reason, and attacking into the air can completely throw off an AI that expects "normal" behavior (whereas a human would step back, realize that these moves have no purpose, and easily defeat the opponent). The AI has learned behaviors that are typically effective but has no notion of “intention” (which, like I said in section 10, can be acquired from observing the real world). 

If I am fighting against a human in a game and my opponent randomly starts to dance, I won’t try to associate the dance with a strategy. I’ll just assume they are out of their mind and simply knock them out, because I know that dancing is almost completely unrelated to fighting. In the real world, the concepts of fighting and dancing have completely different purposes. If I am playing a board game and my opponent carelessly exposes their most valuable piece in a way that clearly offers no strategic advantage, I would simply capture the piece and chalk it up to a blunder. An AI, on the other hand, would try to retrieve a similar scenario from its training patterns (or knowledge set) to decide on the best move. If it hasn’t seen such a naïve mistake before, it might miss the obvious move because it interprets every decision as a strategy to counter, rather than also considering intention (or lack thereof) and the possibility of a mistake (by the way, I’m speaking very generally about game-specialized AIs here; I am not referring to a specific chess bot).
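A caricatural sketch of what I mean (purely illustrative, not how any specific bot is implemented): a “policy” that only maps memorized situations to memorized responses, with no model of the intent behind them.

```python
# Memorized (situation -> response) pairs, learned because they worked before.
policy = {
    "opponent_advances": "block",
    "opponent_feints":   "counter",
    "opponent_retreats": "chase",
}

def bot_move(situation):
    # Anything outside the memorized situations (say, the opponent starts
    # dancing) has no entry, so the bot falls back on a default instead of
    # reasoning about what the behavior means.
    return policy.get(situation, "block")

print(bot_move("opponent_advances"))  # 'block' -- looks competent
print(bot_move("opponent_dances"))    # 'block' -- no notion of intention at all
```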

In order to reach that level of mastery in games, the AI needs to grasp more fundamental concepts like “intention”, “objective”, “rule”, “move”, “winning”, or even the very concept of a game itself, not just internalize commonly effective strategies.

✦✦✦✦

In short, do game-specialized AIs outperform most humans? Sure. Are they truly “better” at games? I don’t think so. Provided a human has trained enough to develop the necessary reflexes and technical knowledge for a given game, they can usually outperform even the smartest AIs. Unless, of course, the required knowledge is simply too vast (for instance, even with a lifetime of training, most humans wouldn’t be able to learn and retain all the strategies used by AlphaGo or AlphaZero).

Subtitle: (3) Generative AI 

This category of AI encompasses systems that operate through pattern-matching: generative AI and especially LLMs. 

In the 1^st^ example (math-specialized AIs), I talked about symbolic systems that use carefully human-crafted frameworks to solve math problems and theorems. Here, I am talking about a completely different type of AI: those based on deep learning. They are trained on a vast amount of data and extract the most recurring templates to be able to deal with new problems. They tend to be less accurate than the symbolic ones, but they also don’t need the problem to be explicitly formalized in a special language to deal with it.

The reason why I bring them up is that LLMs nowadays consistently rank at the top of math and programming competitions. They are trained on enormous math and coding datasets and make use of reinforcement learning, which has proven to be a significant weapon in getting them to internalize gigantic amounts of math and coding templates.

However, it’s mostly regurgitation. Again, there is no common sense here. They learn thousands of templates and apply them in situations that appear sufficiently similar. But first, there is no guarantee they’ll recognize when to apply these templates. Many math and coding problems are worded very similarly but actually require completely different solutions. Knowing when it makes sense to use a template would require understanding the problem in a very deep sense (which is only possible through real-world experience). Second, and more importantly, if the problem doesn’t have a corresponding template in the LLM’s training data, or if the template needs to be adjusted even a little bit, the system is bound to fail. This holds even for problems that could very easily be handled from first principles.

Subtitle: (4) AIs based on objective solutions

There are tasks for which there is an objective algorithm, namely a procedure that always yields the optimal result, sometimes even in optimal time. AIs built on such an objective solution do not require any insight into the material world to complete the task.

A few examples include: Dijkstra for finding the shortest path between nodes, Simplex for finding the optimal value of a linear function under linear constraints, Edmonds-Karp for finding the maximum flow in a network, Huffman for finding a lossless binary encoding that minimizes total encoded size, dynamic programming for finding the optimal order of matrix multiplications to minimize computational cost, etc.
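For instance, here is a standard textbook implementation of Dijkstra’s algorithm in a few lines of Python. It finds the guaranteed-optimal answer for its narrow task without needing any notion of what a “road” or a “distance” actually is:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source`; graph maps node -> [(neighbor, weight >= 0)]."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```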

This is the first true exception on this list where we can legitimately say the AI has mastered the task despite having no common sense. However, this is only possible for extremely narrow and specific tasks. As discussed in section 4, math as a field is anything but narrow. While it does feature narrow tasks, they are endless. There is no objective overarching mathematical framework that would allow a narrow AI system to tackle any theorem or solve any math problem, since the entire field is subjective and open-ended. The same is true for coding. There are tons of narrow coding tasks and problems an AI could optimize for, but the field as a whole is not narrow at all. In general, the intellectual tasks for which an optimal algorithm exists, requiring no intuition from the real world to be solved, are very few to say the least. The majority of human abstractions (math, games, coding) are open-ended tasks that are impossible to completely formalize on a computer.

✦✦✦✦✦✦✦✦

To put a bow on this section, I would say this: yes, we have tons of AI systems that perform better than humans on abstract tasks. But that is hardly evidence that the real world isn’t necessary to understand abstractions. If these systems are faced with a problem that can’t be expressed in the language designed by the engineers, or if the problem is different enough from the AI’s training data, they won’t be able to handle it, even if it’s a very simple problem. It’s relatively easy to design superhuman systems for narrow tasks, but the real world becomes necessary as soon as we aim to build more general systems that can deal with any math, programming or game-related challenge.

Section 12- Is it really EVERYTHING that comes from the physical world?

No, I don’t think so. At least two notable exceptions come to mind: the innate predispositions of infants and existential concepts.

Subtitle: Babies’ innate biases and mental structures

I believe we are born with innate biases and very basic concepts about the world. Babies don’t come into this world as completely blank slates. They are born with some forms of mild expectations about reality. For example, they seem to be hardwired with a basic sense of causality. If ball A rolls and comes into contact with ball B, they expect ball B to move. Otherwise, they’ll show signs of surprise [4]. Moreover, although they obviously cannot anticipate the existence of humans or what our species looks like before being born, they possess innate circuits that enable them to recognize faces very quickly [5]. Therefore, they are either born with minimal expectations about the world or with the necessary mechanisms to quickly learn certain concepts. In both cases, it’s about predispositions that give us a head start on the world.

Babies’ innate mechanisms and reflexes also allow them to engage in a primitive form of “interaction” with their environment: deciding what to look at and pay attention to. While passive observation remains by far their predominant mode of learning, I think their natural instinct for selecting what is worth paying attention to gives them a significant advantage (unlike computers that would look at all parts of an image equally). For instance, their brain is biased to focus on moving entities [6]. In this respect, babies are similar to frogs, which can only detect moving insects. Their eyes are also drawn first towards areas with high visual contrast, which is why a lot of baby toys are striped black and white [7].

They have certain biases that make them focus on specific aspects of the world, which speeds up their learning process. It seems reasonable to think that the specific features of reality human infants are wired to pay attention to are part of what allows them to understand concepts that even the smartest animals never quite grasp at a high level. In short, although we are witnessing the same world as animals, we don’t form the same ideas about it because we don’t focus on the same things.
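To make these attention biases concrete, here is a toy numpy sketch (a deliberately naive illustration, not a model of infant vision): a crude “where to look” map that lights up for motion and high local contrast, unlike a system that weighs every pixel equally.

```python
import numpy as np

def crude_attention_map(prev_frame, frame, patch=8):
    """Toy 'what to look at' map from two consecutive grayscale frames in [0, 1]:
    moving regions (frame difference) and high-contrast patches get high scores."""
    motion = np.abs(frame - prev_frame)
    contrast = np.zeros_like(frame)
    h, w = frame.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = frame[y:y + patch, x:x + patch]
            contrast[y:y + patch, x:x + patch] = block.std()
    saliency = motion + contrast
    return saliency / (saliency.max() + 1e-8)   # normalized: 1 = "look here first"
```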

Finally, I suspect humans might be born with mental structures or mechanisms that allow us to perform deep abstractions. Even if almost every abstraction comes from the physical world, there may be something in our brains that forces us to not just see patterns but patterns of patterns (meta-patterns). It could be the way our neurons are connected or a special cognitive module. I don’t know.

Everything I just listed about the innate “knowledge” or tendencies babies seem to possess only provides, all things considered, a tiny amount of information compared to the physical world. The real world is still the fuel of the brain. It’s the backbone of cognition (the alpha and the omega). Only so much can be pre-encoded in our genome, which allegedly only contains a few megabytes’ worth of data.

Subtitle: Existential concepts (consciousness, justice, etc.) 

I’ll also admit that I would have a hard time tying existential concepts like consciousness, justice, morality, philosophy, and the meaning of life to the physical world in a convincing way. These are notoriously ill-defined concepts with no definitive consensual definitions among scientists and experts. I could see the cognitive traits that allow these concepts to manifest in humans as being either some type of emergent property from the complexity of the brain or being encoded in our genome in some way.

The purpose of this essay is not to claim that EVERYTHING comes from the physical world, but rather to argue that the physical world is deeply involved in nearly every cognitive process in humans.

CHAPTER 3 – IMPLICATIONS FOR AI

Section 13- What is the role of language in intelligence?

As humans, we tend to see language as inseparable from our intelligence. Before going any further, I promise my goal isn’t to belittle or downplay language. The invention of a sophisticated language system was indispensable for our civilization to reach such an advanced state. However, I think language isn’t fundamental to cognition, contrary to what our intuition would naturally lead us to believe.

Subtitle: Language is an important part of the human experience…

Language is a tool that allows our already-existing intelligence to express itself effectively. It’s an efficient way to share information with other intelligent entities. An argument can also be made that language is a crutch that our mind uses to structure our thoughts (that’s why we often like to “think aloud” when dealing with complex problems. Verbalizing one’s thoughts implies achieving some sort of clarity of mind). I believe there have been a lot of studies from linguistics and neuroscience documenting how our mother tongue can really shape the way we think and thus play a pretty big role in cognition. Our personality, the way we see the world, our philosophies on life, all have a connection with our mother tongue to varying degrees.

Subtitle: …But isn’t fundamental

But being able to manipulate language isn’t what makes us smart. What makes us smart is our incredibly efficient ability to process sensory data and make use of it to solve various problems. The mental processes taking place in our brains have very little to do with language. It’s mostly abstract visuals, mental sensations, feelings… Think of all the times you had an idea in mind but couldn’t find the right words for it. Or you attentively followed a professor’s demonstration introducing a complex concept, left the lecture feeling “I think I got the gist of it,” but couldn’t explain what you understood to others if your life depended on it. It’s not due to a lack of understanding. It’s just that at that stage our thoughts still take the form of blurry mental imagery. Images that are deeply personal and that only we can truly understand. While it’s a mess that has some structure, that structure isn’t defined enough to allow those thoughts to be expressed in a standardized, shared language.

Language is “only” a scaffolding for those thoughts. It is to the human brain what reins are to the horse. It's a guide, an aid, not the fundamental engine of cognition. 

Subtitle: Reasoning without language

The reason why we tend to associate intelligence with language is the "inner voice" most of us possess. It’s the mental voice we constantly use when we think or reason. It's hard to imagine thinking without that inner voice, simply because of how omnipresent it seems. In reality, it "runs" alongside far more complex mental processes occurring in the background. You can sometimes notice them during those rare pauses when you're so focused, absorbed, or lost in your thoughts that you don't even have the energy for internal narration. Those quiet background processes are what truly make up intelligent thought. They consist of highly abstract and personal imagery that is only intelligible to ourselves. The thoughts our brain produces in the background are so abstract that we often forget they even exist!

We use our inner voice so much that many people don’t believe it's possible to reason without that voice. But in fact, yes, it's possible to reason without language.

The most obvious examples are animals. They are devoid of any sophisticated language system, yet they are capable of extremely intelligent behavior (solving puzzles, craftiness, cunning...). We have such examples even among us! Toddlers between 8 and 24 months can't really speak outside of unintelligible babbling, yet they can: 

  • understand simple instructions ("go get your blankie," "give it to daddy")
  • use tools (a stick, a spoon, etc.) to retrieve an object stuck under furniture [8]
  • try to open a box in different ways
  • understand the intent behind simple actions [9]
  • imitate their parents' behavior down to subtle nuances (playing drums by mimicking daddy's gestures, comforting a doll like their parents do for them, pretending to make a phone call, washing hands with all the steps in the right order, etc.) [10]

There have also been some famously tragic cases of individuals who, as a result of deliberate experiments or extreme deprivation, were not exposed to any language (or only to a few isolated expressions like "yes," "no," "go," "stop"). Yet, they were still able to reason and display abilities far beyond any AI system today. For example, Genie, a girl discovered at age 13 after years of extreme abuse and isolation, had almost no language exposure beyond one-word utterances (like "stop"). Despite this, she performed above average on visuo-spatial tasks and very clearly wasn't relying on words to reason about them. [11]

Subtitle: Language cannot capture the real world

Remember when I explained that vision, touch, and audio are continuous modalities and why this property helps to extract structure from the world? Language contrasts sharply in that respect. It’s discrete, thus it’s a terrible way to represent the world. I would go as far as to say it cannot describe the world to an entity that wasn’t exposed to sensory data first. To demonstrate this, let's go through an exercise. Let’s try to define a chair with language only! According to dictionaries, a chair is 

a seat, for one person, usually with four legs for support and a rest for the back [12]  

Okay then, is an armchair a chair? Many people would say no. What about stools with tiny backrests? (see this image https://ibb.co/D2yPgLJ ). And what exactly counts as a “leg”? Getting a bit more ridiculous, imagine a rock sitting on a mountain, held in place by four wooden stakes driven into the mountain. This rock can only support one person. If I draw a rectangular backrest on its surface to indicate that people should lean their back against it, does that count as a chair? Obviously not, but technically it satisfies the definition!

Diving deeper, the definition mentions the word “seat”. Dictionaries define a seat as

something designed to support a person in a sitting position, often a chair, bench or sofa [13]. 

Besides the fact that we are already starting to see some recursive definitions (a chair is defined as a seat, which is itself defined... as a chair?), can a rug be considered a seat? And what’s the definition of sitting, exactly? In Japan, the traditional way to sit is to kneel, with legs folded under the thighs and the buttocks resting on the heels (it’s called “seiza”). Yet many Westerners wouldn’t consider this sitting at all. By the way, in this “sitting” position, a rug would make a perfect seat! But rugs typically aren’t included in the definition of a seat.

By now, you should see just how cumbersome it is to define something strictly with language. If I played dumb enough, this exercise could stretch on forever.

At some point, definitions aren’t enough. One needs real-world experience to have a base upon which definitions can be built. Otherwise, we would play this endless game of “A means B, which means C, which means …”. There would never be an ending point because it would never feel rigorous enough. Even worse, most of the time, this exercise ends in a loop: “A means B, which means C, which means … which means A”.
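The loop is easy to exhibit with a toy “dictionary” where every word is defined only in terms of other words (the definitions below are invented for the example):

```python
# Invented mini-dictionary: every word is defined only through other words.
definitions = {
    "chair": ["seat"],
    "seat": ["chair", "sitting"],
    "sitting": ["seat"],
}

def follow(word, seen=()):
    """Follow the first word of each definition until something repeats or is undefined."""
    if word in seen:
        return list(seen) + [word]        # a loop: A means B means ... means A
    if word not in definitions:
        return list(seen) + [word]        # the chain simply dead-ends
    return follow(definitions[word][0], seen + (word,))

print(follow("chair"))  # ['chair', 'seat', 'chair'] -- back where we started
```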

The problem is that it’s simply impossible to describe a continuous world through discrete words and symbols. The world might not be truly continuous in a mathematical sense, but there are orders of magnitude more nuance to reality than what language can capture. The real world is high-dimensional, full of details, and composed of a virtually infinite number of variations of low-level phenomena. Put in layman’s terms, there are orders of magnitude more photons, atoms, and sound waves than words in the dictionary.

Language brings a compression of information that is useful to humans who are already very familiar with what it refers to. Since it focuses on essence rather than details, it makes for a great assisting tool. When someone tells me about their day, I don’t need them to explain all the low-level phenomena that I am already accustomed to (like how they had to account for gravity while climbing the stairs to meet their boss). Speaking is a human activity done with tons of assumptions about the other speaker’s knowledge of the world. So, while language is a great aid, it can’t be the main medium through which one understands the world. There is a reason humans only learn language after they’ve already developed a sufficiently rich model of the world.

In fact, despite language’s undeniable importance to human cognition and civilization, it has its limitations even for us! Imagine trying to explain modern technology to a Babylonian living in 600 BC, including what electricity is, how a computer works and the concept of the Internet, using nothing but language. Even more concrete: imagine assembling a piece of IKEA furniture with a text-only manual. It would be a nightmare! If text presents descriptive limitations even to grounded humans, then what should we think about the feasibility of AGI (i.e., an AI system capable of grasping the nuances of reality at a human level or higher) emerging from this extremely poor modality alone?

Subtitle: AGI could exist without language

To show why sensory data is more fundamental than language, I like to use the idea of “AGI without language”. I can perfectly imagine an AI system at or above human intelligence that doesn’t understand language. We would communicate with it using stylized drawings, visual diagrams, and gestures. To illustrate this, take a task that most people would agree requires human-level reasoning: building a modern house. An AGI could build a house by imagining the building steps in its mind in advance. It would mentally simulate how to install the foundations, build the walls, design the living room, etc. Again, all of this can be done using mental simulations (not in 4K detail, but using fuzzy and stylized mental pictures or mental diagrams). This shows that understanding sensory data is not only necessary but, in a sense, sufficient (even though it’s obviously better if the AI can speak).

Section 14- We don't need to build the Matrix: AGI ≠ 4K VR physics engine

The vast majority of researchers seem to confuse the concept of “understanding the world” with an ability to generate video (like Sora or Veo does). When the need for AI to be able to perform “mental simulations” is raised, people think we are referring to some sort of physics engine. Something that can generate videos faithful to the laws of physics in 4K. Some researchers, like those on the spatial-intelligence team at World Labs, take it even further: 2D videos aren’t enough; we need to get AI to generate hyper-realistic 3D videos of the world. Basically, a virtual world indistinguishable from reality (all love to the team, by the way. I love Fei-Fei Li!).

Subtitle: The brain isn’t about precision or exactitude

As you can probably infer by now, I disagree with this vision. Humans demonstrate a high level of intelligence, and most of us can’t even draw a perfect circle without training (artists aside). I can already hear some people pointing out that it's a hand problem, not a brain one.

So, let's perform a thought experiment.  

----

Picture a perfect circle on a sketchbook. Now mentally add two dots and an arc to the circle. You now have a smiling emoji. Picture that emoji slowly coming off the page (but with the same orientation in space as when it was on the sketchbook). Once it's a few centimeters up in the air, make it tilt upward by 30-45° along the x-axis, as if it wanted to face you but doesn't quite become vertical. Can you perfectly visualize what the shadow of the emoji would look like on the paper? (hint: the entire face, including the circle, eyes, and mouth, would be slightly distorted in the shadow.) In general, could you picture what your own shadow would look like on a sunny day? Even the best artists struggle with this! If you’re still not convinced, think of something super familiar you’ve seen countless times like your father’s face. Try to recall it in full detail, down to the moles, gray hairs, tiny scars and the exact texture of the skin. Chances are you’ll be able to reconstruct the most distinctive facial features, but most details will remain blurry.

----

This should suffice to show that the brain doesn’t make perfect simulations of the world. It provides just enough details to create an illusion of a pixel-perfect mental model, but stressing that model easily reveals the gaps. When I refer to mental simulations, it doesn’t have to be pixel-perfect ones. As I have already pointed out many times in this essay, we often rely on either blurry or heavily stylized imagery. You don’t need to picture real arrows in 4K to understand their spatial relationships and figure out that 3 vectors can’t all be linearly independent in a 2D space. The human eye doesn’t even see reality as it is! Once the light signals reach the visual cortex, the brain filters a lot of information out. We see general shapes, obvious visual patterns, but rarely details. In fact, human vision is what scientists call “foveated”. When we look at an image, the brain focuses on a very small portion of the image (the focal point) and progressively blurs the surrounding regions (the peripheral area) depending on their distance from that focal point [14]. The brain uses selective attention mechanisms and many heuristics in order to focus only on information that seems relevant. 
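As a rough illustration of foveation (a simple blending trick, not a model of the visual cortex), here is a numpy/scipy sketch that keeps a focal point sharp and blurs the image more and more with distance from it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, focal_xy, max_sigma=8.0):
    """Crude foveation: sharp at the focal point, increasingly blurred toward the periphery.
    `image` is an H x W x 3 array; `focal_xy` is (x, y) in pixel coordinates."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = focal_xy
    dist = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2)
    weight = (dist / dist.max())[..., None]          # 0 at the fovea, 1 at the far periphery
    blurred = gaussian_filter(image.astype(float), sigma=(max_sigma, max_sigma, 0))
    return (1 - weight) * image + weight * blurred   # blend sharp and blurred versions

# Usage (img being any H x W x 3 numpy array):
# out = foveate(img, focal_xy=(img.shape[1] // 2, img.shape[0] // 2))
```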

Subtitle: Brains’ lack of precision is a strength

We don't process images exactly as they are, not because biology is flawed, but because reality is overwhelming. I believe there is simply too much visual information to focus on everything and still understand something meaningful about the world. Our attention would be completely diluted, and we would miss critical information about reality (by the way, I think that’s why video generators today still make stupid mistakes despite being able to generate stunning 4K videos: they’re focusing on so much at once that they don’t even grasp basic physics).

Instead, our brain and eyes construct their own simplified version of reality to help us concentrate on what matters to us. I think even the most intelligent and sophisticated system would NEED to implement strategies to simplify the world. It's an inescapable constraint born from nature’s unbelievable complexity. That constraint would probably apply even to the mythical concept of ASI. 

Fun fact: it’s this simplification of reality that makes us susceptible to optical illusions since we don't really see the world as it is.

So no, not only do our brains not perform hyper-realistic simulations of the world, we can’t even perceive the world at such a level. There are also strong reasons to believe that even theoretically perfect simulations will always be out of reach, regardless of the potential use of supercomputers or ASI (unless we’re going to literally model atoms and subparticles…).

Subtitle: Approximations are always enough + examples

I’ll take this even further. For the vast majority of cognitive tasks, approximations are more than enough. Video generators or virtual simulations would be completely unnecessary. 

Let’s say I want to predict how a friend will reply to a playful teasing message I sent their way. Just knowing that this friend is pretty sensitive and thin-skinned is more than enough to make a reasonable guess of their behavior. I don’t need to simulate the firing of every neuron in their brain. If I want to predict how my parents will react once they learn I left my little sister at home alone, I just need to remember how they reacted (roughly) the last time they dealt with an intensely emotional situation. I don’t need to remember the exact movements they did pixel by pixel (or particle by particle) at the time. I don’t need to predict what their future reaction will be at a muscle level or the exact words they’ll use. However, I can make a simpler prediction like "they'll probably throw something at me so I better be prepared to dodge, though I can’t predict the direction in advance".

Intelligence is often about making predictions in a simplified space where irrelevant details are eliminated. It is all about knowing how to simplify reality to extract the most important information.

This holds for animals as well. While calculating how to reach a treetop, squirrels don't examine the texture of every piece of bark or the fibers of the leaves. They take barely a few seconds to scan the scene (quickly assessing the overall distance, the apparent sturdiness of the branches, and maybe their own energy level).

Subtitle: Perfect simulations are fundamentally impossible

Sure, if we could simulate reality at an atomic level to the point where we could predict everything with infinite precision, then we could achieve literal miracles: curing cancer, preventing catastrophes decades before their occurrence, etc. But unfortunately, I think it’s probably impossible for many reasons. 

-First, there is still a lot we don’t fully understand about the physical world. We went from atomic theory to quantum field theory, and yet we still haven't found THE theory that explains how the world works at a subparticle level (not a physics expert here, so excuse my ignorance and potential inaccuracies). 

-Second, even if we knew exactly how everything works, some people would argue there is intrinsic randomness in nature. 

-Third, even if the universe were actually deterministic, we know it’s deeply chaotic (small errors at a microscopic scale can lead to completely different results at a macro level), so it’s completely unpredictable in practice anyway, unless we find a way to build computers with infinite precision.

Finally, even if it were possible to simulate reality at subatomic levels, humans constitute an existence proof that it’s not necessary in order to build AGI.

➤ Side question: How are video generators able to generate (mostly) coherent videos if they are devoid of semantic understanding?

The answer is simply: massive-scale training. These systems are trained on an enormous number of videos in hopes of anticipating virtually any prompt a user could present.

They don’t memorize prompts word for word or videos pixel by pixel (which is a common misconception among AI skeptics), but they memorize textual and visual patterns. They regurgitate superficial patterns. They learn to associate certain keywords and sentence structures with recurring texture schemes and motion dynamics. 

However, the resulting system is very brittle. Any semi-original prompt ends in erratic outputs (especially if the generated clip exceeds a few seconds). The most basic laws of physics are disregarded the moment the user steps outside the anticipated scenarios. When a system seems to grasp physics in familiar settings but collapses into nonsense in unfamiliar ones, the simplest explanation is that it never truly understood physics in the first place. One does not selectively understand gravity. You might not know how it works in complicated scenarios, but if it applies to an apple then it’ll always apply to all apples, regardless of the context or of who is holding the apple.

Section 15- Where are we at? Isn't multimodality baked into modern AI?

Many people mistakenly confuse understanding the physical world with simply recognizing objects or describing what is happening in an image. From this perspective, it’s tempting to claim we have systems that can understand the physical world to some extent: CNNs, Vision-Transformers, Multimodal LLMs, VLMs, VLAs, Generative AI in general, etc. 

This is only a surface-level understanding at best. 

To begin with, the most advanced versions of these systems still make shockingly “stupid” mistakes like not being able to tell which is bigger between a small 2cm circle and a 10cm one, when both are literally next to each other in the same image (see the examples in the reference section [15]).

Furthermore, understanding the physical world means not only being able to describe what you see (and touch, hear…) but also much more: What are the likely intentions of the people involved in the visual scene? What is the most likely context? What is likely to happen next in a second, a minute, an hour, days, years? If I replaced or removed one of the elements of the scene, what would happen? Obviously, there is an infinite number of answers to these questions, but only a subset is plausible and coherent with reality (that subset still being infinite). Current AI is far from any of that, and multiple benchmarks show this convincingly (look up the new benchmarks introduced by Meta to test V-JEPA 2 [16]).

Finally, what we ultimately want is for AI to develop such a rich model of the world that it is able to build its own abstractions to describe said world. That could manifest through:

  • inventing new categories
  • inventing its own metaphors
  • inventing a new language
  • inventing a new system to capture the logical structures of the world (aka a new math system)
  • etc.

Unless we pair current vision systems with LLMs, none of them has yet shown an emergent ability to do these things natively, or even the potential for it. To be clear, I am not asking vision systems to magically speak (even for humans, speech is largely culturally inherited), but to form visual abilities good enough for concepts to naturally emerge well before learning a specific language.

Take categories, for example. Humans have an extraordinary ability to recognize not just standard categories like “dog” or “car,” but also to mentally distinguish far more nuanced elements of reality. Many things in life don't have official names, but our mind can somehow clearly tell them apart with expressions like: "this weird T-shaped corner I see all the time in this building", "the clouds that look like dragons" (vs the ones that look like teapots). This ability is precisely what allows us to design rich languages. Babies develop an advanced grasp of visual features and subtle categories before they can even say a single word. 

Therefore, what I am advocating is for AI to have such a good grasp of visual data that concepts would already exist in its mind even before it has official words for them.

Subtitle: Is everything doom and gloom?

AI has made a lot of progress, deep learning in particular. LLMs have mastered language, and we have vision systems (CNNs, multimodal LLMs, JEPA, etc.) which can deal with images at a surface level. However, I think we are still nowhere near having a system that understands the physical world at a deep enough level to reliably exhibit the abilities I just outlined. Current systems fall short of even animals’ understanding of the real world, and by a wide margin.

But such an assessment is not as pessimistic as one would think. Animals are extremely smart. Failing to meet their intelligence baseline is no disgrace by any means.

Some of them can:

  • adapt very quickly to new environments with minimal trial and error
  • solve unfamiliar puzzles
  • open doors just by observing
  • drive (e.g. orangutans)
  • plan highly complex actions simply by scanning their surroundings (e.g. cats are amazing at figuring out how to reach platforms by jumping on furniture. They can plan everything in their head while staying perfectly still).

I think we are missing fundamental ideas to reach an animal-level understanding of the world. But once we get there, going from cat-level to human-level could be much faster than we think! It’s just about figuring out the fundamentals of intelligence.

Section 16- How to build AGI?

This text is already way too long, but it wouldn’t be complete without giving my opinion about how we should approach AGI. I am very open-minded when it comes to building AGI. Intelligence is the most difficult natural phenomenon to apprehend, so I welcome any approach that comes with a good justification.

Subtitle: Only 2 requirements for AGI

To remain consistent with everything I've said so far, I think researchers only have two requirements to satisfy, no matter the strategy they choose to implement (at least for those taking the deep learning route):

1) Training the AI on video

In just a few days of uploads, YouTube alone provides more visual data than a child has seen in 4 years of life (because tons of videos get uploaded every second; see the rough calculation in [17]). So, video data shouldn’t be an issue for AI. Ideally, the most suitable videos would be simple ones showing nature in action or people interacting. I think VLOGs would be perfect for this.

Once the training is complete, the researchers should stress the AI as much as possible to figure out the extent to which it truly understands the physical world. We have a lot of benchmarks for this. Some test the AI's ability to detect videos that break physical laws. Others show a given video and ask it to predict what would happen in the future within a certain timeframe. Others, in turn, ask it textual questions about the video (this is possible by jointly training the "visual AI" with an LLM). Currently, even the SOTA on these tasks falls ridiculously short of human or even animal-level understanding.
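A minimal sketch of the kind of stress test I have in mind (all names here are hypothetical; `model` stands for whatever video system is being evaluated):

```python
def physics_plausibility_accuracy(model, labeled_clips):
    """labeled_clips: list of (clip, is_physically_plausible) pairs.
    The model is asked to judge each clip; we report plain accuracy."""
    correct = 0
    for clip, is_plausible in labeled_clips:
        prediction = model.judge_plausibility(clip)   # hypothetical model method
        correct += int(prediction == is_plausible)
    return correct / len(labeled_clips)
```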

2) Avoiding direct pixel manipulation as much as possible

As I’ve said earlier, the human brain rarely focuses on low-level details. Even in the most detail-oriented tasks like a watchmaker repairing a watch or an artist analyzing a sketch, we perceive textures and patterns, not anything that would be equivalent to pixels. 

However, for AI specifically, I am not necessarily as extreme as LeCun, who insists that AI should never operate at the pixel level no matter what. AI isn’t necessarily supposed to perfectly imitate biology. I just think the AI should primarily focus on broader, more obvious elements (general shape, general movement and rough dynamics) and only drop down to pixels when the task really demands it. Whether or not AI should pay attention to individual pixels at all is still an open question for me. Time will tell.
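To show what “not reasoning at the pixel level” can look like in practice, here is a crude numpy sketch that summarizes an image into coarse patches (in the spirit of patch-based vision models, though real systems learn their patch representations rather than simply averaging):

```python
import numpy as np

def to_patches(image, patch=16):
    """Coarse view of an image: each patch x patch block collapses to its mean value,
    so downstream reasoning sees a few hundred summaries instead of millions of pixels."""
    h = image.shape[0] - image.shape[0] % patch      # crop to a multiple of the patch size
    w = image.shape[1] - image.shape[1] % patch
    blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch, -1)
    return blocks.mean(axis=(1, 3))                  # shape: (h / patch, w / patch, channels)
```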

✦✦✦✦

Essentially, with these two requirements, I am proposing that researchers try to create a brain (a neural network) trained on video that nails as many benchmarks on real-world reasoning as possible. Once that brain has been created, we would only need to combine it with an LLM to translate its visual reasoning processes into words. 

As for the mechanisms used inside that neural network, it’s completely up to the researcher. Predictive coding, foveated vision, saccadic glimpsing, active inference, renormalization, neuronal synchronization… it doesn’t matter as long as it helps the AI to understand the physical world.

Subtitle: Isn’t this proposal just a World Model?

My entire proposal for AGI already has a name in AI: world models. That's the common name people use for AI systems specialized in video. However, I refrain from using that word because people associate it with video generators. Again, the goal isn't for AI to generate video at all. We shouldn't care about whether or not something is output by the model. What we care about is only its UNDERSTANDING of video. Feed it a video, ask it questions about it, and if it can answer accurately, then we can confidently assert that the AI "has a good world model". Put it into a robot, and if reliable real-world performance comes out of it, then that's indirect evidence of a robust world model.

A world model, in my definition of the term, has nothing to do with video generation. It’s not about modeling the entire physical reality, but about capturing just enough to perform well on relevant tasks. It’s about forming useful abstractions, even if they are biased or technically incorrect (just like it is for us humans!).

The driving force behind the obsession over video generation is that it’s a very intuitive way of judging whether or not a system has formed a good model of the world. We need to visualize its understanding, so to speak. If the AI can produce beautiful and coherent videos, then to us that’s a convincing sign it gets how the world works (including intuitive physics, how people behave, what the notion of “intention” is, etc.). 

Unfortunately, I think we will have to rely on more indirect evidence. After all, we know humans and animals are smart and yet we can’t directly see inside others’ brains. How do we know it? We know it because every day our intelligence is put to the test in a wide variety of scenarios. The same should apply to AI.

Subtitle: AI needs to learn on its own

I would also like to hammer home the importance for AI to form ITS OWN abstractions (aka learn by itself). I point this out because I’ve heard people proposing to manually design a world model by hand, by providing a system with exact physics formulas describing the world (like F = ma, v = d/t, PV = nRT, etc.). 

This isn’t a good idea for a few reasons:

a) These equations are simplifications of much more complex phenomena. The real world cannot be captured by equations. I once heard that "the only things we ever manage to model with math and equations are extremely simple phenomena we already intuitively understood long ago". We could cram hundreds of thousands of equations into a system, and we’d still be light-years behind the physics mental model possessed by a cat.

b) More generally, as I’ve repeated many times, it’s not a good idea for humans to manually describe the world to AI because we would only provide it with an extremely poor version of reality. Our mental abstractions are far more complex than anything we can express, verbalize or formalize on a computer. It's better to stick to designing a solid enough framework that allows the AI to learn independently.

Section 17- Closing thoughts

In conclusion, I believe it’s a major mistake to dissociate intellectual endeavors (math, coding, science) from physical ones (grabbing objects, furniture assembly, spatial localization...). My view is that it's impossible to create an AI that is truly “human-level” at math but doesn’t at the very least have the potential to also be good at assembling IKEA furniture with minimal training. It might not yet have the required physical body to perform the task, but it would have the required grasp of physical reality to do it after only a handful of repetitions or trials at most.

My thought process is simple: I believe the concepts involved in intellectual fields are fundamentally the same as those involved in physical tasks. Humans acquire them through vision, touch, hearing and physical interactions. However, my position is that the subset of concepts relevant to intellectual fields specifically could be acquired through vision alone (watching YouTube videos).

--------

I didn’t invent any of the ideas discussed in this essay. I developed them largely by listening extensively to LeCun’s interviews and through my growing interest in neuroscience and AI research in general. These ideas have also all been hinted at by various people before. There are entire fields (embodied cognition, for instance) that argue intelligence cannot exist without embodiment.

However, I have this impression that LeCun is truly one of a kind in this community. He is the only researcher I know who happens to meet all of these criteria at once:

  • understands that the physical world is the fuel of intelligence
  • understands that vision in particular is very powerful
  • has some notion of biology
  • understands that living beings do not have 4K simulators in their brains
  • is established enough to have the means to bring his ideas to life (with Meta’s capital)

His position at Meta is an amazing opportunity as it lets him test different ideas at scale without worrying about funds drying up.

I hope more people embrace these ideas in the future. Vision is hard, and we need as much brainpower as possible to “solve” it. Current vision systems sometimes struggle with simple counting or telling which object is bigger between two objects with drastic size differences. Getting AI to understand the world through vision will not be easy. Reaching animal-level intelligence alone will be a huge mountain to climb … but that’s what makes the journey exciting for me!

Hope to further discuss this topic with you on r/newAIParadigms/ !

REFERENCES

-[1]:

Michael Faraday and field lines: 

https://www.britannica.com/science/electromagnetism/Faradays-discovery-of-electric-induction

-[2]:
Reminder: ZFC is the most widely accepted system of axioms used today as the foundation of modern mathematics (not universal, as I hinted in the text, but close!)
=>Examples of "most likely true" (or false) but unprovable statements within ZFC: 
→Martin's maximum
→Projective determinacy
→Large cardinal property (https://en.wikipedia.org/wiki/Large_cardinal)
→Continuum Hypothesis (most likely false)

-[3]: 

Mathematicians: Lawrence T. Wos, Abraham Nemeth

Programmers: T.V. Raman (not born blind, though)

-[4]:

Babies are born with basic causality: https://pubmed.ncbi.nlm.nih.gov/23587033/

-[5]:

Babies learn to recognize faces very quickly: https://pmc.ncbi.nlm.nih.gov/articles/PMC4496551/

-[6]:

Babies’ bias towards moving entities

→ 1st source: https://www.nature.com/articles/s41598-020-79451-3

→ 2nd source: https://www.aao.org/eye-health/tips-prevention/baby-vision-development-first-year

-[7]:

Attraction to high-contrast: https://www.parentingcounts.org/vision-attracted-to-high-contrast-patterns-edges-0-2-months/

-[8]:

Tool use: https://www.researchgate.net/publication/227499349_The_Beginnings_of_Tool_Use_by_Infants_and_Toddlers

-[9]:

Intentions: https://pubmed.ncbi.nlm.nih.gov/27522041/

-[10]:

Imitation 1: https://www.washington.edu/news/2013/10/30/a-first-step-in-learning-by-imitation-baby-brains-respond-to-anothers-actions/

Imitation 2: https://pmc.ncbi.nlm.nih.gov/articles/PMC10586717/

-[11]:

Genie's performance on visuo-spatial tasks: https://en.wikipedia.org/wiki/Genie_(feral_child)

-[12]:

Definition of chair: https://www.dictionary.com/browse/chair

-[13]:

Definition of seat: https://www.dictionary.com/browse/seat

-[14]:

Foveated vision: https://arxiv.org/abs/1807.08476

-[15]:

Stupid mistakes: 

https://www.reddit.com/r/OpenAI/comments/1mm968o/7_billion_phds_in_you_pocket/

https://www.reddit.com/r/ChatGPT/comments/1l7150r/o3_also_fails_spectacularly_with_the_fake/

https://www.reddit.com/r/singularity/comments/1kt1j6k/sonnet_4_cant_even_get_a_simple_image_prompt/

-[16]:

Physical world benchmarks: https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/

-[17]:

→Amount of video data: https://x.com/ylecun/status/1750614681209983231

→Simple calculation: 

- According to Yann LeCun, a 4-year-old child has seen 1E15 "bytes" of visual data 

-More than 500hours of videos are uploaded every minute (https://soax.com/research/how-many-hours-of-video-are-uploaded-to-youtube-every-minute).

-That's 30k hours uploaded every hour (500*60).

-Each hour of video is about 562.5MB (assuming the videos are in 480p on average), thus ~ 562 500 000 bytes. Source: https://support.smoothcomp.com/article/247-how-much-data-will-be-used-for-streaming

-Total bytes: 30k * 562 500 000 bytes = ~ 1.6875 × 10¹³ bytes (1.7E13)

=>Thus, compressed 480p uploads reach the 1×10^15^ bytes a child is exposed to in four years of life (per LeCun's estimate) after roughly 59 hours, i.e. about 2.5 days of YouTube uploads. And since compressed files are roughly two orders of magnitude smaller than the raw pixel streams a retina actually receives, a single hour of uploads already corresponds to more raw visual data than that.
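The same calculation in a few lines of Python, for anyone who wants to tweak the assumptions:

```python
child_bytes    = 1e15             # LeCun's estimate for a 4-year-old's visual input
hours_per_hour = 500 * 60         # 500 hours of video uploaded to YouTube per minute
bytes_per_hour = 562_500_000      # ~562.5 MB per hour of compressed 480p video

uploaded_per_hour = hours_per_hour * bytes_per_hour
print(uploaded_per_hour)                  # ~1.7e13 bytes of compressed video per hour
print(child_bytes / uploaded_per_hour)    # ~59 hours (~2.5 days) to match the child
```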

Note: I wouldn't be surprised if I made a mistake or if some of these figures are really rough approximations so please take these numbers with a grain of salt. One of my sources is about streaming on YouTube but I don't know if streaming consumes as much as simple uploading...

Pub: 20 Sep 2025 06:45 UTC