➤The motivation behind V-JEPA 2
V-JEPA 2 is the new world model from LeCun's research team, designed to understand the physical world simply by watching videos. The motivation for getting AI to grasp the physical world is straightforward: some researchers believe that understanding the physical world is the basis of all intelligence, even for more abstract thinking like math (a belief that is not universally shared and remains somewhat controversial).
V-JEPA 2 achieves SOTA results on nearly all reasoning tasks about the physical world: recognizing what action is happening in a video, predicting what will happen next, understanding causality, intentions, etc.
➤How it works
V-JEPA 2 is trained to predict the future of a video in a simplified space. Instead of predicting the continuation of the video in full pixels, it makes its prediction in a simpler space where irrelevant details are eliminated. Think of it like predicting how your parents would react if they found out you stole money from them. You can't predict their reaction at the muscle level (literally their exact movements, the exact words they will use, etc.) but you can make a simpler prediction like "they'll probably throw something at me so I better be prepared to dodge".
V-JEPA 2's avoidance of pixel-level predictions makes it a non-generative model. Its training, in theory, should allow it to understand how the real world works (how people behave, how nature works, etc.).
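To make the "prediction in a simplified space" idea concrete, here is a minimal PyTorch sketch of a JEPA-style training step (the module names and sizes are my own toy choices, not Meta's actual code): the loss compares a predicted embedding against the embedding of the future frames, never against raw pixels.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real networks (sizes and shapes are made up for illustration)
embed_dim = 256
context_encoder = nn.Linear(3 * 16 * 16, embed_dim)   # encodes the frames the model has seen
target_encoder  = nn.Linear(3 * 16 * 16, embed_dim)   # encodes the future frames (no gradients)
predictor = nn.Sequential(
    nn.Linear(embed_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
)

def jepa_step(context_patches, future_patches):
    """Predict the *embedding* of the future, not its pixels."""
    ctx = context_encoder(context_patches)
    with torch.no_grad():                        # targets live in the "simplified space"
        tgt = target_encoder(future_patches)     # irrelevant pixel details are abstracted away
    pred = predictor(ctx)                        # the prediction happens in latent space
    return nn.functional.l1_loss(pred, tgt)      # error is measured between embeddings

# Random tensors standing in for flattened video patches
loss = jepa_step(torch.randn(8, 3 * 16 * 16), torch.randn(8, 3 * 16 * 16))
loss.backward()                                  # only the context encoder and predictor learn
```

Because the target is itself an embedding, the model is free to discard the "muscle-level" details (exact textures, lighting, exact trajectories) as long as it keeps whatever matters for predicting what comes next.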
➤Benchmarks used to test V-JEPA 2
V-JEPA 2 was tested on at least 6 benchmarks. Those benchmarks present videos to the model and then ask it questions about those videos. The questions range from simple tests of physical understanding (can it tell that something impossible happened at some point?) to tests of causality, intentions, etc. (does it understand that reaching for a cutting board implies wanting to cut something?)
Benchmark 1 — Something-Something v2
Objective: recognizing a simple action or interaction in a video (tested using multiple choice!)
Input:
- A video of a few seconds with very little complexity (typically one simple action)
- Ex: Video showing someone opening a big box
(Expected) Output:
- Selecting a description of the video based on multiple choices
- Example choices: a) Someone is pushing a box, b) Someone is opening a box, c) Someone is breaking a box
To answer the questions on this benchmark, they simply trained a classifier head on top of V-JEPA 2's frozen embeddings: the classifier learns to map those embeddings to the right description and reuses V-JEPA 2's understanding to pick the correct answer on new, unseen videos (a minimal sketch of this probing setup follows the scores below).
V-JEPA 2 score: 77.3% (SOTA)
Human score: not found
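Here is roughly what that probing setup could look like in code (a minimal sketch with a dummy backbone standing in for V-JEPA 2; the real setup reportedly uses a more elaborate probe):

```python
import torch
import torch.nn as nn

class DummyFrozenBackbone(nn.Module):
    """Stand-in for the frozen V-JEPA 2 encoder: maps a video clip to token embeddings."""
    def __init__(self, embed_dim=1024):
        super().__init__()
        self.embed_dim = embed_dim
    @torch.no_grad()
    def forward(self, video):                        # video: (batch, frames, 3, H, W)
        return torch.randn(video.shape[0], 16, self.embed_dim)   # fake patch embeddings

backbone = DummyFrozenBackbone()
num_classes = 174                                    # Something-Something v2 has 174 action classes
probe = nn.Linear(backbone.embed_dim, num_classes)   # the only trainable part

def classify(video):
    feats = backbone(video)          # frozen "understanding" of the clip
    pooled = feats.mean(dim=1)       # crude pooling of the token embeddings
    return probe(pooled)             # logits over the candidate action descriptions

logits = classify(torch.randn(2, 16, 3, 224, 224))
loss = nn.functional.cross_entropy(logits, torch.tensor([5, 42]))
loss.backward()                      # gradients only flow into the probe, not the backbone
```

The point is that V-JEPA 2 itself is never trained on the benchmark; only the small probe on top is.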
Benchmark 2 — Epic-Kitchens-100
Objective: predict the next action in a first-person video, within a realistic and noisy environment.
Input:
- A video showing someone in first-person view just before they take an action
- Ex: A hand is seen approaching a cutting board
(Expected) Output:
- Select the most likely next action from a fixed set of options. The model must anticipate an action that will take place a fixed time ahead (1 second, for example)
- Ex: "Pick nose", "Cut onion", "Call someone"
V-JEPA 2 score: 39.7% (+12.1% improvement on previous SOTA)
Human score: not found
Benchmark 3 — VideoQA
Objective: testing only the ability to understand a scene (not, for instance, the ability to make predictions).
For this benchmark, the questions are a bit more open-ended (to force more semantic understanding of the scene). Since V-JEPA 2 only understands visual data, it is paired with an LLM through an intermediary module, which lets it answer in natural language. Basically, the LLM is augmented with V-JEPA 2 rather than with a generative vision module (like VLMs do).
The questions are kept simple and short because what we want to test is specifically V-JEPA 2's understanding, not the LLM's ability to understand language.
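As a rough sketch of what such an intermediary module could look like (all names and dimensions here are hypothetical, not the actual V-JEPA 2 + LLM pipeline): the frozen video embeddings are projected into the LLM's token-embedding space and prepended to the question tokens, so the LLM answers conditioned on the video without any pixel generation.

```python
import torch
import torch.nn as nn

video_dim, llm_dim = 1024, 4096                  # assumed embedding sizes

# Trainable bridge between the frozen video encoder and the language model
projector = nn.Sequential(
    nn.Linear(video_dim, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

def build_llm_inputs(video_embeddings, question_embeddings):
    """video_embeddings: (batch, n_video_tokens, video_dim)
    question_embeddings: (batch, n_text_tokens, llm_dim), from the LLM's own embedding layer"""
    visual_tokens = projector(video_embeddings)              # map video tokens into LLM space
    return torch.cat([visual_tokens, question_embeddings], dim=1)   # video first, then the question

# Toy usage: 16 video tokens followed by a 12-token question
inputs = build_llm_inputs(torch.randn(2, 16, video_dim), torch.randn(2, 12, llm_dim))
print(inputs.shape)   # torch.Size([2, 28, 4096])
```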
Input:
- A video showing a complex set of actions and events
- Ex: Someone enters a room, takes the cellphone on the table, puts it on top of the drawer instead, takes a box and puts it below the drawer, then exits the room
- Text asking a question about the video
- Ex: "Which object was moved first?"
(Expected) Output:
- A textual response. Example: "the phone"
The output is manually evaluated by human testers (since you can't really automate the evaluation for open-ended answers)
In some cases, they even evaluated V-JEPA 2's perception of time! For example, they'd show a video of a box slowly falling from a shelf after a gust of wind and ask "how much time elapsed between the gust of wind and the fall of the box?".
V-JEPA 2 score: between 76.9 and 84%
Human score: 85-95%
Now, here are the benchmarks created by Meta themselves (they are much harder):
Benchmark 4 — IntPhys 2
Objective: detect a violation of a physical law in a video
Input:
- Two videos are shown. Both are very similar but one of them contains an impossible event (that violates a physical law)
- Example:
- Version 1: A ball falls from a shelf and hits the floor (normal)
- Version 2: A ball falls from a shelf but goes through the floor (impossible)
(Expected) Output:
- Choose the video that contains a violation of a physical law
- Example: Version 2
V-JEPA 2 score: ~50% (essentially chance level)
Human score: ~95%
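The post doesn't spell out how the model makes this choice, but one natural way to use a predictive world model as a violation detector (a sketch of my own, with a toy predictor standing in for the real one) is to measure "surprise": run both clips through the model and flag the clip whose future embeddings were hardest to predict.

```python
import torch
import torch.nn as nn

class ToyLatentPredictor(nn.Module):
    """Hypothetical stand-in for a learned predictor over per-frame embeddings."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Linear(dim, dim)
    def predict_next(self, past_embeddings):        # (t, dim) -> predicted embedding for frame t+1
        return self.net(past_embeddings[-1])

def surprise(model, clip_embeddings):
    """Average prediction error over a clip of frame embeddings, shape (frames, dim)."""
    errors = []
    for t in range(1, clip_embeddings.shape[0]):
        pred = model.predict_next(clip_embeddings[:t])
        errors.append(nn.functional.l1_loss(pred, clip_embeddings[t]))
    return torch.stack(errors).mean()

def pick_impossible(model, clip_a, clip_b):
    # The clip whose future is harder to predict is flagged as the physically impossible one
    return "A" if surprise(model, clip_a) > surprise(model, clip_b) else "B"

model = ToyLatentPredictor()
print(pick_impossible(model, torch.randn(30, 256), torch.randn(30, 256)))
```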
Benchmark 5 — MVPBench (Minimal Video Pairs)
Note: this benchmark is technically still part of VideoQA but is way harder.
Objective:
Testing whether the model can grasp the subtle differences between two highly similar videos. It needs to prove that it has a robust understanding of the content of a video by answering the same question asked twice with two very slightly different versions of the same video (hence the term "video pairs").
Input:
- Two nearly identical video versions
- Version A: A person pulls a box toward themselves
- Version B: The same person pushes the box away
The two versions of the video have the same setting, the same objects, the same actor, and nearly identical gestures. Everything is designed to prevent the system from cheating by exploiting spurious features of the scene to make the correct decision (like noticing that in videos where the person is pushing the object, the box is usually bigger)
- Textual question: "Was the box pushed or pulled?"
(Expected) Output:
- Multiple choice (often between 2 and 4 options)
- Example:
- Video A → pulled
- Video B → pushed
If the model were to answer "pulled" to both, it would get marked wrong for the entire pair because it hasn't proven that it really understands the difference between the two versions of the video (maybe it got one right by chance). This forces the model to prove its understanding by answering correctly to essentially the same question framed slightly differently. This "double-checking" makes the benchmark really hard, even for V-JEPA 2.
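To make the scoring rule explicit, here is my reading of paired scoring as a small sketch (not an official implementation): a pair only counts as correct when both versions are answered correctly.

```python
def paired_accuracy(predictions, ground_truth):
    """predictions / ground_truth: lists of (answer_for_version_A, answer_for_version_B) tuples.
    A pair only scores if BOTH versions are answered correctly."""
    correct_pairs = sum(
        1 for (pa, pb), (ga, gb) in zip(predictions, ground_truth)
        if pa == ga and pb == gb
    )
    return correct_pairs / len(ground_truth)

# A model that blindly answers "pulled" to both versions gets 0 on this pair:
print(paired_accuracy([("pulled", "pulled")], [("pulled", "pushed")]))  # 0.0
print(paired_accuracy([("pulled", "pushed")], [("pulled", "pushed")]))  # 1.0
```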
Another example:
Input:
- Version A: Object falls on its own
- Version B: A finger lightly touches it before it falls
- Textual question: "Did the object fall by itself or was it pushed?"
Expected Output:
- Video A → by itself
- Video B → pushed
V-JEPA 2 score: 44.5%
Human score: 92.9%
Benchmark 6 — CausalVQA
Objective: testing causal reasoning. Instead of just choosing a description of what is happening in a video or simply predicting what's next, this benchmark requires the model to "reason" and simulate situations mentally.
Input:
- A video showing a complex situation
- Ex: a flower vase is sitting at the edge of a shelf. Below are two boxes: a big one and a smaller one. Someone opens a door, a gust of wind blows in and causes the flower vase to fall over. The flower vase ends up inside the big box
- Textual questions (requiring reasoning about causality):
- How to prevent the flower vase from falling?
- What would happen if the door hadn't been opened?
- How to get the vase into the smaller box instead?
Those questions require mental simulations of hypothetical situations (not just predicting what's going to happen next). The answers are also way more open-ended.
(Expected) Output:
- Open-ended answers
- Example:
- "Not opening the door" or "move back the vase so it's not at the edge of the shelf"
- "The vase wouldn't fall due to the absence of the wind"
- "Moving and placing the vase above the small box" or "moving and placing the small box under the vase"
Again, since the answers are a bit more diverse and open-ended, human evaluators are required to evaluate the AI's response.
V-JEPA 2 score: not found (Meta says it's very bad)
Human score: 85–95%
➤General remarks
- Completely self-supervised learning.
No human-provided labels. It learns how the world works purely by observation (by watching videos)
- Zero-shot generalization in many tasks.
Generally speaking, in today's robotics, systems need to be fine-tuned for everything: fine-tuned for new environments, fine-tuned if the robot arm is slightly different from the one used during training, etc.
V-JEPA 2 is given only 62 hours of robot interaction data (the DROID dataset) and is then able to:
🠖 control different robotic arms (even if they have different shapes, joints, etc.)
🠖 work in unknown environments
🠖 demonstrate transfer learning on new tasks it hasn't seen before
It achieves 65-80% accuracy on tasks like "take an object and place it over there" even if it has never seen the object or place before (a rough sketch of this kind of latent-space planning is given below)
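Here is a deliberately simplified sketch of how a world model can plan an arm movement without task-specific training (the planner, dimensions, and dynamics model below are my own toy stand-ins, not the paper's actual method): candidate action sequences are "imagined" forward in embedding space, and the sequence predicted to land closest to the goal image's embedding wins.

```python
import torch
import torch.nn as nn

class ToyWorldModel(nn.Module):
    """Hypothetical latent dynamics: (state embedding, action) -> next state embedding."""
    def __init__(self, state_dim=256, action_dim=7):
        super().__init__()
        self.net = nn.Linear(state_dim + action_dim, state_dim)
    def step(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def plan(model, current_emb, goal_emb, horizon=5, n_candidates=512, action_dim=7):
    """Random-shooting planner: roll candidate action sequences forward in latent space,
    score them by distance to the goal embedding, and return the best first action."""
    candidates = torch.randn(n_candidates, horizon, action_dim)   # sampled action sequences
    states = current_emb.expand(n_candidates, -1)
    with torch.no_grad():
        for t in range(horizon):
            states = model.step(states, candidates[:, t])         # imagined future embeddings
        costs = (states - goal_emb).norm(dim=-1)                  # how far from the goal we'd end up
    return candidates[costs.argmin(), 0]                          # execute only the first action, then replan

model = ToyWorldModel()
current_emb, goal_emb = torch.randn(256), torch.randn(256)        # e.g. embeddings of current/goal images
next_action = plan(model, current_emb, goal_emb)
print(next_action.shape)   # torch.Size([7]) -> a 7-DoF arm command in this toy setup
```

Presumably, this kind of per-action planning loop is part of why each action takes seconds to compute (see the speed remark below).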
- Significant speed improvements
Since V-JEPA 2 doesn't rely on pixel prediction to understand the world, it is able to understand and plan much quicker than previous SOTA systems. It takes 16 seconds to plan a robotic action (while Cosmos from NVIDIA took 4 minutes!)
- It's the SOTA on many benchmarks
V-JEPA 2 demonstrates at least a weak intuitive understanding of physics across many benchmarks (human-level on some, generally better than random chance on the others)
🠖 77.3% on Something-Something v2 (action recognition)
🠖 +12% on Epic-Kitchens-100 (action anticipation) compared to the previous SOTA
🠖 Up to 84% on VideoQA
These results show that we've made a lot of progress in getting AI to understand the physical world purely by watching video. However, let's not get ahead of ourselves: we are still significantly below even baby-level (or animal-level) understanding of physics.
BUT...
- 16 seconds for thinking before taking an action is still very slow.
Imagine a robot having to pause for 16 seconds before ANY action. We are still far from fluid interactions that living beings are capable of.
- Barely above random chance on many tests, especially the new ones introduced by Meta themselves
Meta released a couple of very interesting new benchmarks to stress-test how good models really are at understanding the physical world. On these benchmarks, V-JEPA 2 sometimes performs at or barely above chance level.
- Its zero-shot learning has many caveats
Simply showing a different camera angle can make the model's performance plummet.
- No major architectural innovations
I didn't actually read the full paper (disclaimer) but based on a quick skim, I don’t think V-JEPA 2 introduces any truly novel ideas. It’s mostly just a scaled-up version of V-JEPA 1 with a few optimization tricks.
➤Where we are at for real-world understanding
Not even close to animal-level intelligence yet, even the relatively dumb ones. The good news is that in my opinion, once we start approaching animal-level, the progress could go way faster. I think we are missing many fundamentals currently. Once we implement those, I wouldn't be surprised if the rate of progress skyrockets from animal intelligence to human-level (animals are way smarter than we give them credit for, see this thread: https://www.reddit.com/r/newAIParadigms/comments/1jtz4tg/do_we_also_need_breakthroughs_in_consciousness/ ).
➤Pros
- Self-supervised learning from raw video
- Zero-shot learning on new robot arms and environments
- Much faster than previous SOTA (16s of planning vs 4mins)
- Human-level on some benchmarks
➤Cons
- 16 seconds is still quite slow
- Barely above random on hard benchmarks
- Sensitive to camera angles
- No fundamentally novel ideas (just a scaled-up V-JEPA 1)
➤How to improve future JEPA models?
This is pure speculation since I am nothing more than an enthusiast. I really believe in this "let AI try to understand the world through video watching" method. To match animal and eventually human intelligence, I think we might need to implement some of the mechanisms of the eye or even the brain. For instance, our eyes are biased toward looking at certain things first. We also don't process images exactly as we see them because reality is overwhelming. Instead, our eyes construct their own simplified version of reality to help us focus on what matters to us (which makes us susceptible to optical illusions since we don't really see the world as is). AI might need to implement some of those heuristics to understand the world as efficiently as we do.
Here are some things I thought about:
- Foveated vision
This is a concept proposed in a paper titled "Meta-Representational Predictive Coding (MPC)". The human eye only focuses on a single region of an image at a time (our focal point), and the rest of the image gets progressively blurrier the farther it is from that point. Basically, instead of letting the AI give the same amount of attention to an entire image or video frame at once, we could design the architecture so that it only sees a small sharp region at a time and a blurred version of the rest (a toy sketch of this kind of preprocessing follows this list). More about the MPC paper in this thread: https://www.reddit.com/r/newAIParadigms/comments/1jy1aab/mpc_biomimetic_selfsupervised_learning_finally_a/
- Saccadic glimpsing
Also introduced in the MPC paper. Our eyes almost never stop at a single part of an image. They are constantly moving to try to see interesting features (those quick movements are called "saccades"). Maybe forcing JEPA to constantly shift its focal attention could help?
- Forcing the model to be biased toward movement
This is a bias shared by many animals and by human babies. Note: I have no idea how to implement this
- Forcing the model to be biased toward shapes
I have no idea how either.
- Implementing ideas from other interesting architectures
Ex: predictive coding, the "neuronal synchronization" from Continuous Thought Machines, the adaptive properties of Liquid Neural Networks, etc.
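As promised above, here is a toy sketch of foveated preprocessing (entirely my own illustration, not the MPC paper's implementation): pixels are blurred more the farther they sit from a chosen focal point.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, focal_point, sharp_radius=24.0, blur_scale=0.1):
    """Keep the region around `focal_point` sharp and blur the periphery progressively.
    image: (H, W) grayscale array; focal_point: (row, col)."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    dist = np.hypot(rows - focal_point[0], cols - focal_point[1])

    # Precompute a few blur levels, then pick one per pixel based on distance to the focal point
    sigmas = [0.0, 1.0, 2.0, 4.0, 8.0]
    blurred = [image if s == 0 else gaussian_filter(image, sigma=s) for s in sigmas]
    level = np.clip((dist - sharp_radius) * blur_scale, 0, len(sigmas) - 1).astype(int)

    out = np.empty_like(image, dtype=float)
    for i, b in enumerate(blurred):
        out[level == i] = b[level == i]
    return out

frame = np.random.rand(128, 128)                      # stand-in for a video frame
foveated = foveate(frame, focal_point=(64, 64))       # sharp center, blurry periphery
```

Feeding the model a sequence of such views with a different focal_point each time would also be a crude stand-in for the saccadic glimpsing idea.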
🤓 Fun fact:
I often say "X model is still inferior to even baby humans and animals at physics". But how do we even test babies' and animals' understanding of physics? This is indeed quite hard and not as straightforward as it is for AI. However, developmental psychologists use a lot of clever strategies for this.
For instance, here is how psychologists determine if babies and animals possess an intuitive understanding of the two following concepts:
- object permanence (a hidden object still exists):
they simply show the subject a ball and then put it behind a cardboard screen. If the baby or animal tries to look behind the cardboard to find the ball, then clearly they are aware that hiding an object doesn't mean the object has *really* disappeared
- shape constancy (an object shouldn't spontaneously change shape for no reason):
they show the subject a ball and try to get them interested in it (maybe because it smells good or something). Then, once the subject shows interest, they put the ball behind a cardboard screen and quietly replace it with a cube. Finally, all they need to do is observe the subject's reaction when they look behind the cardboard. If the animal or baby cries (because the ball they wanted is no longer there), sniffs aggressively as if trying to find the original ball, or stares at the scene for several seconds with a shocked, perplexed look, then it's a strong sign they understand that an object isn't supposed to change shape for no reason.