This document was translated at ChinaTalk.media, but the critical technical section (items 18 to 48) is paywalled, so I (Cosmia Nebula) translated that part myself. The other parts were copied from there.
A High Level Closed-Door Session Discussing DeepSeek: Vision Trumps Technology
January 26th. WeChat Link, Archive.
DeepSeek-R1 has sparked a frenzy in the global AI community, but there is a relative dearth of high-quality information about DeepSeek.
On January 26, 2025, 李广密 Guangmi Li, Founder and CEO of 拾象 Shixiang, organized a closed-door discussion on DeepSeek with dozens of top AI researchers, investors and frontline AI practitioners to discuss and learn from DeepSeek's technical details, organizational culture, and short-, medium-, and long-term impacts of its entry into the world. This discussion attempted to lift the veil of this “mysterious eastern force” about which we have so little information.
Below is a summary of the key points from this discussion.
The Mysterious DeepSeek: ‘The most important thing for DeepSeek is pushing intelligence’
- Founder and CEO Liang Wenfeng is the core person of DeepSeek. He is not the same type of person as Sam Altman. He is very knowledgeable about technology.
- DeepSeek has a good reputation because it was the first to publicly release reproductions of MoE, o1-style models, and so on. It won by moving early, but whether it can do the absolute best remains to be seen. The biggest challenge going forward is that resources are limited and can only be invested in the most promising areas. DeepSeek's research ability and team culture are still strong, and if given 100,000 or 200,000 chips, they might be able to do even better.
- From its preview to its official release, DeepSeek’s model’s long-context capabilities have improved rapidly. DeepSeek’s long-context 20K can be achieved with very conventional methods.
- The CEO of Scale.ai said that DeepSeek has 50,000 chips, but that is definitely not reality. According to public information, DeepSeek had 10,000 old A100 chips and possibly 3,000 H800 cards before the ban. DeepSeek pays great attention to compliance and has not purchased any non-compliant GPUs, so it should have few chips. The way the United States uses GPUs is too extravagant.
- DeepSeek focused all its efforts on a single goal and subsequently gave up many things, such as multimodality. DeepSeek is not just serving people, but seeking intelligence itself, which may have been a key factor in its success.
- In some ways, quant trading can be said to be DeepSeek's business model. Huanfang 幻方 (High-Flyer, another company founded by Liang Wenfeng, a quantitative investment firm) was a product of the previous wave of machine learning. DeepSeek's highest priority is to push intelligence; money and commercialization are low priorities. China needs several leading AI labs to explore things that can beat OpenAI. Intelligence will take a long time to develop, and this year the field has begun to diverge again, so new things are bound to emerge.
- From a technical perspective, DeepSeek has been instrumental as a training ground for talent.
- The business model of AI labs in the United States is not good either. AI does not have a good business model today; a viable one will need to be worked out in the future. Liang Wenfeng is ambitious; DeepSeek does not care about product form and is just heading toward AGI.
- Many of the insights from DeepSeek’s paper involve saving hardware costs. On a couple of big dimensions of scaling, DeepSeek’s techniques are able to reduce costs.
- In the long run, this will not dent demand for compute; in the short term, everyone will be driven to think about how to make AI more efficient. Demand for compute remains strong, and no company has enough.
Discussing DeepSeek’s organization:
- When investing, we always choose the most advanced talent. But we see from DeepSeek’s model (the team is mostly smart young people who graduated from domestic universities) that a group that coheres well may also gradually advance their skills together. It has yet to be seen whether poaching one person might break DeepSeek’s advantage, but for now this seems unlikely.
- While there’s a lot of money in the market, DeepSeek’s core advantage is its culture. The research culture of DeepSeek and ByteDance are similar, and both are critical for determining the availability of funding and long-term viability. Only with an important business model can there be a sustainable culture. Both DeepSeek and ByteDance have very good business models.
Why did DeepSeek catch up so fast?
- Reasoning models require higher-quality data and training. For long-context or multimodal AI, it is difficult to catch up with a closed-source model from scratch; the architecture of pure reasoning models has not changed much, so reasoning is an easier direction to chase.
- One reason R1 caught up quickly is that the task was not particularly difficult. Reinforcement learning only makes the model's choices more accurate. R1 did not surpass the efficiency of Consensus 32; it spends roughly 32 times the compute, which amounts to turning what used to be parallel exploration into serial exploration. That does not push the boundary of intelligence, it just makes it easier to reach (see the sketch below).
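To make the Consensus-32 comparison concrete, here is a minimal sketch of that kind of parallel majority voting (often called self-consistency): sample an answer N times and keep the most common one. The sampler below is a placeholder assumption rather than a real API; the contrast drawn above is that an R1-style model spends a comparable budget serially, inside one long chain of thought, instead of across parallel samples.

```python
# Minimal sketch of Consensus-N / self-consistency style voting (illustrative only).
# `sample_answer` stands in for one temperature > 0 LLM call that returns a final answer.
import random
from collections import Counter
from typing import Callable

def consensus_answer(sample_answer: Callable[[], str], n: int = 32) -> str:
    """Draw n independent samples and return the majority-vote answer."""
    votes = Counter(sample_answer() for _ in range(n))
    return votes.most_common(1)[0][0]

# Toy stand-in sampler: a real one would call an LLM and extract its final answer.
toy_sampler = lambda: random.choice(["84", "84", "84", "82"])
print(consensus_answer(toy_sampler))  # most likely "84"
```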
Pioneers vs. Chasers: 'AI Progress Resembles a Step Function – Chasers Require 1/10th the Compute’
- AI is similar to a step function, where the compute requirements for followers have decreased by a factor of 10. Followers have historically had lower compute costs, but explorers still need to train many models. The exploration of new algorithms and architectures will not stop. Behind the step function, there are significant investments by many people, meaning compute investments will continue to advance. Many resources will also be allocated to products. Apart from reasoning, there are other directions that are compute-intensive. While the vast amount of compute resources spent by explorers may not be visible, without such investment, the next "step" might not occur. Additionally, many are dissatisfied with current architectures and RL methods, and progress will continue.
- When exploring directions, performance achieved with 10,000 GPUs may not always be significantly better than that of 1,000 GPUs, but there is a threshold somewhere. It’s unlikely that meaningful results can be achieved with only 100 GPUs because the iteration time for each solution would be too long.
- Advancing physics, by analogy, involves both university researchers and industry labs: the former explore multiple directions without requiring immediate returns, while the latter prioritize efficiency improvements.
- From the perspectives of explorers and chasers, small companies with limited GPUs must prioritize efficiency, whereas large companies focus on achieving models as quickly as possible. Methods that improve efficiency on a 2,000-GPU cluster may not work effectively on a 10,000-GPU cluster, where stability becomes a higher priority.
- The advantage of the CUDA ecosystem lies in its extensive and complete set of operators. Chinese companies like Huawei have targeted commonly used operators to achieve breakthroughs, leveraging their latecomer advantage. If a company has access to 100,000 GPUs, the decision between becoming a leader or a chaser is critical. Being a leader comes with high costs, while being a chaser offers higher efficiency. The next direction for China to follow could be multi-modality, especially since GPT-5 has been delayed for a long time.
Technical Detail 1: SFT
“There's no need to do SFT for reasoning anymore.”
- The biggest shock from DeepSeek is not open source or low cost, but that there is no need to do SFT. (Note: SFT, Supervised Fine-Tuning, further trains a pretrained model on labeled data to improve its performance on a specific task or domain.) This holds only for reasoning tasks, however; tasks other than reasoning may still require SFT. The point worth discussing is whether this amounts to a new paradigm or architecture that makes training models more sample-efficient, or that lets model performance iterate faster.
- To some extent, DeepSeek-R1 shows the benefit of using SFT for distillation. DeepSeek-R1 did not skip SFT entirely: it did SFT only in the third step, and then used RLHF (Reinforcement Learning from Human Feedback) for the final alignment step.
- R1 is essentially trained by SFT; what is special is that the training data was generated by an RLHF-trained model. This shows that no particularly complex method is needed: as long as you have a good enough method, plain SFT distillation is sufficient.
- The essence of GRPO is that the base model must be smart enough. Each prompt gets 16 rollouts, because it takes several attempts before there is a good chance of one correct answer. R1 shows that this recipe works: a good enough base model plus a verifier (see the GRPO sketch after this section). Math and coding fit well because they are easy to verify, but in theory a similar process can be run on other scenarios and tasks, eventually yielding a generalist RL model.
- R1-Zero got CoT to emerge without SFT, and the CoT just got longer and longer during training. This emergence is very meaningful; SFT is more of an aid: without SFT the model still emerges, with SFT it gets there much faster.
- This shows that many small-model vendors can now use SFT to distill from large models and get good small models. Still, SFT was not completely abandoned in R1's own training process.
- An LLM plus an unboundedly long CoT can in theory be viewed as a Turing machine, and in principle arbitrarily complex computational problems can be solved this way. But the finite CoT you actually get is essentially just an intermediate search trace: an optimized way of repeatedly sampling potential outputs. It sometimes lands on the right result, and [every time it does, it] nudges the model toward the right result. In essence, the model has to perform some irreducible amount of computation to accomplish the task, and the CoT is simply the intermediate computation it must pass through. We can call the final answer an "emergence", but we can also say this is simply what computation looks like.
- Although DeepSeek's paper does not mention long context, from the feel of the model the effective context window improved a lot between R1-preview and R1. My guess is that they applied Long2Short CoT techniques; the original phrasing here is ambiguous, but it seems the CoT used in the third-stage SFT was also ultimately trimmed at generation time. The final released version of R1 may have used even cleaner CoT data for its SFT.
- There are several kinds of data used for SFT. One is the cold-start data, which is more like giving the model a good strategy and a better initialization so that it can explore better; after all, the GRPO objective contains a term [the KL penalty] that keeps the policy close to the starting policy. The other is the synthetic data generated after RL was done [on R1-Zero], combined with other data, and then used to SFT DeepSeek-V3-Base. Essentially, each domain has its own data-processing pipeline, and the ability that can be learned from this synthetic data ultimately comes from the base model, so the distillation loses nothing. Putting data from multiple domains together may have produced generalization.
- I'm not sure about the sample-efficiency of training R1. I guess OpenAI has done similar things for sample-efficiency, such as fine-tuning. [For R1, they actually trained twice. The first RL-trained model was an internal one,] not the final R1; it was used only to generate training data. That generated data was then used to SFT DeepSeek-V3-Base again, and that produced R1. The synthetic data contained 600K reasoning examples and 200K non-reasoning examples. In the second stage, the model might sometimes receive a problem that requires reasoning but lies outside the example domains; in those cases it might still solve the problem, thereby yielding reasoning data. The non-reasoning data is part of the V3 SFT data, produced by letting V3 impute a CoT. 800K examples is still pretty small, pretty efficient.
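To make the GRPO discussion above concrete, here is a minimal, hedged sketch of the group-relative advantage computation at its core: sample a group of rollouts for one prompt, score each with a verifier, and normalize rewards within the group. The function names, the 16-rollout group size, and the toy rewards are illustrative assumptions, not DeepSeek's implementation; the full objective also adds the KL penalty toward the starting policy noted above.

```python
# Sketch of GRPO-style group-relative advantages (illustrative, not DeepSeek's code).
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each rollout's reward against its own group (one prompt, G rollouts)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Example: 16 rollouts for one prompt; a binary verifier marked 3 of them correct.
rewards = [1.0 if i in (2, 7, 11) else 0.0 for i in range(16)]
advantages = group_relative_advantages(rewards)
# Correct rollouts get positive advantages and incorrect ones negative, so the policy
# gradient (combined with the KL term toward the initial policy) pushes the model
# toward whatever behavior produced the verified answers.
print([round(a, 2) for a in advantages])
```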
Technical Detail 2: Data
“DeepSeek takes data labeling very seriously.”
- Scale.AI won't necessarily fail. Now for RL on various domains, most commonly math and coding, we still need expert labels. Data labeling may become more complex, but the market will exist.
- For training, multimodal data shows hardly any benefit; or rather, the cost is too high. Today there is no evidence that it is useful; in the future, the opportunity may be bigger.
- DeepSeek attaches great importance to data labeling, and I heard that Liang Wenfeng himself does labeling. In AI, besides algorithms and technique, the accuracy of the data is also critical: Tesla's labeling cost is almost 20 times that of China's self-driving efforts. China's self-driving data went from large and comprehensive, to increasingly refined, until the teams finally discovered they needed people with exceptional driving experience and ability, which is what Tesla did from the very beginning. Tesla's robot movements are labeled by people with very healthy cerebellums, so the labeled motions are very smooth, whereas the smoothness achieved by the labelers hired for China's self-driving efforts is much worse. DeepSeek's investment in data labeling is therefore one of the keys to its models' efficiency.
Technical detail 3: distillation
“The bad thing about distillation is that model diversity goes down.”
- If you use distillation to avoid confronting the biggest technical pain points in model training, you may fall into a trap when the next generation of technology arrives.
- Big-model and small-model capabilities are mismatched. Distilling from a big model into a small model is true distillation, teacher to student. Distilling Chinese data from a model that does not know Chinese at all may hurt performance. In practice, though, distillation into small models does yield a very obvious performance improvement: models distilled from R1 improve a lot when RL is then applied, because they are trained on data that goes beyond the model itself.
- The disadvantage of distillation is that the diversity of the model decreases, which affects the upper limit of the model and prevents it from surpassing the strongest model. However, in the short term, distillation is a way forward.
- There are some hacks hiding in distillation. When RL is applied to an instruction-tuned model, in the early stage the model tends to generate useless ideas first and then suddenly answer correctly at the end. The reason is that many of these RL hacks have subtle causes: the model may have memorized many of the questions during pre-training, so even while it appears to be thinking, it is really just steering toward problems it has memorized. This is the hidden danger of distillation. If we distill without annotation, then when we later do Reinforcement Learning with Verifiable Rewards (RLVR), the model will solve problems in a simpler way rather than actually thinking about them. OpenAI has not solved this either; it may be a flaw of this generation of technology.
- You can take shortcuts: instead of working out the technical solution from your own vision, you can simply reproduce someone else's. But this has hidden downsides in the long run. For example, if this generation of technology sees no qualitative change in long context, the upper limit of problem solving may be capped. R1-Zero may be a correct direction; it may be better to do R1-Zero from the start and to avoid bootstrapping with o1-like data. Following someone else's technical solution may not be good; more exploration is needed.
- Other models can also get pretty good results from distillation. In the future there may be distinct teacher and student roles in the model ecosystem, and being able to be a good student might itself become a viable business model.
- In terms of distillation and technical route, R1 is less of a shock than AlphaGo was; but commercially, its ability to break out [beyond the AI circle] is much greater than AlphaGo's.
- Distillation has two phases. If you only distill o1 or R1 without building your own system and verifiable reward, you become more and more dependent on distillation; and it is impossible to distill your way to a generalist model, because you cannot obtain the reward signal, nor the special CoT. Moreover, first-phase distillation leaves traces: a model distilled from OpenAI's models may retain many of OpenAI's annealing traces. Why could R1-Zero acquire such abilities through pure RL? It is directly related to the self-reflection ability the base model gained after annealing.
- I don't really believe that a model pretrained purely on Internet data, with no annealing, can achieve such behavior, because there is almost no high-quality data on the Internet.
- There are probably only a few top labs exploring exactly how much data, and what data ratios, the annealing phase needs. Distillation or no distillation, both can be thought of as RL; after all, distillation is just behavior cloning, a form of unlimited RL (see the behavior-cloning sketch after this list), but doing only SFT has a very low performance ceiling and compromises diversity.
- Startups in the primary market got very excited about DeepSeek. If DeepSeek can keep iterating on its models, then companies that are not among the big public players will have great flexibility in how they use AI. DeepSeek also distilled a few small versions that can run on a phone; if this direction is proven out, it would raise the performance ceiling of many AI applications.
- For distillation, it is very important to be clear about the goal. OpenAI did not do data distillation; to get a better model than OpenAI, you definitely cannot rely on distillation.
- In the future, models may need to learn to skip steps when answering, as humans do. The question is whether, under a fixed context length, that can raise the model's performance ceiling.
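Since "distillation is just behavior cloning" comes up above, here is a minimal, hedged sketch of what that means in practice: ordinary SFT on (prompt, response) pairs sampled from a teacher, using Hugging Face transformers. The student model name and the one-example toy dataset are placeholder assumptions, not DeepSeek's actual pipeline or data.

```python
# Sketch of distillation as behavior cloning: plain SFT on teacher-generated traces.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

# In practice this would be hundreds of thousands of reasoning traces sampled from the teacher.
teacher_data = [
    {"prompt": "Q: 12 * 7 = ?\nA:", "response": " Step by step: 12 * 7 = 84. The answer is 84."},
]

model_name = "Qwen/Qwen2.5-0.5B"  # any small student model; the choice here is arbitrary
tok = AutoTokenizer.from_pretrained(model_name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
student = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)

def collate(batch):
    texts = [ex["prompt"] + ex["response"] for ex in batch]
    enc = tok(texts, return_tensors="pt", padding=True)
    enc["labels"] = enc["input_ids"].clone()  # next-token prediction on the teacher's tokens
    return enc

student.train()
for batch in DataLoader(teacher_data, batch_size=1, collate_fn=collate):
    loss = student(**batch).loss  # cross-entropy against the teacher's behavior
    loss.backward()
    opt.step()
    opt.zero_grad()
```

This simple recipe copies the teacher's behavior well, but it also inherits the teacher's output distribution, which is one way to read the point above that distillation reduces diversity and caps the ceiling.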
Technical Detail 4: Process Reward
“The upper limit of process supervision is the human. As for the limit of the model itself? That's outcome supervision.”
- Process Reward might still work, but it may be susceptible to reward hacking: the model learns nothing, yet makes the reward very high. If you are solving a math problem and generate 1,000 samples, none may be close to the correct answer; in that case, any RLVR-like method cannot train anything. If there is a reasonable Process Reward at that point, it may steer the model toward the correct direction, so Process Reward can help. It depends on how hard the problem is, how reliable the process reward is, and so on.
- If the process reward estimated by a PRM deviates from the true reward, it is quite easy to hack. Process supervision is theoretically possible; the problem lies in the granularity of the process and how to assign reward based on that granularity. Right now, outcome reward modeling is simply the most basic method: extract the final answer from the model output and match it against the ground-truth label (see the sketch after this section). Nobody has a mature way to build a neural-network reward model that cannot be easily hacked, and letting the model iterate on itself is the easiest way to get reward hacking. Labeling process data is not too hard; it can be enumerated exhaustively. People simply have not done it. It may be a promising direction.
- The upper limit of process supervision is the human. Humans can't imagine many weird corner cases. As for the limit of the model itself? That's outcome supervision.
- AlphaZero was effective because the winner can be judged at the end of the game, and the whole reward can be computed from the win rate. An LLM does not know whether continuing to generate will eventually produce the answer, which is a bit like a genetic algorithm: the ceiling may be higher, but it may also be impossible to optimize toward.
- One advantage in going from AlphaGo to AlphaZero is that the rules of Go are fixed. Models now start with math and coding because those are easy to verify, and whether verification is good enough affects the quality of the final RL. The rules have to be complete enough; otherwise the model will reward-hack, satisfying the rules while producing results we don't actually want.
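As a concrete illustration of the outcome-reward scheme described above (extract the final answer, match it against the ground-truth label), here is a minimal sketch of an RLVR-style verifier. The \boxed{...} convention and the exact-match rule are assumptions for illustration; real verifiers are task-specific (unit tests for code, numeric or symbolic match for math), and as the discussion notes, their quality bounds the quality of the RL.

```python
# Sketch of a binary outcome reward: extract the final answer and compare to ground truth.
import re

def extract_final_answer(generation: str) -> str | None:
    """Take the last \\boxed{...} span as the model's final answer (illustrative convention)."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", generation)
    return matches[-1].strip() if matches else None

def outcome_reward(generation: str, ground_truth: str) -> float:
    """Verifiable reward: 1.0 on exact match with the label, 0.0 otherwise."""
    answer = extract_final_answer(generation)
    return 1.0 if answer is not None and answer == ground_truth.strip() else 0.0

cot = "Let x satisfy 2x + 1 = 7. Then x = 3, so the answer is \\boxed{3}."
print(outcome_reward(cot, "3"))  # 1.0
```

A rule this crude is exactly why the rules must be complete enough: anything the matcher accepts, the policy will learn to produce, wanted or not.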
Why didn’t the other companies take the DeepSeek approach: ‘Models from the big labs need to maintain a low profile’
- The question of why OpenAI and Anthropic did not do work in DeepSeek’s direction is a question of company-specific focus. OpenAI and Anthropic might have felt that investing their compute towards other areas was more valuable.
- One hypothesis for why DeepSeek was successful is that unlike Big Tech firms, DeepSeek did not work on multi-modality and focused exclusively on language. Big Tech firms’ model capabilities aren’t weak, but they have to maintain a low profile and cannot release too often. Currently, multimodality is not very critical, as intelligence primarily comes from language, and multimodality does not contribute significantly to improving intelligence.
The Divergence and Bets of 2025 Technology: ‘Can We Find Architectures Beyond Transformer?’
- In 2025, models will begin to diverge. The most enticing vision is to continuously push the boundaries of intelligence, with many potential breakthrough paths. Methods might change, such as through synthetic data or alternative architectures.
- 2025 will, first and foremost, see interest in new architectures beyond Transformers. Some initial exploration is already underway, aiming to reduce costs while pushing the boundaries of intelligence. Secondly, the potential of reinforcement learning (RL) has yet to be tapped into completely. On the product side, there is significant interest in agents, though they have yet to see widespread application.
- Multimodal products capable of challenging the ChatGPT paradigm might emerge in 2025.
- The success of R1 and V3 in achieving low cost and high performance demonstrates the viability of this direction. This does not conflict with the approach of expanding hardware or increasing parameters. However, in China, due to certain restrictions, the former path is the primary option.
- On DeepSeek:
- First, DeepSeek may have been "forced" into its current path from base models or may simply be following the Scaling Law.
- Second, from the perspective of distillation, DeepSeek likely follows a "large to small" approach. This is beneficial for closed-source models, which are growing larger and larger.
- Third, no anti-scaling metrics have yet emerged in the field. If such metrics arise, they could be a serious blow to the Scaling Law. Meanwhile, everything open-source models do can also be redone in closed-source models, while also lowering their costs, so this is good news for closed-source models too.
- It is reported that Meta is still in the process of reproducing DeepSeek, and so far nothing has emerged that significantly affects their infrastructure or long-term roadmap. In the long run, beyond exploring the boundaries of the technology, cost must also be considered: only with lower costs will there be more room to play.
Have developers moved from closed-source models to DeepSeek? ‘Not yet’
- Will developers migrate from closed-source models to DeepSeek? Currently, there hasn’t been any large-scale migration, as leading models excel in coding instruction adherence, which is a significant advantage. However, it’s uncertain whether this advantage will persist in the future or be overcome.
- From the developer's perspective, models like Claude-3.5-Sonnet have been specifically trained for tool use, making them highly suitable for agent development. In contrast, models like DeepSeek have not yet focused on this area, but the potential for growth with DeepSeek is immense.
- For large model users, DeepSeek V2 already meets most needs. While R1 improved speed, it didn’t provide significant additional value. Interestingly, when engaging in deep reasoning, some previously correct answers now tend to be incorrect.
- When choosing models, users tend to simplify problems using engineering methods. 2025 may become a year of applications, with industries leveraging existing capabilities. However, this could lead to a bottleneck, as most day-to-day tasks might not require highly intelligent models.
- Currently, reinforcement learning (RL) solves problems with standard answers but has not achieved breakthroughs beyond what AlphaZero accomplished. In fact, it is often simpler. Distillation addresses problems with standard answers, and RL methods work effectively when training with such answers. This explains why distillation and RL have made rapid progress in recent years.
- Humanity’s demand for intelligence is vastly underestimated. Many critical problems, such as cancer and SpaceX's heat shield materials, remain unsolved. Existing AI primarily automates tasks, but there are numerous unsolved challenges ahead. Looking forward, the potential for explosive growth is immense, and the advancement of intelligence cannot stop.
OpenAI Stargate’s $500B Narrative and Changes in Computing Power Demand
- The emergence of DeepSeek has led people to question the latest $500B narrative from Nvidia and OpenAI. There’s no verdict yet on compute — and OpenAI’s $500B narrative is their attempt to throw themselves a lifeline.
- Regarding the doubts about OpenAI’s $500B infrastructure investment: because OpenAI is a commercial company, it could be risky if debt is involved.
- $500B is an extreme number — likely to be executed over 4 or 5 years. SoftBank and OpenAI are the leading players (the former providing capital, the latter technology) — but SoftBank’s current funds can’t support $500B; rather SoftBank is using its assets as collateral. OpenAI, meanwhile, isn’t very cash-rich either, and other AI companies are more technical participants than they are funding providers. So it will be a struggle to fully realize the $500B vision.
- OpenAI’s $500B computing power makes sense: during the exploration phase, the cost of trial and error is high, with both human and investment costs being substantial. But although the path isn’t clear and getting from o1 to R1 won’t be easy, at least we can see what the finish line looks like: we can track the intermediate markers, and from day one, aim for others’ proven end states; this gives us a better bearing on our progress. Being at the frontier exploring the next generation is most resource-intensive. The followers don’t bear exploration costs — they’re always just following. If Google/Anthropic succeed in their exploration areas, they might become the frontier company.
- In the future, Anthropic might replace all their inference with TPU or AWS chips.
- Domestic Chinese companies were previously constrained by computing power, but now it’s proven that the potential technical space is vast. For more efficient models, we might not need especially large cards — we can provide relatively customized chips that can be adapted for compatibility with AMD and ASIC. From an investment perspective, Nvidia’s moat is very high, but ASIC will have yet greater opportunities.
- The DeepSeek situation isn’t really about compute — it’s about America realizing China’s capabilities and efficiency. DeepSeek isn’t Nvidia’s vulnerability; Nvidia will grow as long as AI grows. Nvidia’s strength is its ecosystem, which has been built up over a long time. Indeed, when technology develops rapidly, the ecosystem is crucial. The real crisis comes, though, when technology matures like electricity: it becomes commoditized; then, everyone will focus on products, and many ASIC chips will emerge for specific scenario optimization.
Impact on the Secondary Market: ‘Short-term sentiment is under pressure, but the long-term narrative continues’
- DeepSeek has had a significant short-term impact on the US AI sector and stock prices: pretrain demand growth is slowing, while post-training and inference scaling haven’t scaled up fast enough, creating a gap in the narrative for related companies, which will affect short-term trading.
- DeepSeek mainly uses FP8, while US labs use FP16. DeepSeek's improvements are all engineering gains achieved on top of limited compute, and efficient use of compute is the biggest highlight. Last Friday, DeepSeek caused a huge stir in North America: Zuckerberg raised expectations for Meta's capital expenditure, yet Nvidia and TSMC fell and only Broadcom rose.
- DeepSeek creates short-term market-sentiment pressure on stock prices and valuations. That’s affecting secondary market computing-related companies, and even energy companies — but the long-term narrative will continue.
- Secondary-market practitioners will worry about potential air pockets in Nvidia’s transition from H cards to B cards. Combined with pressure from DeepSeek, there will be short-term stock-price pressure — but this may give rise to better long-term opportunities.
- This short-term impact reflects sentiment about DeepSeek’s low-cost training investments (see, for instance, how it directly affected Nvidia’s stock price). AI, however, is a growth market with huge potential. Long-term, AI is just beginning, and if CUDA remains the preferred choice, hardware growth potential remains substantial.
Open-Source vs Closed Source: ‘If capabilities are similar, closed source will struggle.’
- The battle between open-source and closed-source intensifies the spotlight on DeepSeek.
- There is a possibility that OpenAI and others have hidden their good models, and no leading models have been released so far. But after DeepSeek’s release, other AI companies may not be able to hide their good models anymore.
- DeepSeek has done a lot of cost optimization. Amazon and others have not changed course as a result and are still following their established plans, so the two coexist. Open-source and closed-source models are not contradictory: universities and small labs will likely prefer DeepSeek, and cloud vendors face no new competition because they support both open and closed source, so the ecosystem's coexistence is preserved. DeepSeek is not yet as mature as Anthropic on things like tool use, and the latter has spent a lot of time on AI safety; these are things DeepSeek must consider if it hopes to be recognized by European and American markets in the long term.
- Open source controls the margins of the whole market. If open source can do 95% of what closed source can do and closed source is too expensive, then open source can be used completely. If the capabilities of open source and closed source do not differ greatly, then this presents a big challenge for closed source.
The Impact of DeepSeek’s Breakthrough: ‘Vision Trumps Technology’
- DeepSeek’s breakthrough made the outside world realize China’s AI strength. Previously, outsiders thought China’s AI progress lagged America by two years, but DeepSeek shows the gap is actually 3 to 9 months, and in some areas, even shorter.
- When it comes to technologies and sectors that America has historically blocked China from accessing, if China can break through nonetheless, those sectors ultimately become highly competitive. AI might follow this pattern — and DeepSeek’s success may well prove this.
- DeepSeek didn’t suddenly explode. R1’s impressive results reverberated throughout America’s entire AI establishment.
- DeepSeek stands on the shoulders of giants — but exploring the frontier still requires much more time and human capital cost. R1 doesn’t mean that future training costs will decrease.
- AI explorers definitely need more computing power; China, as a follower, can leverage its engineering advantages. How Chinese large-model teams use less computing power to produce results, thereby having some definite resilience — or even doing better — might end up being how the US-China AI landscape plays out in the future.
- China is still replicating technical solutions; reasoning was proposed by OpenAI in o1, so the next gap between various AI labs will be about who can propose the next reasoning. Infinite-length reasoning might be one vision.
- The core difference between different AI labs’ models lies not in technology, but in what each lab’s next vision is.
- After all, vision matters more than technology.