(tl;dr: Meta spends 16× more on R&D, yet Meituan’s new MoE, LongCat, already handles 68 % of internal traffic and cuts support costs by 20 %. Uber? Still no public LLM, still renting OpenAI APIs.)
🥊 1. Meta vs. Meituan: The Scoreboard
| 🧮 Metric | 🏢 Meta Llama 4 | 🥟 Meituan LongCat |
|---|---|---|
| ⚙️ Active parameters | 17 B per token (static MoE) | 18–31 B per token (dynamic MoE) |
| 📚 Corpus | 15 T public crawl | 6 T proprietary (menus, calls, photos) |
| 🔓 Open weights | Custom Llama community licence (restricted) | Apache-2.0 on HF |
| 🛠️ Adoption | < 5 % of Meta services | 68 % of API calls |
| 💵 ROI | Unknown | 3-month payback via CS savings |
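The “dynamic” label in the parameters row is the interesting part: LongCat routes each token through a varying number of experts, some of which do no computation, so the active parameter count fluctuates per token instead of being fixed. A minimal sketch of that idea, with all sizes and names invented for illustration (this is not the LongCat routing code):

```python
import numpy as np

def active_experts(router_logits: np.ndarray, k: int, n_zero: int) -> int:
    """Pick the top-k experts by router score; experts with index < n_zero
    are 'zero-computation' (identity) and add no parameters.
    Returns how many *real* experts fired for this token."""
    topk = np.argsort(router_logits)[-k:]
    return int(np.sum(topk >= n_zero))

rng = np.random.default_rng(0)
N_EXPERTS, K, N_ZERO = 64, 8, 16        # illustrative sizes, not LongCat's
PARAMS_PER_EXPERT = 1.6e9               # hypothetical expert size
BASE_PARAMS = 18e9                      # always-on weights (attention, etc.)

active = [
    BASE_PARAMS
    + PARAMS_PER_EXPERT * active_experts(rng.normal(size=N_EXPERTS), K, N_ZERO)
    for _ in range(1000)
]
print(f"active params range: {min(active)/1e9:.1f}B - {max(active)/1e9:.1f}B")
```

Because easy tokens land on more zero-computation experts, average FLOPs per token drop below the worst-case budget, which is the efficiency the table's 18–31 B band reflects.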
⚡ 2. How Meituan Built a “Better” Model With 5 % of Meta’s Budget
| Advantage | Meituan | Meta |
|---|---|---|
| Data density | 60 M orders/day → 1.2 B daily Chinese-first, multi-modal datapoints | Public crawl + licensed books |
| Free warm GPUs | 35 % of pre-training FLOPs were idle cycles between lunch & dinner traffic | Paid clusters 24/7 |
| RLHF reward | Money saved per support ticket | Human rater preference score |
| Architecture agility | Half-scale dense → 400 B MoE warm-start in 11 days | Static design locked 9 months before ship |
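The “free warm GPUs” row amounts to a scheduling policy: only hand serving GPUs to training jobs when the delivery business doesn’t need them. A toy sketch of such a gate, with the off-peak windows and utilisation threshold invented for illustration (nothing here is Meituan’s actual scheduler):

```python
from datetime import time

# Hypothetical off-peak windows between meal rushes (local time).
OFF_PEAK = [(time(14, 0), time(17, 0)), (time(21, 30), time(23, 59))]

def can_train(now: time, serving_gpu_util: float, util_ceiling: float = 0.35) -> bool:
    """Allow a pre-training job to claim GPUs only when we are inside an
    off-peak window AND live serving load is below the ceiling."""
    in_window = any(start <= now <= end for start, end in OFF_PEAK)
    return in_window and serving_gpu_util < util_ceiling

print(can_train(time(15, 0), 0.20))   # mid-afternoon lull, low load -> True
print(can_train(time(12, 30), 0.20))  # lunch rush -> False
```

The point of the table row is that every `True` window is compute Meta would have to pay for on a dedicated cluster.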
🚗 3. Uber: Same GPUs, Far Narrower Surface Area
| Layer | Meituan | Uber / Uber Eats |
|---|---|---|
| Perception on the edge | Jetson AGX Xavier on drones & sidewalk bots | None disclosed (relies on partners like Avride) |
| Real-time dispatch | GPU cluster solves 10⁵ routes in 200 ms | CPU-heavy OR-Tools + GPU post-processing |
| LLM R&D | 400 B MoE now in production | No public LLM; calls GPT-4 via Gen-AI Gateway |
| GPU utilisation pattern | Cyclical—same GPUs switch from dispatch to training | Fragmented 45-country footprint → under-utilised |
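The dispatch row’s contrast is about problem shape: batching every courier–order distance into one matrix turns dispatch into the kind of dense linear algebra GPUs excel at, versus solving routes one at a time on CPU. A toy NumPy sketch of the batched formulation with a greedy assignment (coordinates, scale, and the heuristic are all invented; this is not either company’s solver):

```python
import numpy as np

rng = np.random.default_rng(42)
couriers = rng.uniform(0, 10, size=(500, 2))   # (x, y) positions on a toy grid
orders   = rng.uniform(0, 10, size=(500, 2))

# One broadcasted operation builds the full 500x500 distance matrix --
# the same computation shape a GPU dispatcher runs at 10^5 scale.
cost = np.linalg.norm(couriers[:, None, :] - orders[None, :, :], axis=-1)

def greedy_assign(cost: np.ndarray) -> np.ndarray:
    """Give each order the nearest still-free courier (greedy heuristic)."""
    c = cost.copy()
    assignment = np.full(c.shape[1], -1)
    for order in np.argsort(c.min(axis=0)):    # cheapest-to-serve orders first
        courier = int(np.argmin(c[:, order]))
        assignment[order] = courier
        c[courier, :] = np.inf                 # courier is now busy
    return assignment

a = greedy_assign(cost)
print("couriers dispatched:", len(set(a.tolist())))
```

A real dispatcher replaces the greedy loop with a proper optimizer, but the batched cost matrix is what makes the 200 ms GPU number plausible.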
🛑 4. Why Uber Hasn’t (and Probably Won’t) Ship Its Own LLM
- 💰 Unit economics: Uber Eats GMV per user ≈ ¼ Meituan’s, so the savings case for a custom LLM goes negative
- ⚖️ Regulation: GDPR/CCPA make open-weight release legally painful
- 🏗️ Vertical depth: Meituan owns warehouses, drones, cold-chain → control of every GPU cycle; Uber is asset-light
🎯 5. The Takeaway
Meituan didn’t out-spend Meta; it out-positioned it:
- 📊 Proprietary data → cheaper, higher-signal training
- ⚡ Cyclical infra → free pre-training FLOPs
- 💡 Business feedback loop → every RL step directly cuts cost
Uber uses GPUs too, but only where the economics are obvious.
Meituan turned the entire delivery pipeline into a self-funding AI lab. That’s why its stack is already a full generation ahead and why Meta’s next paper may still lose to a food-delivery company.
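The “business feedback loop” point can be made concrete as a reward function: instead of a human-preference score, each RL episode is scored by estimated money saved on support. A hypothetical shape for such a reward (the dollar figures, field names, and penalty rule are invented, not from Meituan’s report):

```python
def support_reward(episode: dict, cost_per_ticket: float = 3.0) -> float:
    """Score an RL fine-tuning episode by estimated money saved:
    a ticket the model resolves earns the full ticket cost; an escalation
    earns nothing and pays a small penalty for wasted agent time."""
    if episode["resolved_by_model"]:
        return cost_per_ticket
    # penalty scales with the human minutes the failed attempt consumed
    return -0.5 * episode["agent_minutes"] * (cost_per_ticket / 10.0)

print(support_reward({"resolved_by_model": True,  "agent_minutes": 0}))   # 3.0
print(support_reward({"resolved_by_model": False, "agent_minutes": 4}))   # small negative penalty
```

Tying the reward directly to ticket cost is what makes the claimed 3-month payback measurable at all: every policy improvement shows up in the same currency the CFO tracks.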
> Meituan just open sourced their new MoE LLM LongCat on @huggingface
>
> It's exciting to see new players! The model looks very interesting too with technical report. https://t.co/DduHMQxw5F
>
> — Tiezhen WANG (@Xianbao_QIAN) August 30, 2025