Global Empire Dashboard

🍜 How Meituan, China's Uber, Shipped a Better LLM Than Meta, and Why Its Tech Stack Is Already a Full Generation Ahead of Uber

(tl;dr: Meta spends 16× more on R&D, yet Meituan's new MoE, LongCat, already handles 68 % of internal traffic and cuts support costs by 20 %. Uber? Still no public LLM, still renting OpenAI APIs.)


🥊 1. Meta vs. Meituan: The Scoreboard

| 🧮 Metric | 🏢 Meta Llama 4 | 🥟 Meituan LongCat |
| --- | --- | --- |
| ⚙️ Parameters | 17 B active, static MoE | 18–31 B active, dynamic MoE (routing sketch below) |
| 📚 Corpus | 15 T tokens of public crawl | 6 T tokens of proprietary data (menus, calls, photos) |
| 🔓 Open weights | Research licence | Apache-2.0 on HF |
| 🛠️ Adoption | < 5 % of Meta services | 68 % of internal API calls |
| 💵 ROI | Unknown | 3-month payback via customer-service savings |
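
To make the "dynamic MoE" row concrete: the idea is that the router can send a token to a cheap identity path instead of a full FFN expert, so activated parameters (and FLOPs) vary per token. Below is a minimal PyTorch sketch of that mechanism only; the layer sizes, expert counts, and class name are illustrative placeholders, not LongCat's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMoELayer(nn.Module):
    """Toy top-k MoE layer where some experts are 'zero-computation' identity
    paths, so per-token activated compute varies with routing -- the mechanism
    behind an activated-parameter band on a much larger total parameter count.
    All sizes are miniature placeholders, not any production model's config."""

    def __init__(self, d_model=256, n_ffn_experts=6, n_zero_experts=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_ffn = n_ffn_experts
        self.router = nn.Linear(d_model, n_ffn_experts + n_zero_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_ffn_experts))

    def forward(self, x):                                   # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        top_w, top_idx = probs.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx, w = top_idx[:, slot], top_w[:, slot:slot + 1]
            for e in range(self.n_ffn):                      # real FFN experts
                hit = idx == e
                if hit.any():
                    out[hit] += w[hit] * self.experts[e](x[hit])
            zero = idx >= self.n_ffn                         # zero-computation experts:
            if zero.any():                                   # identity path, no FFN FLOPs
                out[zero] += w[zero] * x[zero]
        return out

if __name__ == "__main__":
    layer = DynamicMoELayer()
    print(layer(torch.randn(32, 256)).shape)                 # torch.Size([32, 256])
```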

⚡ 2. How Meituan Built a “Better” Model With 5 % of Meta’s Budget

| Advantage | Meituan | Meta |
| --- | --- | --- |
| Data density | 60 M orders/day → 1.2 B daily Chinese-first, multimodal datapoints | Public crawl + licensed books |
| Free warm GPUs | 35 % of pre-training FLOPs came from idle cycles between lunch and dinner traffic | Paid clusters running 24/7 |
| RLHF reward | Money saved per support ticket (reward sketch below) | Human-rater preference score |
| Architecture agility | Half-scale dense model → 400 B MoE warm-start in 11 days | Static design locked 9 months before ship |
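
The "RLHF reward" row is the unusual part: instead of a learned human-preference score, the reward signal is framed as money saved on a support ticket. A minimal sketch of what such a reward function could look like follows; the constants (`agent_cost_per_hour`, `escalation_penalty`) are hypothetical placeholders, not Meituan's figures.

```python
def ticket_cost_reward(resolved_by_model: bool,
                       handle_seconds: float,
                       agent_cost_per_hour: float = 9.0,   # placeholder fully-loaded agent cost
                       escalation_penalty: float = 0.5) -> float:
    """Illustrative reward: the value of an episode is the human agent time it avoided.

    A deflected ticket earns the cost of the handling time it replaced;
    an escalated one pays a small penalty for wasting the customer's time.
    """
    saved = (handle_seconds / 3600.0) * agent_cost_per_hour
    return saved if resolved_by_model else -escalation_penalty

print(ticket_cost_reward(True, handle_seconds=300))   # 0.75 -> positive reward for a deflected ticket
print(ticket_cost_reward(False, handle_seconds=300))  # -0.5 -> penalty for an escalation
```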

🚗 3. Uber: Same GPUs, Far Narrower Surface Area

| Layer | Meituan | Uber / Uber Eats |
| --- | --- | --- |
| Perception on the edge | Jetson AGX Xavier on drones & sidewalk bots | None disclosed (relies on partners such as Avride) |
| Real-time dispatch | GPU cluster scores ~10⁵ candidate routes in 200 ms (toy batched scorer below) | CPU-heavy OR-Tools + GPU post-processing |
| LLM R&D | 400 B MoE now in production | No public LLM; calls GPT-4 via its Gen-AI Gateway |
| GPU utilisation pattern | Cyclical: the same GPUs switch between dispatch and training | Fragmented 45-country footprint → under-utilised GPUs |
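
The dispatch row is easier to picture with a toy example: once each candidate courier-order pairing is reduced to a few features, scoring 10⁵ of them is a single batched tensor op, which is how a GPU pass can fit inside a ~200 ms dispatch window. The features, weights, and function name below are invented for illustration, not Meituan's actual scorer.

```python
import torch

def score_candidate_routes(eta_min: torch.Tensor,
                           detour_km: torch.Tensor,
                           courier_load: torch.Tensor,
                           w_eta: float = 1.0,
                           w_detour: float = 0.6,
                           w_load: float = 0.3) -> torch.Tensor:
    """Toy batched route scorer: lower is better, all shapes (n_candidates,).
    The weights are placeholders; the point is that the whole candidate set
    is evaluated in one vectorised pass."""
    return w_eta * eta_min + w_detour * detour_km + w_load * courier_load

if __name__ == "__main__":
    n = 100_000                                           # ~10^5 candidate pairings
    device = "cuda" if torch.cuda.is_available() else "cpu"
    scores = score_candidate_routes(
        torch.rand(n, device=device) * 40,                # ETA in minutes
        torch.rand(n, device=device) * 5,                 # detour in km
        torch.randint(0, 4, (n,), device=device).float()  # orders already on board
    )
    best = torch.topk(-scores, k=10).indices              # 10 cheapest assignments
    print(best.tolist())
```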

🛑 4. Why Uber Hasn’t (and Probably Won’t) Ship Its Own LLM

  • 💰 Unit economics: Uber Eats GMV per user is roughly a quarter of Meituan’s, so the savings from a custom LLM would be negative (toy payback math below)
  • ⚖️ Regulation: GDPR/CCPA make open-weight release legally painful
  • 🏗️ Vertical depth: Meituan owns warehouses, drones, cold-chain → control of every GPU cycle; Uber is asset-light
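
For the unit-economics bullet, a back-of-envelope payback model makes the asymmetry visible. Every number below is a hypothetical round figure chosen only to show how the same formula flips from a few months to several years as ticket volume shrinks; none of them are Uber's or Meituan's actuals.

```python
def llm_payback_months(train_cost_usd: float,
                       tickets_per_month: float,
                       cost_per_ticket_usd: float,
                       deflection_rate: float) -> float:
    """Months until a one-off training spend is covered by avoided ticket cost.
    Purely illustrative; every input is a made-up placeholder."""
    monthly_savings = tickets_per_month * cost_per_ticket_usd * deflection_rate
    return float("inf") if monthly_savings <= 0 else train_cost_usd / monthly_savings

# Hypothetical high ticket volume: the spend pays back in a few months.
print(llm_payback_months(20e6, tickets_per_month=50e6,
                         cost_per_ticket_usd=0.8, deflection_rate=0.2))   # ≈ 2.5
# Hypothetical thinner ticket volume: payback stretches to years, i.e. the
# project goes negative once serving and maintenance costs are added.
print(llm_payback_months(20e6, tickets_per_month=3e6,
                         cost_per_ticket_usd=0.8, deflection_rate=0.2))   # ≈ 41.7
```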

🎯 5. The Takeaway

Meituan didn’t out-spend Meta; it out-positioned it:

  • 📊 Proprietary data → cheaper, higher-signal training
  • ⚡ Cyclical infra → free pre-training FLOPs
  • 💡 Business feedback loop → every RL step directly cuts cost

Uber uses GPUs too, but only where the economics are obvious.
Meituan turned the entire delivery pipeline into a self-funding AI lab. That’s why its stack is already a full generation ahead and why Meta’s next paper may still lose to a food-delivery company.