AI-powered robotic chef cooking pizza and pasta from scratch — kneading, plating, the lot.
A robot doesn't know how to knead. It knows how to follow a trajectory, apply a force, and measure resistance. Kneading requires all three at once, with closed-loop tactile feedback. The problem has been open for twenty years.
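To make the feedback requirement concrete, here is a minimal sketch of one closed-loop press: the commanded depth is corrected each tick from the force error. This is not NonnoRobot's controller; the linear-spring dough model, `stiffness`, and `gain` are illustrative assumptions.

```python
def simulate_knead(target_force: float = 20.0, stiffness: float = 8.0,
                   steps: int = 50, gain: float = 0.02) -> float:
    """One kneading press under tactile feedback, against a fake
    linear-spring dough model (resistance = stiffness * depth)."""
    depth = 0.0
    for _ in range(steps):
        resistance = stiffness * depth       # stand-in tactile reading
        error = target_force - resistance    # force error drives the depth
        depth += gain * error                # push deeper if dough yields
    return stiffness * depth                 # force actually measured at the end
```

The point of the sketch is the coupling: trajectory (depth), force (target), and resistance (measurement) are updated together in one loop, which is exactly what open-loop trajectory replay cannot do.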
NonnoRobot is our attempt: use a recipe as a symbolic anchor, a camera as a visual oracle, and a transformer as an action planner. Not a demo in a cold lab: a robot that can produce an edible margherita in under twelve minutes.
We vectorise 47 recipes (steps, ingredients, gestures) and index them with a multimodal embedding that combines text and image features. At cook time, the robot retrieves the recipe closest to the user's brief, generates a ROS 2 action plan, and executes it step by step, re-checking the world state after each step.
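The retrieval step reduces to a nearest-neighbour lookup over the recipe index. The function below is a hypothetical stand-in (cosine similarity over pre-computed multimodal vectors), not the production code:

```python
import numpy as np

def retrieve(query_vec: np.ndarray, index: np.ndarray) -> int:
    """Return the row of `index` (one embedding per recipe, 47 in
    practice) closest to the query embedding, by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    return int(np.argmax(m @ q))
```

At 47 recipes a brute-force matrix product is faster and simpler than any approximate-nearest-neighbour index, which only pays off at several orders of magnitude more vectors.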
The stack runs across eleven Docker services: FastAPI for orchestration, ROS 2 Jazzy for motor control, PyTorch for vision, Postgres for the journal, Redis for transient state. Everything is controllable from a Next.js dashboard, useful in demos and essential when debugging.
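Tying it together, the execute-then-verify loop the orchestrator runs over the plan might look like the sketch below; `execute` and `check_state` are hypothetical stand-ins for the ROS 2 action call and the camera-based state check:

```python
def run_plan(steps, execute, check_state, max_retries: int = 2):
    """Execute plan steps in order; after each, verify the world state
    and retry the step (up to max_retries) if the check fails."""
    log = []
    for step in steps:
        for attempt in range(max_retries + 1):
            execute(step)                    # e.g. send ROS 2 action goal
            if check_state(step):            # e.g. vision model confirms state
                log.append((step, attempt))
                break
        else:
            raise RuntimeError(f"step failed after retries: {step}")
    return log
```

Re-checking after every step is what keeps a twelve-minute pipeline recoverable: a failed pour or a slipped grip is caught immediately instead of poisoning every step after it.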
11 Docker services · 47 indexed recipes · 120 ms