2026-04-22 · process · 8 min
Teaching a robot to knead
Three months on NonnoRobot. Here's what dough taught us about closed-loop action planning.
We started thinking kneading was a force problem. It's actually a feedback problem. The dough doesn't tell you what it expects — you find out by touching it.
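To make "feedback problem" concrete, here is a toy closed-loop sketch. The dough model, gains, and names (`DoughSim`, `knead`) are invented for illustration; this is not NonnoRobot's actual controller.

```python
class DoughSim:
    """Toy dough: measured resistance drifts toward the applied pressure."""
    def __init__(self, resistance: float = 0.2):
        self.resistance = resistance

    def press(self, pressure: float) -> float:
        # press harder and the dough firms up; ease off and it relaxes
        self.resistance += 0.3 * (pressure - self.resistance)
        return self.resistance


def knead(dough: DoughSim, target: float, steps: int = 50,
          gain: float = 0.4) -> float:
    """Closed loop: press, feel the error, correct the next press."""
    pressure = 1.0
    for _ in range(steps):
        felt = dough.press(pressure)        # the dough answers the touch
        pressure += gain * (target - felt)  # proportional correction
        pressure = max(pressure, 0.0)       # the robot can push, not pull
    return dough.resistance
```

Run it and the measured resistance settles near the target regardless of where the dough started. That settling behaviour is exactly what a fixed force schedule cannot give you.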
We vectorised 47 recipes to give the robot a gesture memory. At first it kneaded every dough the same way. Then it started to differentiate: a pizza wants firmness, long pasta wants softness. Multimodal embeddings did the work we expected from a transformer.
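A gesture memory of this kind can be sketched as nearest-neighbour lookup over recipe embeddings. The vectors and recipe names below are toy values standing in for the real 47-recipe index:

```python
import numpy as np

# Toy gesture memory: each recipe maps to (embedding, kneading gesture).
# These vectors are made up for illustration, not real recipe embeddings.
GESTURES = {
    "pizza":       (np.array([0.9, 0.1, 0.3]), "firm"),
    "tagliatelle": (np.array([0.2, 0.8, 0.5]), "soft"),
    "focaccia":    (np.array([0.7, 0.4, 0.9]), "firm-wet"),
}


def nearest_gesture(query: np.ndarray) -> str:
    """Cosine-similarity lookup: return the gesture of the closest recipe."""
    best, best_sim = None, -2.0
    q = query / np.linalg.norm(query)
    for _, (emb, gesture) in GESTURES.items():
        sim = float(q @ (emb / np.linalg.norm(emb)))
        if sim > best_sim:
            best, best_sim = gesture, sim
    return best
```

A sensed dough that embeds close to "pizza" retrieves the firm gesture; one close to "tagliatelle" retrieves the soft one. The differentiation the robot learned is, at bottom, this lookup getting sharper.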
Takeaway: vision-to-action latency tops out at 120 ms when embeddings are cached on the Redis side. Below that threshold, the robot anticipates instead of reacting, which is exactly what we wanted.
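The caching that buys this latency is the classic cache-aside pattern. In the sketch below a plain dict stands in for Redis, and `embed` is a fake encoder, so every name and number here is illustrative:

```python
import hashlib

# Dict standing in for Redis; with redis-py you would use r.get/r.set
# on the same keys instead.
cache: dict[str, list[float]] = {}


def embed(frame_bytes: bytes) -> list[float]:
    # Placeholder "encoder": a deterministic pseudo-embedding from a hash,
    # standing in for the real (slow) multimodal model.
    digest = hashlib.sha256(frame_bytes).digest()
    return [b / 255.0 for b in digest[:4]]


def cached_embedding(frame_bytes: bytes) -> list[float]:
    """Cache-aside: return the stored vector if present, else compute it."""
    key = hashlib.sha256(frame_bytes).hexdigest()
    if key in cache:              # hot path: skip the encoder entirely
        return cache[key]
    vec = embed(frame_bytes)      # cold path: compute once, store
    cache[key] = vec
    return vec
```

On the hot path the encoder never runs, which is where the sub-120 ms budget comes from: the loop spends its time acting, not re-embedding frames it has already seen.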