Full disclosure: I left the company that became iRobot well before the Roomba, so I have zero insider knowledge.
But if you're familiar with Rod Brooks' public work on the "subsumption architecture", the Roomba algorithms are pretty obvious.
Early-gen Roombas have three obvious behaviors:
1. Bounce randomly off walls.
2. Follow a wall briefly using the "edge" brush.
3. When heavy dirt is detected, go back and forth a bit to deep clean.
Clean floors are an emergent result of these simple behaviors. But the approach fails above a certain floor size in open-plan houses.
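A subsumption-style controller like this is easy to sketch: each behavior is a layer, and higher-priority layers suppress lower ones when they fire. This is a toy illustration of the three behaviors above, not iRobot's actual code; the sensor names and actions are invented for the example.

```python
import random

def spot_clean(sensors):
    """Highest priority: heavy dirt triggers a back-and-forth scrub."""
    if sensors["dirt"]:
        return "scrub back and forth"
    return None  # defer to lower layers

def wall_follow(sensors):
    """Middle priority: hug a wall briefly with the edge brush."""
    if sensors["wall"]:
        return "follow wall with edge brush"
    return None

def random_bounce(sensors):
    """Lowest priority: default wander; turn a random amount on a bump."""
    if sensors["bump"]:
        return f"turn {random.randint(90, 270)} degrees"
    return "drive straight"

# Ordered by priority: higher layers subsume (override) lower ones.
LAYERS = [spot_clean, wall_follow, random_bounce]

def step(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
```

Running `step()` once per control tick is the whole loop; there's no map and no plan, just whichever layer claims the moment.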
Later versions add an ultra-low-res visual sensor and appear to use some kind of "simultaneous localization and mapping" (SLAM) algorithm for very approximate mapping. This makes it work much better in large areas. But you used to be able to see the "maps" from each run and they were horribly bad—just good enough to build an incredibly rough floor plan. But if the Roomba gets sufficiently confused, it still has access to the old "emergent vacuuming" algorithm in some form or another.
The newest ones may be even smarter and retain maps from one run to the next, but I've never watched them in action.
I really like the old "subsumption architecture" designs. You can get surprisingly rich emergent behavior out of four 1-bit sensors by linking different bit patterns to carefully chosen simple actions. There are a couple of very successful invertebrates which don't do much more.
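To make the "four 1-bit sensors" point concrete: four bits give sixteen possible patterns, and each pattern can be wired directly to a simple action. The sensor names and actions here are made up for illustration; a real controller would also prioritize overlapping conditions (cliff before dirt, say) rather than use a flat table.

```python
# Hypothetical 1-bit sensors: bump-left, bump-right, cliff, dirt.
SENSORS = ("bump_left", "bump_right", "cliff", "dirt")

def pattern(bits):
    """Pack four booleans into a 4-bit integer, bump_left as the high bit."""
    value = 0
    for b in bits:
        value = (value << 1) | int(b)
    return value

# A sparse action table; unlisted patterns fall through to the default.
ACTIONS = {
    0b1000: "turn right",        # bump on the left only
    0b0100: "turn left",         # bump on the right only
    0b1100: "back up and turn",  # bumped head-on
    0b0010: "reverse",           # cliff detected
    0b0001: "scrub",             # dirt detected
}

def act(bump_left, bump_right, cliff, dirt):
    return ACTIONS.get(pattern((bump_left, bump_right, cliff, dirt)),
                       "drive straight")
```

Sixteen table entries are the entire "brain", which is roughly the point: the richness comes from how the actions interact with the environment, not from the controller.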