Researcher Benjamin Bogenberger combines three-dimensional vision with language models. (Credit/Copyright: A. Schmitz / TUM)
This Robot Searches For Missing Items Using Human-Like Logic
In A Nutshell
- Researchers at the Technical University of Munich built a robot that uses AI to reason about where a missing object is most likely to be found, rather than searching at random.
- The system tracks how likely each mapped object is to have moved, so the robot knows which parts of its own knowledge to trust and which areas to revisit.
- In testing, it detected 95% of real-world map changes and, in key scenarios, roughly doubled the success rate of the current leading competing system.
- The robot is not yet ready for home use (it needs flat floors, an external computer, and a controlled environment), but the core problem of change-aware object search has been meaningfully advanced.
Ask someone to find a missing plate, and they head to the kitchen. They check near the sink, maybe the dining table, because that’s where plates end up. It sounds obvious, but that kind of everyday reasoning has been one of the hardest things to build into a robot. A new system developed by researchers at the Technical University of Munich can now do exactly that: search for objects using contextual clues similar to how a person might reason, rather than wandering at random or relying on a last-known location that may no longer be accurate.
Published in IEEE Robotics and Automation Letters, the research describes a robotic navigation system that takes plain-language requests like “Find my glasses” or “Where is my book?” and uses an AI reasoning layer to determine where those items are most likely to be found. In testing, it outperformed competing approaches by a wide margin, in some scenarios roughly doubling the success rate of the current leading method.
Homes don’t hold still. Furniture migrates, objects pile up in unexpected places, and something that was on the nightstand yesterday has moved to the kitchen counter by morning. Most robotic systems map a space once and assume it stays that way, a brittle approach that fails constantly in practice. This team built their system around the opposite assumption: that change is the default, and a useful robot has to work with it.
How This AI Robot Stops Searching in the Wrong Places
At the center of the approach is a large language model, the same category of AI that underlies modern chatbots. When the robot receives a search task, it queries that AI layer to score every object in its map by how likely the target is to be found nearby. “Find my glasses” produces high relevance scores for books, nightstands, and reading chairs. “Where is my plate?” points the robot toward cups, coffee tables, and cabinets. Rather than sweeping a room from one end to the other, it heads straight for the most probable spots first.
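The ranking step above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: `llm_score` is a hypothetical stand-in for the actual language-model query, replaced here by a toy lookup table so the example runs on its own.

```python
# Sketch of open-vocabulary relevance ranking (illustrative only).
# `llm_score` stands in for the real LLM call: given a search query and a
# mapped object, it returns a 0-1 estimate of how likely the target is
# to be found near that object.

def rank_search_targets(query, mapped_objects, llm_score):
    """Order mapped objects by relevance to the query, most promising first."""
    scored = [(obj, llm_score(query, obj)) for obj in mapped_objects]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Toy scorer standing in for the LLM: a fixed lookup table of plausible scores.
def toy_scorer(query, obj):
    table = {
        ("glasses", "nightstand"): 0.9,
        ("glasses", "reading chair"): 0.8,
        ("glasses", "oven"): 0.1,
    }
    return table.get((query, obj), 0.0)

ranking = rank_search_targets("glasses", ["oven", "nightstand", "reading chair"], toy_scorer)
# The robot would head to the highest-scoring spot (the nightstand) first.
```

With a real language model in place of `toy_scorer`, any object a person can name in plain language gets a score, which is what makes the search open-vocabulary.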
Researchers call this open-vocabulary navigation. There is no predefined list of searchable items. Any object a person can describe in plain language can, in principle, become a valid target, and the AI layer figures out where to look. Earlier systems could only locate objects baked into their training data; this one has no such ceiling.
What Tells the AI Robot Which Parts of Its Map to Trust
Knowing where objects tend to belong is only half the problem. A map built three days ago might show a chair in a corner that has since been moved, and searching based on stale information wastes time.
To handle this, the system maintains a confidence score for every object it has mapped, tracking how likely each item is to still be in its last known location. Coffee tables almost never move, so their scores stay high. Chairs shift constantly, so theirs decay faster. When enough time passes without the robot revisiting an area, scores for moveable objects there drop until the system flags that zone as potentially outdated and dispatches the robot to check. Together with the AI reasoning layer, this gives the robot something close to the judgment a person brings to a familiar space: a sense of where things tend to be, and a reasonable suspicion about what might have changed.
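The decay idea can be made concrete with a small sketch. The decay model and the mobility rates below are assumptions for illustration, not the paper's exact formulation: confidence falls exponentially with time since the object was last seen, at a rate tied to how movable it is, and anything below a threshold is flagged for a revisit.

```python
import math

# Illustrative confidence decay (not the paper's exact model). Each mapped
# object keeps a confidence that its last known location is still valid.
# Mobility rates are assumed values, per hour: near-immovable objects decay
# slowly, frequently moved ones decay fast.
MOBILITY = {"coffee table": 0.01, "chair": 0.5, "book": 0.8}

def confidence(obj, hours_since_seen, mobility=MOBILITY):
    """Exponential decay: a coffee table stays near 1.0, a chair drops fast."""
    return math.exp(-mobility[obj] * hours_since_seen)

def objects_to_revisit(last_seen_hours, threshold=0.5):
    """Flag objects whose location confidence has fallen below the threshold."""
    return [obj for obj, hours in last_seen_hours.items()
            if confidence(obj, hours) < threshold]

# After a day without a revisit, movable objects are flagged as stale.
stale = objects_to_revisit({"coffee table": 24, "chair": 24, "book": 24})
```

The robot would then route itself through the flagged areas to refresh its map, which is how it keeps "a reasonable suspicion about what might have changed."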
Real-World Results: How the AI Robot Held Up
Tests were conducted on Hello Robot’s Stretch 3, a wheeled platform with a camera that captures both color and depth data, in real office environments and a kitchen at the Technical University of Munich. Across those trials, the system detected 95% of all map changes on average and cut navigation time by more than 29% compared to both random exploration and a patrol strategy that sweeps the space on a fixed grid.
In a structured simulation matchup, the team ran 60 navigation tasks against DynaMem, currently one of the leading competing systems. On objects that had been moved since the robot’s last survey, the new system succeeded 50% of the time to DynaMem’s 25%. On objects the robot had never seen before, it succeeded 45% of the time against DynaMem’s 20%. In a real-world retrieval test, both systems achieved a perfect success rate, but the new approach located targets roughly 14% faster. A random search strategy was occasionally quicker when it got lucky, but it succeeded only one time in four. Researchers also tested whether the robot could tell two objects of the same type apart, placing two different sports balls in a room, removing one, and later returning it. By combining visual appearance data with three-dimensional shape information, the system correctly reidentified each ball as an individual object rather than treating them as interchangeable.
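The re-identification test can be sketched as a similarity match. Everything below is an assumption for illustration: the feature vectors, the equal weighting of appearance and shape, and the match threshold are made up, while the paper's actual features and matching rule may differ. The idea is simply that an observation is matched to a known instance only when a weighted combination of appearance and shape similarity clears a threshold.

```python
# Sketch of instance re-identification (illustrative; feature vectors,
# weights, and threshold are assumptions, not the paper's values).
# Each known object is stored as an (appearance, shape) feature pair.

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def match_instance(observation, known, w_appearance=0.5, w_shape=0.5, threshold=0.7):
    """Return the id of the best-matching known instance, or None if no
    candidate's combined similarity clears the threshold."""
    best_id, best_score = None, threshold
    obs_app, obs_shape = observation
    for obj_id, (app, shape) in known.items():
        score = w_appearance * cosine(obs_app, app) + w_shape * cosine(obs_shape, shape)
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id

# Two sports balls: nearly identical shape features, distinct appearance.
known = {
    "ball_red":  ([1.0, 0.0], [0.9, 0.4]),
    "ball_blue": ([0.0, 1.0], [0.9, 0.4]),
}
seen = ([0.95, 0.05], [0.9, 0.4])  # looks red, same ball-like shape
match = match_instance(seen, known)
```

Because shape alone cannot separate the two balls, the appearance term is what lets the system treat them as distinct individuals rather than interchangeable instances of "ball."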
Significant Hurdles Remain Before a Home Debut
Several real constraints come with the current setup. Navigation is limited to flat, single-floor environments, and the heavy computation runs through an external GPU workstation the robot connects to wirelessly rather than hardware it carries itself. Spaces where people and animals are constantly moving pose a separate challenge the current design does not address, and the authors flag this as a priority for future work.
Household robots have been a near-future promise for decades, with many falling short because they couldn’t adapt to how homes actually function. A robot that reasons about where something is likely to be and actively questions what it thinks it knows is a meaningful step toward one that could genuinely be useful in daily life.
Paper Notes
Limitations
Navigation is restricted to wheeled robots on flat, single-floor surfaces; multi-story environments and uneven terrain are outside the system’s current scope. Accurate positioning relies on a visual-inertial odometry system running alongside the main software, and real-time performance requires a wireless connection to an external workstation with high-end GPU hardware. Test environments, though real-world settings, were relatively controlled office and kitchen spaces. Fully dynamic conditions involving continuous human or animal movement were not tested. No standardized benchmark exists for closed-loop semi-static object navigation, so performance comparisons required custom-designed tasks.
Funding and Disclosures
Funding was provided in part by the German Federal Ministry of Research, Technology and Space (BMFTR, formerly BMBF) through the Robotics Institute Germany (RIG) under Grant 16ME0997K, and in part by the European Union’s Horizon Europe programme through the Marie Skłodowska-Curie Actions under Grant 101155035. No conflicts of interest were disclosed.
Publication Details
Authors Benjamin Bogenberger, Oliver Harrison, and Orrin Dahanaggamaarachchi are affiliated with the Learning Systems and Robotics Lab and the Munich Institute of Robotics and Machine Intelligence at the Technical University of Munich. Lukas Brunke, Jingxing Qian, and Angela P. Schoellig hold joint appointments at the Technical University of Munich, the University of Toronto Institute for Aerospace Studies, the University of Toronto Robotics Institute, and the Vector Institute for Artificial Intelligence in Toronto. Siqi Zhou is affiliated with the Technical University of Munich and Simon Fraser University in Burnaby, British Columbia. The paper, titled “Where Did I Leave My Glasses? Open-Vocabulary Semantic Exploration in Real-World Semi-Static Environments,” was published in IEEE Robotics and Automation Letters, Vol. 11, No. 3, March 2026. DOI: 10.1109/LRA.2026.3656790.