RescueBot®
I built an autonomous search-and-rescue simulation in Unity. The robot uses simulated LiDAR sensing and differential-drive locomotion to navigate obstacles, detect victims, and map hazardous environments. The project demonstrates applied robotics, sensor visualization, and human-centered system design.
1 month
2025
Robotics, AI
Passion Project
Challenge
When earthquakes strike or buildings collapse, first responders face an impossible choice: risk human lives or lose precious minutes waiting for safer access. Robots can help — but today’s interfaces are clunky, stressful, and slow.
I asked myself: what if we could design a more intuitive way for humans and robots to collaborate in high-stakes rescue missions?
Results
Across the three navigation methods, the robot showed clear performance differences. Manual joystick control averaged 19 seconds per rescue at 56% efficiency. Semi-autonomous mode cut times to 14 seconds and lifted efficiency to 74%. Full autonomy performed best, completing rescues in 10 seconds at 92% efficiency. Overall, the system showed consistent gains in speed, accuracy, and reliability as autonomy increased, validating the effectiveness of autonomous strategies in rescue scenarios.
Navigation Method Efficiency*
*Factoring in time to rescue victims, collision count, and excess pathway travelled

- Joystick Navigation (Manual): 56% efficiency, ~19 seconds per rescue
- Point-and-Search (Semi-Auto): 74% efficiency, ~14 seconds per rescue
- AI-led Navigation (Autonomous): 92% efficiency, ~10 seconds per rescue
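The efficiency metric combines three penalties: rescue time, collision count, and excess path travelled. One way to fold those into a single percentage is sketched below; the normalization caps and equal weighting are illustrative assumptions, not the exact formula used in the study.

```python
def efficiency_score(rescue_time_s, collisions, excess_path_m,
                     max_time_s=30.0, max_collisions=10, max_excess_m=20.0):
    """Combine three penalties into a single 0-100% efficiency score.

    Each component is normalized to [0, 1] against an assumed worst-case
    cap, then equally weighted. Caps and weights here are illustrative.
    """
    time_pen = min(rescue_time_s / max_time_s, 1.0)
    coll_pen = min(collisions / max_collisions, 1.0)
    path_pen = min(excess_path_m / max_excess_m, 1.0)
    penalty = (time_pen + coll_pen + path_pen) / 3.0
    return round((1.0 - penalty) * 100)
```

A run with zero time, zero collisions, and zero excess path scores 100; hitting every cap scores 0, so faster and cleaner runs always rank higher.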
Process
Research & Setup: I studied existing search-and-rescue robotics approaches and defined core objectives: mobility in cluttered environments, victim detection, and efficient navigation.
System Architecture: Using Unity, I built a simulated disaster environment with obstacles and multiple victims. The robot was equipped with motor controls, LiDAR-based sensing, and collision handling.
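In Unity, LiDAR-style sensing is typically built on physics raycasts. As a language-neutral sketch of the same idea, a planar scan against circular obstacles can be ray-marched as below; the ray count, maximum range, and march step are assumed values, not the project's settings.

```python
import math

def lidar_scan(pose, obstacles, n_rays=36, max_range=10.0, step=0.05):
    """Simulate a planar LiDAR sweep by ray-marching from the robot pose.

    pose: (x, y, heading_rad); obstacles: list of (cx, cy, radius) circles.
    Returns one range reading per ray (max_range if nothing is hit).
    """
    x, y, heading = pose
    ranges = []
    for i in range(n_rays):
        angle = heading + 2 * math.pi * i / n_rays
        dist = 0.0
        while dist < max_range:
            px = x + dist * math.cos(angle)
            py = y + dist * math.sin(angle)
            if any(math.hypot(px - cx, py - cy) <= r
                   for cx, cy, r in obstacles):
                break  # ray hit an obstacle at this distance
            dist += step
        ranges.append(min(dist, max_range))
    return ranges
```

The resulting range array is what the navigation layer consumes: short readings mean a nearby obstacle in that direction, full-range readings mean clear space.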
Prototyping & Testing: I developed three navigation methods: manual joystick control, semi-autonomous Point-and-Search waypoint guidance, and fully autonomous AI-led navigation. Each prototype was tested for speed, accuracy, and reliability. I recruited three participants, each of whom tried all three navigation methods.
Iteration: Based on testing, I refined sensor scripts and robot constraints to improve detection consistency and navigation stability.
Integration: Finally, I combined detection and navigation into a cohesive workflow, enabling the robot to locate and approach victims effectively in complex environments.
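The integrated locate-and-approach loop can be sketched as two steps: select the nearest detected victim, then steer a differential-drive robot toward it. The gain, speed, and timestep values below are illustrative placeholders, not the project's tuned parameters.

```python
import math

def nearest_victim(robot_xy, victims):
    """Select the closest detected victim by Euclidean distance."""
    return min(victims, key=lambda v: math.hypot(v[0] - robot_xy[0],
                                                 v[1] - robot_xy[1]))

def drive_toward(pose, target, v_max=1.0, k_turn=2.0, dt=0.1):
    """One differential-drive control step toward a target point.

    Turns at a rate proportional to the heading error and scales the
    forward speed down as the error grows, so the robot pivots in place
    when facing the wrong way and drives straight when aligned.
    """
    x, y, heading = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    omega = k_turn * err                      # turn rate (rad/s)
    v = v_max * max(0.0, math.cos(err))       # slow down when misaligned
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return (x, y, heading)
```

Calling drive_toward in a loop moves the robot steadily to the selected victim; in the Unity build the same roles are played by the detection scripts and the robot's motor controllers.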
“The AI Rescue mode gave me the most confidence. It handled obstacles smoothly and found victims faster than I could manually.”
-Usability Test Participant #2
Real-Life Application
The development of RescueBot shows how robotics and sensing technologies can be applied to real-world disaster scenarios. By integrating LiDAR mapping, obstacle avoidance, and victim detection, I built a system capable of navigating complex environments efficiently. The project demonstrates the technical feasibility of autonomous search and rescue while highlighting the importance of iterative testing and user-centered design. The results show how blending engineering, simulation, and design thinking can create impactful solutions with applications in emergency response and beyond.