The Unitree Go2 is DimOS's primary reference platform. Full autonomous navigation, mapping, and agentic control — no ROS required.
- Unitree Go2 Pro or Air (stock firmware 1.1.7+, no jailbreak needed)
- Ubuntu 22.04/24.04 with CUDA GPU (recommended), or macOS (experimental)
- Python 3.12
First, install system dependencies for your platform:
Then install DimOS:
```shell
uv venv --python "3.12"
source .venv/bin/activate
uv pip install 'dimos[base,unitree]'
```

```shell
# Replay a recorded Go2 navigation session
# First run downloads ~2.4 GB of LiDAR/video data from LFS
dimos --replay run unitree-go2
```

Opens the command center at localhost:7779 with Rerun 3D visualization — watch the Go2 map and navigate an office in real time.
- Robot is reachable with low latency (<10 ms, 0% packet loss):

```shell
ping $ROBOT_IP
```

- Built-in obstacle avoidance is on. (DimOS handles path planning, but the onboard obstacle avoidance provides an extra safety layer in tight spots.)
- If video is out of sync with LiDAR/robot position, sync your clock with an NTP server:

```shell
sudo ntpdate pool.ntp.org
```

or

```shell
sudo sntp -sS pool.ntp.org
```

Once the checklist passes, run:

```shell
export ROBOT_IP=<YOUR_GO2_IP>
dimos run unitree-go2
```

That's it. DimOS connects via WebRTC (no jailbreak required), starts the full navigation stack, and opens the command center in your browser.
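If you want to sanity-check the link quality in a script rather than by eyeballing `ping` output, a small standalone helper like the one below works. This is not part of DimOS — the function names are illustrative, and the parsing assumes the GNU/Linux iputils `ping` summary format:

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Extract packet loss and average RTT from an iputils `ping` summary."""
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    # The rtt line looks like: rtt min/avg/max/mdev = 1.234/1.812/2.401/0.312 ms
    rtt = re.search(r"= [\d.]+/([\d.]+)/", output)
    return {
        "packet_loss_pct": float(loss.group(1)) if loss else None,
        "avg_rtt_ms": float(rtt.group(1)) if rtt else None,
    }

def link_ok(stats: dict, max_rtt_ms: float = 10.0) -> bool:
    """Apply the checklist thresholds: 0% loss and <10 ms average RTT."""
    return (
        stats["packet_loss_pct"] == 0.0
        and stats["avg_rtt_ms"] is not None
        and stats["avg_rtt_ms"] < max_rtt_ms
    )

# Example summary as printed by `ping -c 4 $ROBOT_IP` on Linux:
summary = (
    "4 packets transmitted, 4 received, 0% packet loss, time 3004ms\n"
    "rtt min/avg/max/mdev = 1.234/1.812/2.401/0.312 ms"
)
stats = parse_ping_summary(summary)
```

Feed it the captured stdout of `ping -c 4 $ROBOT_IP` and gate your launch script on `link_ok(stats)`.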
| Module | What It Does |
|---|---|
| GO2Connection | WebRTC connection to the robot — streams LiDAR, video, odometry |
| VoxelGridMapper | Builds a 3D voxel map using column-carving (CUDA accelerated) |
| CostMapper | Converts 3D map → 2D costmap via terrain slope analysis |
| ReplanningAStarPlanner | Continuous A* path planning with dynamic replanning |
| WavefrontFrontierExplorer | Autonomous exploration of unmapped areas |
| RerunBridge | 3D visualization in browser |
| WebsocketVis | Command center at localhost:7779 |
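To build intuition for what CostMapper's terrain-slope analysis does, here is a minimal NumPy sketch — a deliberate simplification, not DimOS's actual implementation, and the function name and 30° threshold are illustrative assumptions:

```python
import numpy as np

def slope_costmap(height: np.ndarray, cell_size: float,
                  max_slope_deg: float = 30.0) -> np.ndarray:
    """Turn a 2D heightmap (meters) into a 0..1 traversal costmap.

    Cost scales with terrain slope; cells steeper than
    `max_slope_deg` saturate at 1.0 (treated as lethal).
    """
    # Finite-difference height gradients, scaled by the cell size.
    dz_dy, dz_dx = np.gradient(height, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.clip(slope_deg / max_slope_deg, 0.0, 1.0)

# Flat ground with a 0.5 m ledge: flat cells cost 0, the ledge edge is lethal.
h = np.zeros((5, 5))
h[:, 3:] = 0.5
cost = slope_costmap(h, cell_size=0.1)
```

The real pipeline starts from the 3D voxel map rather than a clean heightmap, but the core idea is the same: steep height changes between neighboring cells become high-cost regions the planner avoids.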
From the command center (localhost:7779):
- Click on the map to set navigation goals
- Toggle autonomous exploration
- Monitor robot pose, costmap, and planned path
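When you click a goal, the planner searches the 2D costmap for a path. The sketch below is a generic grid A* over such a costmap — a toy illustration of the idea, not ReplanningAStarPlanner itself (which additionally replans continuously as the map updates):

```python
import heapq

def astar(grid, start, goal):
    """4-connected grid A*; cells with cost >= 1.0 are lethal obstacles.

    `grid` is a 2D list of floats in [0, 1]. Returns the path as a list
    of (row, col) cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g_best = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:  # reconstruct the path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] >= 1.0:
                continue
            ng = g_best[cur] + 1.0 + grid[r][c]  # step cost plus cell cost
            if ng < g_best.get((r, c), float("inf")):
                g_best[(r, c)] = ng
                parent[(r, c)] = cur
                heapq.heappush(open_set, (ng + h((r, c)), (r, c)))
    return None  # goal unreachable

# A wall of lethal cells forces a detour around the right side.
grid = [[0.0, 0.0, 0.0],
        [1.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
path = astar(grid, (0, 0), (2, 0))
```

Dynamic replanning then amounts to rerunning this search from the robot's current cell whenever the costmap changes along the active path.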
```shell
uv pip install 'dimos[base,unitree,sim]'
dimos --simulation run unitree-go2
```

Full navigation stack in MuJoCo — same code, simulated robot.
Natural language control with an LLM agent that understands physical space:
```shell
export OPENAI_API_KEY=<YOUR_KEY>
export ROBOT_IP=<YOUR_GO2_IP>
dimos run unitree-go2-agentic
```

Then use the human CLI to talk to the agent:

```shell
humancli
> explore the space
```

The agent subscribes to camera, LiDAR, and spatial memory streams — it sees what the robot sees.
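The subscription model can be pictured with a toy push-based stream in plain Python — a stand-in for the RxPY observables DimOS actually uses, with illustrative names throughout:

```python
class Stream:
    """Minimal push-based stream: subscribers get every emitted item."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def emit(self, item):
        for callback in self._subscribers:
            callback(item)

# The agent keeps the most recent observation from each sensor stream.
latest = {}
camera, lidar = Stream(), Stream()
camera.subscribe(lambda frame: latest.update(camera=frame))
lidar.subscribe(lambda scan: latest.update(lidar=scan))

camera.emit("frame_0")
lidar.emit("scan_0")
```

Each incoming frame or scan updates the agent's view asynchronously, so a language-model query always sees the freshest sensor state rather than a stale snapshot.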
| Blueprint | Description |
|---|---|
| unitree-go2-basic | Connection + visualization (no navigation) |
| unitree-go2 | Full navigation stack |
| unitree-go2-agentic | Navigation + LLM agent |
| unitree-go2-agentic-ollama | Agent with local Ollama models |
| unitree-go2-agentic-mcp | Agent with MCP tool access |
| unitree-go2-spatial | Navigation + spatial memory |
| unitree-go2-detection | Navigation + object detection |
| unitree-go2-ros | ROS 2 bridge mode |
- Navigation Stack — column-carving voxel mapping, costmap generation, A* planning
- Visualization — Rerun, Foxglove, performance tuning
- Data Streams — RxPY streams, backpressure, quality filtering
- Transports — LCM, SHM, DDS
- Blueprints — composing modules