[Official] AstraNav-Memory: Contexts Compression for Long Memory. An image-centric memory framework for lifelong embodied navigation via visual context compression and Qwen2.5-VL. SOTA on GOAT-Bench & HM3D-OVON.


AstraNav-Memory: Contexts Compression for Long Memory

Home Page arXiv

🌐 Astra Ecosystem

AstraNav-Memory is part of the Astra series for robust embodied intelligence:

  • AstraNav-Memory (This Repo): Focuses on long-term implicit memory through visual context compression.
  • AstraNav-World: Focuses on predictive planning via tightly coupled visual foresight and action generation.
  • OmniNav: Focuses on unified multi-paradigm navigation and real-time execution via a fast-slow system.

Core Highlights

🤏 20× Visual Context Compression: Employs a lightweight PixelUnshuffle+Conv tokenizer to compress frames into ~30 tokens, expanding context capacity from tens to hundreds of images for massive, long-term implicit memory.

🎞️ Qwen-DINO Unified Policy: Couples Qwen2.5-VL reasoning with frozen DINOv3 features in an end-to-end framework, replacing fragile object-centric pipelines with a robust, scalable image-centric memory interface.

🗺️ Lifelong Navigational Mastery: Sets new SOTA benchmarks on GOAT-Bench and HM3D-OVON by balancing efficient exploration in novel environments with optimal, high-speed pathfinding in familiar ones.
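The PixelUnshuffle+Conv tokenizer mentioned above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the repository's actual module: the class name, channel sizes, and downscale factor are all assumed for illustration.

```python
import torch
import torch.nn as nn

class VisualContextCompressor(nn.Module):
    """Hypothetical sketch of a PixelUnshuffle+Conv frame tokenizer.

    PixelUnshuffle trades spatial resolution for channels, so a single 1x1
    convolution can then project each downscaled cell to the LLM hidden size,
    shrinking the per-frame token count by downscale**2.
    """

    def __init__(self, in_channels=64, hidden=128, downscale=4):
        super().__init__()
        # (B, C, H, W) -> (B, C*d^2, H/d, W/d)
        self.unshuffle = nn.PixelUnshuffle(downscale)
        # Project the re-packed channels to the language-model hidden size.
        self.proj = nn.Conv2d(in_channels * downscale**2, hidden, kernel_size=1)

    def forward(self, feat):
        # feat: (B, C, H, W) feature map from a vision encoder
        x = self.proj(self.unshuffle(feat))
        # Flatten the spatial grid into a short token sequence: (B, (H/d)*(W/d), hidden)
        return x.flatten(2).transpose(1, 2)

comp = VisualContextCompressor(in_channels=64, hidden=128, downscale=4)
tokens = comp(torch.randn(1, 64, 24, 24))
print(tokens.shape)  # torch.Size([1, 36, 128]): a 24x24 grid becomes 36 tokens
```

With a larger downscale factor, the same mechanism would pack each frame into the ~30-token budget the highlight describes, which is what lets the context window hold hundreds of frames instead of tens.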

Image-Centric Memory

Compression Method

🔥 Latest News!!

  • January 20, 2026: We release the inference checkpoints with various downsampling rates and context lengths.
  • December 30, 2025: We release the training and inference code.

Quickstart

🧰 Installation

Clone the repo:

git clone https://github.com/amap-cvlab/AstraNav-Memory.git

Install training dependencies:

# Ensure torch >= 2.6.0
cd train_code
pip install -r requirements.txt
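The comment above asks for torch >= 2.6.0. A small standalone helper (hypothetical, not part of the repo) shows one way to check a version string that may carry a local build tag:

```python
def meets_min_version(ver: str, minimum=(2, 6)) -> bool:
    """True if a version string like '2.6.0+cu124' satisfies a (major, minor) minimum."""
    core = ver.split("+")[0]                      # drop the local build tag, e.g. '+cu124'
    parts = [int(p) for p in core.split(".")[:2]] # compare only major.minor
    return tuple(parts) >= minimum

print(meets_min_version("2.6.0"))        # True
print(meets_min_version("2.5.1+cu121"))  # False
```

In practice you would pass `torch.__version__` to this helper after installation.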

Install habitat-sim and habitat-lab for inference:

● habitat-sim

git clone https://github.com/facebookresearch/habitat-sim.git && cd habitat-sim && git checkout v0.2.3
pip install -r requirements.txt
python setup.py install --headless

● habitat-lab

git clone https://github.com/chongchong2025/habitat-lab && cd habitat-lab && git checkout v0.2.3_waypoint
python -m pip install -r habitat-baselines/habitat_baselines/rl/requirements.txt
python -m pip install -r habitat-baselines/habitat_baselines/rl/ddppo/requirements.txt
pip install -e .
cd habitat-baselines
pip install -e .

🎁 Model

Model              Download Link
DS16-Context50     ModelScope
DS16-Context100    ModelScope
DS4-Context100     ModelScope
DS64-Context100    ModelScope

⚡ Inference

● Goat-Bench

cd inference_code/hm3d-online
python goat-nav.py

● OVON

cd inference_code/hm3d-online
python ovon-nav.py

⚡ Training

cd train_code
bash run_train.sh

🏛️ Citation

If you find this repository useful, please consider giving it a star ⭐ and a citation:

@article{ren2025astranav-memory,
    title={AstraNav-Memory: Contexts Compression for Long Memory},
    author={Botao Ren and Junjun Hu and Xinda Xue and Minghua Luo and Jintao Chen and Haochen Bai and Liangliang You and Mu Xu},
    year={2025},
    eprint={2512.21627},
    archivePrefix={arXiv},
}

Acknowledgments

Thanks to OmniNav, MTU3D, and OVON for open-sourcing their training-data construction and closed-loop inference code. Their contributions have significantly enriched the open-source community.
