VULCAN: Vision-Language-Model Enhanced Multi-Agent Cooperative Navigation for Indoor Fire-Disaster Response

Abstract

This paper presents VULCAN, a Vision-Language-Model enhanced framework for multi-agent cooperative navigation in indoor fire-disaster response scenarios. By integrating state-of-the-art vision-language models with multi-agent coordination mechanisms, VULCAN enables efficient, adaptive navigation for emergency-response robots operating in complex and hazardous environments.

Publication
In IEEE INFOCOM 2026 Workshop on Embodied Intelligence Networks (EIN)

Our paper VULCAN has been accepted to the IEEE INFOCOM 2026 Workshop on Embodied Intelligence Networks (EIN)!

This work leverages Vision-Language Models to enhance situational awareness and coordination among emergency-response robots, enabling multi-agent cooperative navigation in indoor fire-disaster scenarios.