NaviSense: How AI and Machine Vision Are Revolutionizing Accessibility for the Visually Impaired

In a significant leap forward for assistive technology, researchers at Penn State University have unveiled NaviSense, an innovative smartphone-based system poised to transform how visually impaired individuals interact with their environment. This AI-powered application leverages advanced machine vision and language models to identify everyday objects in real time, offering unprecedented autonomy and speed.

NaviSense, which recently earned the Best Audience Choice Poster Award at the ACM SIGACCESS ASSETS ’25 conference, addresses critical limitations of current assistive navigation tools. Many existing solutions either rely on human support teams or require pre-loaded object databases, severely restricting their flexibility and real-world applicability.

Breaking Bottlenecks with Real-Time AI

As explained by Vijaykrishnan Narayanan, Evan Pugh University Professor and A. Robert Noll Chair Professor of Electrical Engineering, the need to preload object models has been a major bottleneck. “This is highly inefficient and gives users much less flexibility when using these tools,” Narayanan noted. NaviSense shatters this paradigm by connecting to an external server powered by sophisticated Large Language Models (LLMs) and Vision-Language Models (VLMs).

This powerful combination enables NaviSense to process voice commands, scan the surroundings, and identify target objects on the fly, without the need for static, pre-programmed libraries. “Using VLMs and LLMs, NaviSense can recognize objects in its environment in real-time based on voice commands, without needing to preload models of objects,” Narayanan emphasized. “This is a major milestone for this technology.”
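To make that data flow concrete, here is a minimal, hypothetical sketch of the request path described above: a spoken command and a camera frame go to a server-hosted vision-language model, which grounds the request in the frame and returns a match. The function names, the Detection structure, and the JSON reply shape are all illustrative assumptions, not NaviSense's published API.

```python
import json
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # object the model matched, e.g. "water bottle"
    cx: float     # normalized center x of the match in the frame (0.0-1.0)
    cy: float     # normalized center y of the match in the frame (0.0-1.0)

def query_vlm(frame_jpeg: bytes, request: str) -> Detection:
    """Stand-in for the server round trip. A real client would POST the
    camera frame plus the transcribed voice request; a canned JSON reply
    keeps this sketch self-contained and runnable."""
    reply = '{"label": "water bottle", "cx": 0.62, "cy": 0.44}'
    return Detection(**json.loads(reply))

# The key property: the target arrives as free-form speech at run time,
# so no object models need to be preloaded on the device.
print(query_vlm(b"<jpeg bytes>", "find my water bottle"))
```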

Designed with User Input, Delivering Intuitive Guidance

The development of NaviSense was deeply rooted in user experience, with extensive input from visually impaired participants. Ajay Narayanan Sridhar, a computer engineering doctoral student and lead student investigator, highlighted how these interviews shaped the app’s core functionalities, mapping directly to real-world challenges.

The system intelligently filters out irrelevant objects based on spoken requests and can engage in conversational feedback, asking clarifying questions when needed – a flexibility often missing in older systems. A standout feature is its ‘hand guidance’ capability. By tracking the smartphone’s movement, NaviSense provides precise audio and haptic cues to guide the user’s hand directly to the identified object. This feature, consistently requested by users during surveys, fills a crucial gap in active physical navigation assistance.
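The hand-guidance behavior can be illustrated with a small sketch: compare the detected object's position against the center of the camera frame (a proxy for where the phone, and thus the hand, is pointing) and emit coarse spoken cues. The dead-zone threshold and cue wording here are illustrative assumptions, not values from the paper.

```python
def guidance_cue(cx: float, cy: float, deadzone: float = 0.08) -> str:
    """Map a normalized object center (origin top-left, 0.0-1.0 per axis)
    to a coarse spoken cue; within the dead zone, the hand is on target."""
    dx = cx - 0.5   # positive: object is right of frame center
    dy = cy - 0.5   # positive: object is below frame center
    if abs(dx) <= deadzone and abs(dy) <= deadzone:
        return "bullseye"
    horiz = "right" if dx > deadzone else "left" if dx < -deadzone else ""
    vert = "down" if dy > deadzone else "up" if dy < -deadzone else ""
    return " and ".join(c for c in (horiz, vert) if c)

# Example: an object detected up and to the right of frame center
print(guidance_cue(0.72, 0.31))   # -> "right and up"
```

In a real app these strings would be rendered through text-to-speech and paired with haptic pulses, matching the left/right, up/down cues a trial participant describes below.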

Promising Performance and Commercial Readiness

Early trials with 12 participants demonstrated NaviSense’s superior performance compared to two commercial alternatives. The system significantly reduced object search times and provided more accurate detection, leading to a much-improved overall user experience. One enthusiastic participant praised its directional cues: “I like the fact that it is giving you cues to the location of where the object is, whether it is left or right, up or down, and then bullseye, boom, you got it.”

With support from the U.S. National Science Foundation, the Penn State team is now focusing on refining power consumption and optimizing model efficiency. According to Narayanan, the technology is rapidly approaching commercial readiness, promising a future where AI-driven assistance offers unparalleled independence and accessibility for the visually impaired.

