Visionary

Vision Beyond Sight.

Empowering the visually impaired with cutting-edge computer vision technology for seamless indoor navigation and daily assistance.


Our Purpose

Globally, over 285 million people are visually impaired, with 39 million being completely blind. These individuals face daily challenges that many of us take for granted—recognizing familiar faces, identifying objects, reading signs, and safely navigating public spaces.

62% of visually impaired people report difficulties in performing basic tasks independently

Over 90% of visually impaired participants in one study reported collisions with street obstacles while walking in local surroundings

From our own personal experiences, seeing loved ones deal with these frustrations has been eye-opening. Simple activities, like recognizing a friend's face in a crowd or reading a street sign, become insurmountable obstacles. Constant reliance on others for help can deeply impact one's sense of independence and dignity.

A majority of visually impaired individuals report a loss of confidence when navigating new environments


Our Solution

This is where Visionary comes in. We developed Visionary to address these critical challenges head-on. By using cutting-edge machine learning, computer vision, and real-time object detection, Visionary empowers users to take back control of their environment. It detects familiar faces, reads aloud important signs and text, and provides real-time feedback about the objects in front of them—restoring confidence and autonomy in daily life.

With Visionary, we aim to change that. Our app allows users to recognize their surroundings, move safely through spaces, and interact with the world around them, without barriers.

At Visionary, we believe that technology has the power to transform lives. Our mission is to harness these advancements to break down the barriers faced by the visually impaired, giving them the tools they need to regain their freedom. We are driven by the goal of improving accessibility and providing solutions that help people live with confidence and independence every day.

Simple. Intuitive. Accessible.

Shake to sync your data

Hold to Start and Stop


Intuitive Design

Accessibility is the core of our app. The design is tailored for ease of use, with gesture controls that make every feature reachable even without visual cues. Syncing contacts for familiar face detection takes just three quick shakes of your device, making setup hassle-free. To activate real-time video detection, simply hold down the screen and the app springs into action, delivering instant feedback. Every interaction is designed to be smooth, intuitive, and efficient, so users can navigate the world with confidence and ease.
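For illustration only, here is a minimal sketch of how these gestures could be wired up in a UIKit view controller, using the system shake event and a long-press recognizer. The shake-counting window and the syncContacts/startDetection/stopDetection hooks are hypothetical placeholders, not Visionary's actual code.

```swift
import UIKit

// Sketch of the gesture layer described above, assuming a UIKit view controller.
final class GestureViewController: UIViewController {

    private var shakeCount = 0
    private var lastShake = Date.distantPast

    override func viewDidLoad() {
        super.viewDidLoad()
        // Hold anywhere on the screen to start/stop real-time detection.
        let hold = UILongPressGestureRecognizer(target: self, action: #selector(handleHold(_:)))
        view.addGestureRecognizer(hold)
    }

    // Shake gestures arrive through the responder chain, so claim first responder.
    override var canBecomeFirstResponder: Bool { true }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        guard motion == .motionShake else { return }
        // Count shakes that arrive within a short window; three in a row triggers sync.
        shakeCount = Date().timeIntervalSince(lastShake) < 1.5 ? shakeCount + 1 : 1
        lastShake = Date()
        if shakeCount >= 3 {
            shakeCount = 0
            syncContacts()           // placeholder: refresh familiar-face data
        }
    }

    @objc private func handleHold(_ gesture: UILongPressGestureRecognizer) {
        switch gesture.state {
        case .began:
            startDetection()         // placeholder: begin camera + model pipeline
        case .ended, .cancelled:
            stopDetection()
        default:
            break
        }
    }

    private func syncContacts() { /* ... */ }
    private func startDetection() { /* ... */ }
    private func stopDetection() { /* ... */ }
}
```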

Cutting-Edge "Digital Walking Stick" Technology


SmartSight - Real-time Object Detection

Our application offers real-time object detection, empowering visually impaired users to navigate their surroundings safely. By leveraging YOLOv8 and TensorFlow, the system identifies and classifies objects with precision, providing instantaneous feedback to the user. Whether it's a nearby obstacle or a household item, users are always aware of their environment in real time. We also use LiDAR for distance detection, so the user knows not only what object is nearby but also how much space lies between them and it.
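As a rough illustration of the LiDAR distance step, the sketch below samples the ARKit scene-depth map at the center of a detected object's bounding box. It assumes detections arrive as normalized bounding boxes (for example, from a YOLOv8 model converted for on-device use); the Detection type and function names are illustrative, not Visionary's implementation.

```swift
import ARKit
import CoreVideo

// Illustrative detection type: a label plus a normalized (0...1) bounding box.
struct Detection {
    let label: String
    let boundingBox: CGRect
}

/// Returns the approximate distance (in meters) to the center of a detected object,
/// sampled from the ARKit scene-depth map produced by the LiDAR sensor.
func distance(to detection: Detection, in frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    // Sample the depth value at the center of the bounding box.
    let x = min(Int(detection.boundingBox.midX * CGFloat(width)), width - 1)
    let y = min(Int(detection.boundingBox.midY * CGFloat(height)), height - 1)

    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    // The scene-depth buffer stores one 32-bit float per pixel, in meters.
    let rowPtr = base.advanced(by: y * bytesPerRow)
    let depth = rowPtr.assumingMemoryBound(to: Float32.self)[x]
    return depth.isFinite ? depth : nil
}
```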


FaceTrace - Familiar Face Detection

Recognizing familiar faces is made effortless with our familiar face detection feature. Built on advanced machine learning models using Python, OpenCV, and FaceNet, our system distinguishes familiar faces from unknown individuals, delivering a highly personalized experience. Now, visually impaired users can identify friends and family, ensuring comfort and confidence in social settings.
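To illustrate just the matching step, the sketch below compares a face embedding (such as the 128-dimensional vectors a FaceNet-style model produces) against stored embeddings of known contacts using cosine similarity. The embedding extraction itself is assumed to happen in the OpenCV/FaceNet pipeline described above; the KnownFace type, function names, and the similarity threshold are illustrative.

```swift
import Foundation

// A contact the user has synced, with a precomputed face embedding.
struct KnownFace {
    let name: String
    let embedding: [Float]
}

// Cosine similarity between two embeddings of equal dimension.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "Embeddings must have the same dimension")
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in a.indices {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (normA.squareRoot() * normB.squareRoot() + 1e-8)
}

/// Returns the best-matching contact, or nil if no stored face is similar enough.
func identify(embedding: [Float],
              among knownFaces: [KnownFace],
              threshold: Float = 0.6) -> KnownFace? {
    let best = knownFaces
        .map { (face: $0, score: cosineSimilarity(embedding, $0.embedding)) }
        .max { $0.score < $1.score }
    guard let match = best, match.score >= threshold else { return nil }
    return match.face
}
```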


InkSense - Text Processing

Our sophisticated text processing system reads aloud signs, labels, and other written information in the user's environment. Powered by Apple's OCR for high-accuracy text extraction, the application converts visual text into actionable information. Users can rely on this feature for reading indoor signs, directions, and more with ease. We fine-tuned our YOLO model on a dataset of 40,000 images to locate text more reliably, allowing more accurate text detection and retrieval.
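As an example of the OCR step, the sketch below uses Apple's Vision framework (VNRecognizeTextRequest), one way to access Apple's on-device text recognition. The cropping of text regions found by the YOLO model is omitted for brevity, and the function name is illustrative.

```swift
import Vision
import UIKit

/// Extracts recognized text lines from an image and passes them to the caller.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            return completion([])
        }
        // Take the top candidate string for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate       // favor accuracy over speed for signs/labels
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```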


VoiceView - Text-to-Speech Integration

With seamless text-to-speech (TTS) integration, our application speaks directly to the user, translating visual information into clear, natural speech. Using Apple's SpeakText technology, the system ensures smooth and responsive audio feedback, offering users a real-time auditory experience that enhances their independence and confidence in any setting.
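For illustration, the sketch below uses AVFoundation's AVSpeechSynthesizer, a common way to produce on-device speech on iOS; the exact speech API Visionary uses may differ, and the SpeechAnnouncer wrapper is illustrative.

```swift
import AVFoundation

// Minimal wrapper around the system speech synthesizer. The synthesizer must stay
// alive for the duration of playback, hence the long-lived class.
final class SpeechAnnouncer {
    private let synthesizer = AVSpeechSynthesizer()

    /// Speaks a short description aloud, interrupting any ongoing announcement
    /// so feedback stays real-time.
    func announce(_ text: String) {
        if synthesizer.isSpeaking {
            synthesizer.stopSpeaking(at: .immediate)
        }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Usage: keep one announcer around, e.g.
//   announcer.announce("Door, about two meters ahead.")
```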

Coming Soon


Geolocation Feature

We're excited to announce that our geolocation feature is currently in development, and a beta version is available in the demo. This addition will transform how visually impaired individuals navigate both indoor and outdoor environments by combining our Digital Walking Stick technology with geolocation, making Visionary a one-stop application for visually impaired users. Stay tuned for updates!