Obstacle detection and object identification play a crucial role in enhancing the safety, independence, and mobility of blind and visually impaired individuals. They empower users to navigate their surroundings with greater confidence and autonomy, enabling them to participate more fully in daily activities and interact with their environment more effectively. This is achieved with a number of smart sensors, each playing a specific role.
Obstacle
Detection:
Our
wearable
device
is
equipped
with
five
LiDAR
(Light
Detection
and
Ranging)
sensors,
three at the front (covering the left, the right, and low-height obstacles), one at the left shoulder, and one at the right shoulder. These sensors emit laser pulses
and
detect
the
reflections
to
measure
the
distance
between
the
user
and
nearby
obstacles.
When
an
obstacle
is
detected
within
a
certain
range,
the
device
alerts
the
user
through
auditory
cues
and
haptic feedback (vibrations).
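As a minimal sketch of this range-checking logic (the sensor names, the 150 cm threshold, and the helper function below are assumptions for illustration, not the device's actual firmware):

```python
# Minimal sketch of the range check described above: given one reading per
# LiDAR sensor, decide which positions should trigger an alert.
# The sensor names and the 150 cm threshold are illustrative assumptions.

ALERT_DISTANCE_CM = 150  # hypothetical alert threshold

SENSOR_POSITIONS = ("front_left", "front_right", "front_low",
                    "shoulder_left", "shoulder_right")

def obstacles_in_range(readings_cm, threshold_cm=ALERT_DISTANCE_CM):
    """Return the sensor positions whose reading falls inside the alert range."""
    return [pos for pos in SENSOR_POSITIONS
            if readings_cm.get(pos, float("inf")) < threshold_cm]

# Example: an obstacle 90 cm away on the front-left sensor triggers an alert.
print(obstacles_in_range({"front_left": 90, "front_right": 400,
                          "front_low": 300, "shoulder_left": 500,
                          "shoulder_right": 500}))
# -> ['front_left']
```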
Object
Identification:
In
addition
to
detecting
obstacles,
the
device
can
also
classify
objects
based
on
their
characteristics
by
using
special
cameras
and
AI-based computer-vision software. It can differentiate between low-lying objects such as curbs or steps, hanging obstacles such as tree branches, and larger obstacles such as walls or furniture.
This
classification
helps
the
user
understand
the
nature
of
the
obstacle
and navigate accordingly.
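One way such a classifier's output could be collapsed into the three coarse categories above is sketched below; the raw label names and the mapping are assumptions for illustration, not the actual vision model's vocabulary.

```python
# Illustrative mapping from a vision model's raw object label to the coarse
# obstacle categories mentioned in the text. The label sets are assumed.

LOW_LYING = {"curb", "step", "pothole"}
HANGING   = {"tree_branch", "sign", "awning"}
LARGE     = {"wall", "furniture", "vehicle"}

def coarse_category(raw_label):
    """Collapse a fine-grained detection label into a navigation-relevant category."""
    if raw_label in LOW_LYING:
        return "low-lying obstacle"
    if raw_label in HANGING:
        return "hanging obstacle"
    if raw_label in LARGE:
        return "large obstacle"
    return "unknown obstacle"

print(coarse_category("curb"))         # -> low-lying obstacle
print(coarse_category("tree_branch"))  # -> hanging obstacle
```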
Sensors
and
Feedback
Mechanisms:
Wearable
devices
for
the
blind
often
utilize
a
combination
of
sensors
and
feedback
mechanisms
to
provide
real-time
information
to
the
user.
Our
system
uses
sensors
that
can
detect
obstacles
in
the
user's
path,
while
vibrational
feedback
patterns
can
convey
the
distance
and
direction
of
the
obstacle.
The vibration feedback is especially used to signal objects at the left or right side of the person.
Auditory
cues,
such
as
beeps
and
spoken
instructions,
also
supplement
the
feedback
to
provide
additional
information
about
the environment.
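A hedged sketch of how direction and distance might be mapped to a feedback channel and intensity follows; the motor layout, maximum range, and channel names are assumptions, not the actual firmware.

```python
# Sketch: convert an obstacle's direction and distance into a feedback action.
# Side obstacles are signalled by vibration (left/right motor), frontal ones
# by an auditory cue; intensity grows as the obstacle gets closer.

MAX_RANGE_CM = 300  # assumed maximum range at which feedback is given

def vibration_intensity(distance_cm):
    """Scale intensity from 0.0 (at maximum range) to 1.0 (touching)."""
    clipped = min(max(distance_cm, 0.0), MAX_RANGE_CM)
    return 1.0 - clipped / MAX_RANGE_CM

def feedback_for(direction, distance_cm):
    """Return (feedback channel, intensity) for one detected obstacle."""
    intensity = vibration_intensity(distance_cm)
    if direction in ("shoulder_left", "front_left"):
        return ("vibrate_left", intensity)
    if direction in ("shoulder_right", "front_right"):
        return ("vibrate_right", intensity)
    return ("audio_beep", intensity)  # frontal / low obstacles -> auditory cue

print(feedback_for("shoulder_left", 80))   # strong left vibration
print(feedback_for("front_low", 250))      # soft beep
```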
Data
Processing
and
Interpretation:
The
sensor
data
collected
by
the
wearable
device
is
processed
in real time
and
anonymously
to
identify
obstacles
and
objects
in
the
user's
surroundings.
This
processing
involves
algorithms
that
analyze
the
sensor
data
and
classify
objects
based
on
predefined
criteria
set
by
the
user
during the setup process. Machine learning techniques are also employed
to
improve
the
accuracy
of
object
recognition
over
time,
based
on
the
user's
feedback
and interaction with the device.
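As an illustration of how user feedback could gradually improve recognition, the sketch below keeps a running centroid per obstacle class and nudges it with each user-confirmed example; the feature vectors and class names are assumptions, and the actual device could use a more capable model.

```python
# Sketch of a lightweight, feedback-driven classifier: each obstacle class is
# represented by a centroid of feature vectors, and user corrections update
# the centroids over time. Features and class names are assumed.

from collections import defaultdict

class FeedbackClassifier:
    def __init__(self):
        self.centroids = {}             # class name -> feature centroid
        self.counts = defaultdict(int)  # class name -> examples seen so far

    def update(self, features, true_label):
        """Incorporate one user-confirmed example (running mean per class)."""
        n = self.counts[true_label]
        if true_label not in self.centroids:
            self.centroids[true_label] = list(features)
        else:
            centroid = self.centroids[true_label]
            for i, value in enumerate(features):
                centroid[i] = (centroid[i] * n + value) / (n + 1)
        self.counts[true_label] = n + 1

    def predict(self, features):
        """Return the class whose centroid is closest to the feature vector."""
        def squared_distance(centroid):
            return sum((a - b) ** 2 for a, b in zip(centroid, features))
        return min(self.centroids, key=lambda label: squared_distance(self.centroids[label]))

clf = FeedbackClassifier()
clf.update([0.2, 0.1], "curb")      # user confirms a low obstacle
clf.update([0.9, 0.8], "wall")      # user corrects a misread large obstacle
print(clf.predict([0.25, 0.15]))    # -> curb
```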
User
Interface
and
Interaction:
Wearable
devices
for
the
blind
typically
feature
user-friendly interfaces
designed
for
easy
interaction
by
individuals
with
visual
impairments.
Our
system
includes
tactile
buttons
and, most importantly, voice commands, so that a blind user can operate the device easily.
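A simple sketch of the voice-command side is shown below; the phrases, handler names, and actions are assumptions, and the speech-recognition front end that produces the transcript is not shown.

```python
# Sketch of a voice-command dispatcher: a recognized phrase is matched
# against a small command table. Phrases and actions are illustrative.

def start_navigation():
    print("Starting navigation...")

def repeat_last_alert():
    print("Repeating last obstacle alert...")

def adjust_volume_up():
    print("Increasing feedback volume...")

COMMANDS = {
    "start navigation": start_navigation,
    "repeat last alert": repeat_last_alert,
    "volume up": adjust_volume_up,
}

def handle_voice_command(transcript):
    """Dispatch a recognized phrase to its handler, or ask the user to retry."""
    action = COMMANDS.get(transcript.strip().lower())
    if action:
        action()
    else:
        print("Sorry, I did not understand that command.")

handle_voice_command("Start Navigation")  # -> Starting navigation...
```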
Integration
with
Navigation
Systems:
Our
wearable
device
includes an integrated navigation system and, optionally, a smartphone app
to
provide
additional
functionality
such
as
route
planning,
destination
guidance,
and
location-based
information.
This
integration
enables
users
to
navigate
unfamiliar
environments
more
effectively and independently.
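To illustrate the kind of guidance computation such an integration could rely on, the sketch below derives the great-circle distance (haversine) and initial bearing from the user's GPS position to the next route waypoint; the coordinates and waypoint are purely illustrative.

```python
# Sketch of guidance math for destination guidance: distance and heading
# from the user's current GPS fix to the next route waypoint.

from math import radians, degrees, sin, cos, atan2, sqrt

EARTH_RADIUS_M = 6371000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in metres, initial bearing in degrees) between two points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)

    # Haversine great-circle distance.
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    distance = 2 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1 - a))

    # Initial bearing from the current position toward the waypoint.
    bearing = degrees(atan2(sin(dlon) * cos(p2),
                            cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dlon)))
    return distance, bearing % 360

# Example: distance and heading from the user's position to the next waypoint.
d, b = distance_and_bearing(40.7580, -73.9855, 40.7614, -73.9776)
print(f"Next waypoint: {d:.0f} m away, heading {b:.0f} degrees")
```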