CN116698045B - Walking assisted navigation method and system for visually impaired people in a nursing home - Google Patents

Walking assisted navigation method and system for visually impaired people in a nursing home

Info

Publication number
CN116698045B
CN116698045B (application CN202310966256.7A)
Authority
CN
China
Prior art keywords
information
auxiliary
gesture
navigation
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310966256.7A
Other languages
Chinese (zh)
Other versions
CN116698045A (en)
Inventor
陈放
汪陆生
高红梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guozhengtong Technology Co ltd
Original Assignee
Guozhengtong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guozhengtong Technology Co ltd filed Critical Guozhengtong Technology Co ltd
Priority to CN202311299138.1A (published as CN117357380A)
Priority to CN202310966256.7A (granted as CN116698045B)
Publication of CN116698045A
Application granted
Publication of CN116698045B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Mathematical Physics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application discloses a walking assisted navigation method and system for visually impaired people in a nursing home, and relates to the technical field of navigation for visually impaired people. The method comprises the following steps: an image acquisition device acquires image information according to first coordinate information; person identification information and person gesture information are acquired according to the image information; a personalized auxiliary navigation mode and destination information are acquired according to the person identification information, the person gesture information and an auxiliary knowledge graph; a first personalized auxiliary navigation method is generated according to the personalized auxiliary navigation mode and the destination information; and the first personalized auxiliary navigation method is sent to the walking assisted navigation system for visually impaired people in the nursing home, so that the system performs assisted navigation accordingly. The application enables a visually impaired person, while walking along a corridor to a destination, to make personalized settings according to his or her own needs and to use the guiding mode that suits him or her best.

Description

Walking assisted navigation method and system for visually impaired people in a nursing home
Technical Field
The application relates to the technical field of navigation for visually impaired people, and in particular to a walking assisted navigation method and system for visually impaired people in a nursing home.
Background
At present, public nursing homes are usually built and operated with state funds and supported mainly by local finance. Because local fiscal funds are limited, many nursing homes cannot provide each elderly resident with an independent room equipped with a complete kitchen, washroom and other facilities; typically each floor has only a shared public washroom, and a medical care station is provided on only one floor.
In this case, when an elderly resident needs to use the toilet or visit the medical care station, he or she has to leave the room and walk there.
Most nursing homes are also relatively short-staffed (especially at night), so a caregiver cannot accompany a resident every time he or she needs to go to the toilet or the medical care station. As a result, residents with visual impairments (for example congenital blindness, glaucoma, or eye diseases caused by diabetes) who cannot see, or cannot clearly see, the scene in front of them are unable to walk to the toilet or the medical care station by themselves.
In the prior art, walking assistance for visually impaired people is generally provided by an intelligent guiding device (such as a smart cane or a head-mounted guiding device). This approach has two disadvantages: a certain amount of ambient light is needed around the user, so guidance works poorly in a dark environment; and the device cannot navigate unless the user carries or wears it.
Disclosure of Invention
The invention aims to provide a walking assisted navigation method for visually impaired people in a nursing home that solves at least one of the above technical problems.
The invention provides a walking assisted navigation method for visually impaired people in a nursing home, comprising the following steps:
acquiring room door opening information;
acquiring first coordinate information according to the room door opening information, and sending the first coordinate information to each camera device in the nursing home that can photograph the location indicated by the first coordinate information;
acquiring image information transmitted by each camera device that can capture the scene at the room door according to the first coordinate information;
acquiring first person identification information and first person gesture information according to one or more pieces of the image information;
acquiring an auxiliary knowledge graph, wherein the auxiliary knowledge graph comprises a plurality of person nodes, gesture nodes corresponding to each person node, a personalized auxiliary navigation mode corresponding to each gesture node, and destination information corresponding to each gesture node;
acquiring each gesture node of the person node corresponding to the first person identification information;
performing a similarity measurement between the first person gesture information and each gesture node, so as to obtain the personalized auxiliary navigation mode and destination information corresponding to the gesture node with the highest similarity to the first person gesture information;
generating a first personalized auxiliary navigation method according to the personalized auxiliary navigation mode and the destination information;
and sending the first personalized auxiliary navigation method to the walking assisted navigation system for visually impaired people in the nursing home, so that the system performs assisted navigation according to the first personalized auxiliary navigation method.
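The knowledge-graph lookup in the steps above (person node to gesture nodes, then to a mode and destination by similarity) can be sketched as follows. The graph contents, the cosine measure, and all names are assumptions for illustration; the patent does not fix a particular similarity measure or data layout.

```python
import math

# Hypothetical graph: person node -> list of (gesture feature vector, mode, destination)
ASSIST_GRAPH = {
    "resident_zhang": [
        ([1.0, 0.0, 0.0], "light_guided", "toilet"),
        ([0.0, 1.0, 0.0], "voice_guided", "medical_station"),
    ],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def lookup_navigation(person_id, gesture_vec):
    """Return (mode, destination) of the gesture node most similar to the observed gesture."""
    nodes = ASSIST_GRAPH[person_id]
    best = max(nodes, key=lambda n: cosine_similarity(n[0], gesture_vec))
    return best[1], best[2]
```

In a real system the gesture vectors would be embeddings produced by the trained gesture model, and the graph would be populated per resident.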
Optionally, acquiring the room door opening information comprises:
acquiring a door opening detection signal transmitted by a door opening detector, together with the identification information of the door opening detector.
Optionally, acquiring the first coordinate information according to the room door opening information and sending it to each camera device in the nursing home that can photograph the first coordinate information comprises:
acquiring a coordinate database, wherein the coordinate database comprises at least one preset door opening detector identifier and first coordinate information corresponding to each preset door opening detector identifier;
acquiring the first coordinate information corresponding to the preset door opening detector identifier that is identical to the identification information of the door opening detector;
acquiring a preset camera device coordinate database, wherein the preset camera device coordinate database comprises at least one preset camera device identifier and a photographable coordinate table corresponding to each preset camera device identifier, and each photographable coordinate table comprises at least one piece of preset first coordinate information;
acquiring the preset camera device identifiers of all photographable coordinate tables containing preset first coordinate information identical to the first coordinate information;
and sending the first coordinate information to each camera device corresponding to the acquired preset camera device identifiers.
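The two-table lookup above can be sketched as a pair of dictionaries; `DOOR_COORDS` and `CAMERA_COVERAGE` are hypothetical stand-ins for the coordinate database and the preset camera device coordinate database, with invented identifiers and coordinates.

```python
# Hypothetical databases: door detector -> door coordinate,
# camera -> table of coordinates that camera can photograph.
DOOR_COORDS = {"detector_301": (12.5, 40.0)}
CAMERA_COVERAGE = {
    "cam_A": [(12.5, 40.0), (15.0, 40.0)],
    "cam_B": [(30.0, 8.0)],
}

def cameras_for_door(detector_id):
    """Return every camera whose photographable coordinate table contains the door's coordinate."""
    coord = DOOR_COORDS[detector_id]
    return [cam for cam, table in CAMERA_COVERAGE.items() if coord in table]
```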
Optionally, acquiring the first person identification information and the first person gesture information according to one or more pieces of the image information comprises:
acquiring a trained face neural network model;
extracting the face features in each piece of image information respectively;
inputting the face features into the trained face neural network model so as to acquire the first person identification information;
acquiring a preset gesture navigation database, wherein the preset gesture navigation database comprises preset person identification information and gesture navigation reminder information corresponding to each piece of preset person identification information;
acquiring the gesture navigation reminder information corresponding to the preset person identification information identical to the first person identification information;
sending the gesture navigation reminder information to the corresponding reminder device, so that the reminder device performs a reminding operation according to the gesture navigation reminder information;
acquiring a trained gesture neural network model;
extracting the gesture features in each piece of image information transmitted, after the reminding operation, by the camera devices that can photograph the first coordinate information;
and inputting the gesture features into the trained gesture neural network model so as to acquire the first person gesture information.
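A toy version of this two-stage pipeline follows, with stub functions standing in for the trained face and gesture neural network models; all labels, the reminder text, and the dictionary-based "images" are invented for illustration only.

```python
# Hypothetical reminder database: person -> reminder text spoken before gesture capture.
GESTURE_REMINDERS = {"resident_zhang": "Please make your destination gesture now."}

def recognize_face(image):          # stub for the trained face neural network model
    return image.get("face_label")

def recognize_gesture(image):       # stub for the trained gesture neural network model
    return image.get("gesture_label")

def identify_and_prompt(images, send_reminder):
    """Identify the person, trigger their reminder, then read the subsequent gesture."""
    person = next(recognize_face(img) for img in images if recognize_face(img))
    send_reminder(GESTURE_REMINDERS[person])
    gesture = next(recognize_gesture(img) for img in images if recognize_gesture(img))
    return person, gesture
```

The design point is the ordering: the reminder is issued after identification and before gesture extraction, so the resident knows when to gesture.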
Optionally, the personalized auxiliary navigation mode includes light-guided navigation, voice-guided navigation and mixed-guided navigation.
Optionally, generating the first personalized auxiliary navigation method according to the personalized auxiliary navigation mode and the destination information comprises:
generating a first planned walking track according to the destination information and the first coordinate information, wherein the first planned walking track comprises a plurality of pieces of necessary coordinate information between the first coordinate information and the destination;
acquiring a pre-stored database of the walking assisted navigation system for visually impaired people in the nursing home, wherein the pre-stored database comprises auxiliary device identifiers and preset auxiliary coordinate information corresponding to each auxiliary device identifier;
acquiring each auxiliary device identifier whose preset auxiliary coordinate information is identical to at least one of the plurality of pieces of necessary coordinate information;
and generating the first personalized auxiliary navigation method according to the auxiliary device identifiers and the personalized auxiliary navigation mode.
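A minimal sketch of matching auxiliary devices to the planned walking track, assuming toy waypoint and device tables (all names and coordinates are invented):

```python
# Hypothetical planned tracks and device positions.
WAYPOINTS = {("room_301", "toilet_3F"): [(1, 1), (1, 5), (4, 5)]}
DEVICE_COORDS = {
    "night_light_7": (1, 5),
    "speaker_2": (4, 5),
    "night_light_9": (9, 9),
}

def plan_assistance(origin, destination, nav_mode):
    """Pick every auxiliary device sitting on a waypoint of the planned track."""
    track = WAYPOINTS[(origin, destination)]
    devices = [d for d, c in DEVICE_COORDS.items() if c in track]
    return {"mode": nav_mode, "track": track, "devices": devices}
```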
Optionally, after the first personalized auxiliary navigation method is sent to the walking assisted navigation system for visually impaired people in the nursing home, the method further comprises:
judging whether new room door opening information is acquired within a preset time period, and if so,
acquiring second coordinate information according to the new room door opening information;
judging whether the first coordinate information is identical to the second coordinate information, and if not,
sending the second coordinate information to each camera device in the nursing home that can photograph the second coordinate information;
acquiring the new image information transmitted by each camera device that can capture the scene at the new room door according to the second coordinate information;
acquiring second person identification information and second person gesture information according to one or more pieces of the new image information;
acquiring the auxiliary knowledge graph, which comprises a plurality of person nodes, gesture nodes corresponding to each person node, a personalized auxiliary navigation mode corresponding to each gesture node, and destination information corresponding to each gesture node;
acquiring each gesture node of the person node corresponding to the second person identification information;
performing a similarity measurement between the second person gesture information and each gesture node, so as to obtain the personalized auxiliary navigation mode and destination information corresponding to the gesture node with the highest similarity to the second person gesture information, wherein the navigation mode obtained through the second person gesture information is called the second personalized auxiliary navigation mode, and the destination information obtained through the second person gesture information is called the second destination information;
generating a second personalized auxiliary navigation method according to the second personalized auxiliary navigation mode and the second destination information;
generating a second planned walking track according to the second destination information and the second coordinate information, wherein the second planned walking track comprises a plurality of pieces of necessary coordinate information between the second coordinate information and the second destination;
judging whether the second planned walking track and the first planned walking track share any necessary coordinate information, and if so,
judging whether the personalized auxiliary navigation mode acquired through the first person gesture information conflicts with the second personalized auxiliary navigation mode, and if not,
sending the second personalized auxiliary navigation method to the walking assisted navigation system for visually impaired people in the nursing home, so that the system performs assisted navigation according to the second personalized auxiliary navigation method.
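The shared-waypoint and mode-conflict check above can be sketched as a small predicate. Which mode pairs actually conflict is an assumption here; two light tracks through the same corridor is the obvious candidate, but the text leaves the pairing to configuration.

```python
def can_run_concurrently(track_a, track_b, mode_a, mode_b, conflicting_pairs):
    """A second navigation may start immediately if the two tracks share no
    waypoint, or if the two guiding modes do not conflict with each other."""
    shared = set(track_a) & set(track_b)
    if not shared:
        return True
    return (mode_a, mode_b) not in conflicting_pairs and \
           (mode_b, mode_a) not in conflicting_pairs
```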
Optionally, after the second personalized auxiliary navigation method is sent to the walking assisted navigation system for visually impaired people in the nursing home, the method further comprises:
judging whether the personalized auxiliary navigation mode conflicts with the second personalized auxiliary navigation mode, and if so,
acquiring, through each camera device in the system, the floor information of the pedestrian corresponding to the first person identification information and the floor information of the pedestrian corresponding to the second person identification information;
if the two pedestrians are not on the same floor,
performing assisted navigation on the floor of the pedestrian corresponding to the first person identification information according to the first personalized auxiliary navigation method, generating navigation information about the second person, and notifying the pedestrian corresponding to the first person identification information by voice broadcast;
and performing assisted navigation on the floor of the pedestrian corresponding to the second person identification information according to the second personalized auxiliary navigation method, generating navigation information about the first person, and notifying the pedestrian corresponding to the second person identification information by voice broadcast.
Optionally, after assisted navigation is performed on the floor of the pedestrian corresponding to the second person identification information according to the second personalized auxiliary navigation method, the method further comprises:
monitoring in real time the positions of the pedestrians corresponding to the first person identification information and the second person identification information, and fixing the second personalized auxiliary navigation method to voice-guided navigation once the two pedestrians are on the same floor.
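This same-floor fallback rule reduces to a one-function sketch; the floor values and mode names are illustrative assumptions.

```python
def resolve_mode(floor_first, floor_second, second_mode):
    """Once both pedestrians reach the same floor, force the second navigation
    to voice guidance so two light tracks cannot be confused with each other."""
    if floor_first == floor_second:
        return "voice_guided"
    return second_mode
```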
The application also provides a walking assisted navigation system for visually impaired people in a nursing home, comprising:
a plurality of camera devices, each arranged at a first position of the nursing home, the first positions comprising every corridor and elevator of the nursing home; the camera devices comprise a plurality of rotatable camera devices, which rotate to photograph according to the first coordinate information and the second coordinate information, and a plurality of non-rotatable camera devices, which photograph fixed positions; each corridor is provided with at least one rotatable camera device and at least one non-rotatable camera device, and the non-rotatable camera devices in each corridor together cover the whole shooting area of that corridor;
a plurality of voice broadcasting devices, each arranged at a second position of the nursing home, the second positions comprising every corridor and elevator of the nursing home;
a plurality of night lights, each arranged at a third position of the nursing home, the third positions comprising every corridor and elevator, every toilet door and the door of every functional room of the nursing home, wherein different night lights lit at the same time form different light tracks, and the different light tracks can guide different routes;
and a master controller, connected to each night light, each camera device and each voice broadcasting device respectively, and used for executing the above walking assisted navigation method for visually impaired people in the nursing home.
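The per-corridor equipment requirement stated above (at least one rotatable and one non-rotatable camera device in every corridor) can be expressed as a small validation over a hypothetical `Corridor` record; the record fields are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Corridor:
    name: str
    rotatable_cams: int
    fixed_cams: int
    night_lights: int
    speakers: int

def validate_corridor(c: Corridor) -> bool:
    """Check the minimum camera configuration the system description requires."""
    return c.rotatable_cams >= 1 and c.fixed_cams >= 1
```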
Advantageous effects
The walking assisted navigation method for visually impaired people in a nursing home takes the whole building as the assisted object and distributes the auxiliary devices (such as camera devices, voice broadcasting devices and indicator lights) in every corner of the building. Through the cooperative work of these auxiliary devices, visually impaired people can walk along any corridor to a destination, and the most suitable guiding mode can be selected for each visually impaired person according to his or her personalized settings. This solves the prior-art problems that each auxiliary device can be used only by one dedicated person and that the auxiliary device must be carried or worn by the user, thereby realizing building-wide artificial-intelligence navigation for visually impaired people.
Drawings
Fig. 1 is a flow chart of the walking assisted navigation method for visually impaired people in a nursing home according to an embodiment of the application.
Fig. 2 is a schematic diagram of an electronic device for implementing the method shown in Fig. 1.
Fig. 3 is a schematic diagram of a floor night-light arrangement according to an embodiment of the application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are described in more detail below with reference to the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions. The described embodiments are some, but not all, of the embodiments of the application; they are illustrative, are intended to explain the present application, and should not be construed as limiting it. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application. Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of the walking assisted navigation method for visually impaired people in a nursing home according to an embodiment of the application.
The walking assisted navigation method for visually impaired people in a nursing home shown in Fig. 1 comprises the following steps:
Step 1: acquiring room door opening information;
Step 2: acquiring first coordinate information according to the room door opening information, and sending the first coordinate information to each camera device in the nursing home that can photograph the location indicated by the first coordinate information;
Step 3: acquiring image information transmitted by each camera device that can capture the scene at the room door according to the first coordinate information;
Step 4: acquiring first person identification information and first person gesture information according to one or more pieces of the image information;
Step 5: acquiring an auxiliary knowledge graph, wherein the auxiliary knowledge graph comprises a plurality of person nodes, gesture nodes corresponding to each person node, a personalized auxiliary navigation mode corresponding to each gesture node, and destination information corresponding to each gesture node;
Step 6: acquiring each gesture node of the person node corresponding to the first person identification information;
Step 7: performing a similarity measurement between the first person gesture information and each gesture node, so as to obtain the personalized auxiliary navigation mode and destination information corresponding to the gesture node with the highest similarity to the first person gesture information;
Step 8: generating a first personalized auxiliary navigation method according to the personalized auxiliary navigation mode and the destination information;
Step 9: sending the first personalized auxiliary navigation method to the walking assisted navigation system for visually impaired people in the nursing home, so that the system performs assisted navigation according to the first personalized auxiliary navigation method.
The walking assisted navigation method for visually impaired people in a nursing home takes the whole building as the assisted object and distributes the auxiliary devices (such as camera devices, voice broadcasting devices and indicator lights) in every corner of the building. Through the cooperative work of these auxiliary devices, visually impaired people can walk along any corridor to a destination, and the most suitable guiding mode can be selected for each visually impaired person according to his or her personalized settings. This solves the prior-art problems that each auxiliary device can be used only by one dedicated person and that the auxiliary device must be carried or worn by the user, thereby realizing building-wide artificial-intelligence navigation for visually impaired people.
In this embodiment, the coordinate information of the present application may be generated by SLAM (simultaneous localization and mapping): a robot first traverses the entire building once, thereby creating a virtual map of the building and providing coordinate information for each location.
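Once such a virtual map exists, the rest of the method only needs named locations keyed to map-frame coordinates. A toy stand-in, with invented names and values, might look like this:

```python
import math

# Hypothetical SLAM-derived virtual map: named location -> (x, y) in the map frame.
VIRTUAL_MAP = {
    "room_301_door": (12.5, 40.0),
    "toilet_3F": (30.0, 8.0),
    "medical_station_2F": (5.0, 22.0),
}

def coordinate_of(location):
    return VIRTUAL_MAP[location]

def distance(a, b):
    """Straight-line distance between two map coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])
```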
In this embodiment, acquiring the room door opening information comprises:
acquiring a door opening detection signal transmitted by a door opening detector, together with the identification information of the door opening detector.
For example, the door opening detector comprises an acceleration sensor and a door opening controller. After the acceleration sensor senses that the door has been opened, the door opening controller generates a door opening detection signal and sends the signal, together with the identification information of the door opening detector, to the master controller.
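A sketch of that detector logic, assuming a simple acceleration threshold; the threshold value and identifiers are invented placeholders, not values from the patent.

```python
ACCEL_THRESHOLD = 0.3  # assumed door-swing acceleration threshold, in g

def door_event(accel_g, detector_id):
    """Emit (signal, detector_id) when the sensed acceleration crosses the
    threshold, i.e. the door has been pushed open; otherwise emit nothing."""
    if abs(accel_g) >= ACCEL_THRESHOLD:
        return ("DOOR_OPENED", detector_id)
    return None
```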
In this embodiment, acquiring first coordinate information according to the room door opening information and sending the first coordinate information to each image capturing device capable of capturing the first coordinate information among the image capturing devices of the nursing home includes:
acquiring a coordinate database, wherein the coordinate database comprises at least one preset door opening detector identifier and first coordinate information corresponding to each preset door opening detector identifier;
acquiring first coordinate information corresponding to a preset door opening detector identifier which is identical to the identifier information of the door opening detector;
Acquiring a preset camera device coordinate database, wherein the preset camera device coordinate database comprises at least one preset camera device identifier and a photographable coordinate table corresponding to each preset camera device identifier, and each photographable coordinate table comprises at least one preset first coordinate information;
acquiring preset camera device identifiers corresponding to all photographable coordinate tables with preset first coordinate information identical to the first coordinate information;
and sending the first coordinate information to each camera corresponding to each preset camera identifier according to the acquired preset camera identifiers.
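The two lookups above can be sketched as follows; the database contents and identifiers (`detector-A`, `cam-B`, and so on) are invented for illustration:

```python
# detector id -> first coordinate information (the coordinate database)
COORD_DB = {"detector-A": (1.0, 2.0, 0.0)}

# camera id -> photographable coordinate table (the preset camera coordinate database)
CAMERA_COORD_DB = {
    "cam-B": {(1.0, 2.0, 0.0), (3.0, 2.0, 0.0)},
    "cam-C": {(9.0, 9.0, 0.0)},
}

def cameras_for_door_event(detector_id):
    # First lookup: detector identification -> first coordinate information.
    first_coord = COORD_DB[detector_id]
    # Second lookup: every camera whose photographable table contains it.
    targets = [cam for cam, table in CAMERA_COORD_DB.items() if first_coord in table]
    return first_coord, targets
```

The returned camera list receives the first coordinate information so those devices can aim at the doorway.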
In this embodiment, a coordinate position is set for each partial area outside the room, and it can be understood that the coordinate position may be a certain coordinate point or a coordinate area.
When the image pickup device receives the coordinate position, a pan-tilt head of the image pickup device can rotate, thereby aligning the shooting region of the image pickup device with the coordinate position.
It can be understood that each time the camera is aligned with the coordinate position, the process of locating the target point by the camera can be considered.
In this embodiment, acquiring the first person identification information and the first person gesture information according to one or more image information in each image information includes:
Acquiring a trained face neural network model;
face features in each image information are extracted respectively;
inputting the face characteristics into the trained face neural network model so as to acquire first person identification information;
acquiring a preset gesture navigation database, wherein the preset gesture navigation database comprises preset character identification information and gesture navigation reminding information corresponding to each preset character identification information;
acquiring gesture navigation reminding information corresponding to preset character identification information identical to the first character identification information;
the gesture navigation reminding information is sent to a corresponding reminding device, so that the reminding device carries out reminding operation according to the gesture navigation reminding information;
acquiring a trained gesture neural network model;
respectively extracting gesture features in each image information transmitted by the image pickup device capable of picking up the first coordinate information after reminding operation;
and inputting the gesture characteristics into the trained gesture neural network model so as to acquire first person gesture information.
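The steps above reduce to a two-stage recognition pipeline. The sketch below treats the trained models and devices as injected callables, since the patent does not fix their interfaces; all names are placeholders:

```python
def identify_and_read_gesture(images, face_model, gesture_model,
                              prompt_db, remind, capture_after_prompt):
    # Stage 1: classify face features -> first person identification information.
    person_id = face_model(images[0])
    # Look up this person's gesture navigation reminding information and
    # send it to the reminding device.
    remind(prompt_db[person_id])
    # Stage 2: after the reminding operation, classify the gesture in the
    # newly captured frames -> first person gesture information.
    gesture = gesture_model(capture_after_prompt()[0])
    return person_id, gesture
```

In practice `face_model` and `gesture_model` would wrap the trained neural network models, and `remind` would drive the voice broadcasting device.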
In this embodiment, the individual auxiliary navigation mode includes light guidance navigation, voice guidance navigation, and hybrid guidance navigation.
In this embodiment, the generating a first auxiliary personal auxiliary navigation method according to the personal auxiliary navigation mode and the destination information includes:
generating a first planning walking track according to the destination information and the first coordinate information, wherein the first planning walking track comprises a plurality of pieces of necessary coordinate information from the destination information to the first coordinate information;
acquiring a prestoring database of a walking auxiliary navigation system of vision impairment crowd of the pension, wherein the prestoring database of the walking auxiliary navigation system of the vision impairment crowd of the pension comprises auxiliary device identifiers and preset auxiliary coordinate information corresponding to the auxiliary device identifiers;
acquiring each auxiliary device identifier corresponding to preset auxiliary coordinate information which is the same as at least one of the plurality of necessary coordinate information;
and generating a first auxiliary personal auxiliary navigation method according to the auxiliary device identifier and the personal auxiliary navigation mode.
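Under the assumption that a route planner supplies the necessary coordinate information, the device-selection step above can be sketched as:

```python
def build_navigation_method(first_coord, destination, plan_route, device_db, mode):
    # plan_route stands in for the track planner (e.g. the BIM-based
    # planner described later); it returns the necessary coordinate
    # information of the first planned walking track.
    waypoints = plan_route(first_coord, destination)
    # Keep every auxiliary device whose preset auxiliary coordinate lies
    # on the planned track.
    device_ids = [dev for dev, coord in device_db.items() if coord in waypoints]
    return {"mode": mode, "devices": device_ids, "waypoints": waypoints}
```

The resulting structure is one possible representation of the "first auxiliary personal auxiliary navigation method".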
In this embodiment, after the first auxiliary personal auxiliary navigation method is sent to the walking auxiliary navigation system for vision impairment people in the nursing home, the walking auxiliary navigation method for vision impairment people in the nursing home further includes:
Judging whether new door opening information of the room is acquired in a preset time period, if so, then
Acquiring second coordinate information according to the new door opening information of the room;
judging whether the first coordinate information is the same as the second coordinate information, if not, then
Transmitting the second coordinate information to each image pickup device capable of picking up the second coordinate information in the image pickup devices of the nursing home;
acquiring new image information transmitted by an image pickup device capable of acquiring a new room entrance scene according to the second coordinate information;
acquiring second person identification information and second person gesture information according to one or more pieces of image information in each piece of new image information;
the auxiliary knowledge graph is obtained, and the auxiliary knowledge graph comprises a plurality of character nodes, gesture nodes corresponding to each character node, individual auxiliary navigation modes corresponding to each gesture node and destination information corresponding to each gesture node;
acquiring each gesture node of the person node corresponding to the second person identification information;
performing similarity measurement between the second person gesture information and each gesture node, so as to obtain the personal auxiliary navigation mode and destination information corresponding to the gesture node with the highest similarity to the second person gesture information among the gesture nodes of the corresponding person node, wherein the navigation mode obtained through the second person gesture information is called a second personality auxiliary navigation mode, and the destination information obtained through the second person gesture information is called second destination information;
Generating a second auxiliary personality auxiliary navigation method according to the second personality auxiliary navigation mode and the second destination information;
generating a second planning walking track according to the second destination information and the second coordinate information, wherein the second planning walking track comprises a plurality of pieces of necessary coordinate information from the destination information to the second coordinate information;
judging whether the second planning walking track and the first planning walking track have common necessary coordinate information, if so, then
Judging whether the personalized auxiliary navigation mode acquired through the first person gesture information conflicts with the second personalized auxiliary navigation mode, if not, then
And sending the second auxiliary individual auxiliary navigation method to a walking auxiliary navigation system of vision impairment people in the nursing home, so that the walking auxiliary navigation system of vision impairment people in the nursing home performs auxiliary navigation according to the second auxiliary individual auxiliary navigation method.
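The dispatch decision at the end of this flow can be sketched as below. Two assumptions are made: tracks that share no necessary coordinate information can always be dispatched (the patent only states the conflicting case), and how two personal navigation modes "conflict" is left to a caller-supplied flag:

```python
def can_dispatch_second(track1, track2, modes_conflict):
    # If the two planned walking tracks share no necessary coordinate
    # information, the two navigations cannot interfere with each other.
    shared = set(track1) & set(track2)
    if not shared:
        return True
    # Shared waypoints are acceptable only when the modes do not conflict.
    return not modes_conflict
```

When this returns `False`, the conflicting-mode handling described in the next embodiment applies.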
In this embodiment, after the personalized auxiliary navigation method is sent to the walking auxiliary navigation system for vision impairment people in the nursing home, the walking auxiliary navigation method for vision impairment people in the nursing home further includes:
Judging whether the personalized auxiliary navigation mode conflicts with the second personalized auxiliary navigation mode, if so, then
acquiring, through each image pickup device in the walking auxiliary navigation system of visually impaired people in the nursing home, the floor information of the pedestrian corresponding to the first person identification information and the floor information of the pedestrian corresponding to the second person identification information;
if the floor information of the pedestrian corresponding to the first person identification information and the floor information of the pedestrian corresponding to the second person identification information are not the same floor, then
performing auxiliary navigation on the floor where the pedestrian corresponding to the first person identification information is located in the walking auxiliary navigation system of visually impaired people in the nursing home according to the first auxiliary individual auxiliary navigation method, generating second person identification information navigation information, and informing the pedestrian corresponding to the first person identification information in a voice broadcasting mode;
performing auxiliary navigation on the floor where the pedestrian corresponding to the second person identification information is located in the walking auxiliary navigation system of visually impaired people in the nursing home according to the second personality auxiliary navigation method, generating first person identification information navigation information, and informing the pedestrian corresponding to the second person identification information in a voice broadcasting mode.
In this embodiment, after performing auxiliary navigation on the floor information of the pedestrian corresponding to the second person identification information in the walking auxiliary navigation system for the vision impairment crowd of the pension hospital according to the second personality auxiliary navigation method, the walking auxiliary navigation method for the vision impairment crowd of the pension hospital further includes:
and monitoring the positions of pedestrians corresponding to the first person identification information and the positions of pedestrians corresponding to the second person identification information in real time, and fixing the second personal auxiliary navigation method into voice guidance navigation when the positions of pedestrians corresponding to the first person identification information and the positions of pedestrians corresponding to the second person identification information are located on the same floor.
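A minimal sketch of this same-floor fallback rule, assuming floor numbers are tracked per pedestrian by the real-time monitoring:

```python
def resolve_second_mode(floor1, floor2, second_mode):
    # While the two pedestrians are on different floors, each keeps his
    # own mode; once they share a floor, the second personal auxiliary
    # navigation method is fixed to voice guidance navigation so that
    # the two methods cannot produce conflicting light tracks.
    return "voice" if floor1 == floor2 else second_mode
```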
The application also provides a walking auxiliary navigation system for the vision impairment crowd of the nursing home, which comprises a camera device, a voice broadcasting device, a night lamp and a master controller, wherein,
the number of the camera devices is multiple, each camera device is respectively arranged at each first position of the nursing home, the first position comprises each corridor and elevator of the nursing home, each camera device comprises a plurality of rotatable camera devices and a plurality of non-rotatable camera devices, each rotatable camera device is used for carrying out rotation shooting according to first coordinate information and second coordinate information, and each non-rotatable camera device is used for shooting images of a fixed position; wherein, each corridor is internally provided with at least one rotatable camera device and at least one non-rotatable camera device, and the non-rotatable camera devices in each corridor can cover the shooting area in the whole corridor;
The number of the voice broadcasting devices is multiple, each voice broadcasting device is respectively arranged at each second position of the nursing home, and the second positions comprise each corridor and elevator of the nursing home;
the number of the night lamps is multiple, each night lamp is respectively arranged at each third position of the nursing home, the third positions comprise each corridor, elevator, toilet doorway and functional room doorway of the nursing home, different light tracks can be formed when different night lamps are lit simultaneously, and different light tracks can be used for guiding different pedestrians;
the main controller is respectively connected with each night lamp, each camera device and each voice broadcasting device, and is used for executing the walking auxiliary navigation method for the vision impairment crowd in the nursing home.
The application is described in detail below by way of examples, which are not to be construed as limiting the application in any way.
Background introduction:
the building in which the elderly live is a multi-story building, for example 6 floors high, with 10 rooms and 1 toilet on each floor, and a medical room on the first floor (it will be understood that other rooms or facilities may be provided on each floor; they are not described herein because they are not within the scope of the present embodiment).
In this embodiment, a plurality of image capturing devices are disposed in each corridor, each configured to capture a partial view of the corridor, and together the image capturing devices can cover the entire corridor without dead angles.
In this embodiment, the stairway section between each corridor is also monitored by a corresponding camera.
In this embodiment, some of the image pickup devices are rotatable devices that can be turned to different orientations as needed, and combinations of these devices can cover every doorway; in the most extreme arrangement, one camera is mounted opposite each door.
In this embodiment, the remaining image pickup devices are non-rotatable image pickup devices, and these image pickup devices are generally disposed so as to be able to observe the position of the corridor condition to the maximum extent, and are mainly used for monitoring the condition in the corridor.
In this embodiment, a voice broadcasting device, such as a speaker, is disposed in the corridor. In one embodiment, a voice broadcasting device of the present application is disposed on the wall surface beside the doorway of each room; since every doorway has its own device, the volume of each voice broadcasting device can be kept as low as possible while the device beside the walking person can still perform voice navigation.
In this embodiment, a plurality of night lamps are arranged in the corridor; the night lamps can be signal lamps, which may be flashing signal lamps, signal lamps whose color changes over time, or normally-on signal lamps.
In this embodiment, the night lamps can be spread over the wall surface of the corridor, with one night lamp arranged every few centimeters along the length of the corridor, so that when lit, the whole row shows an obvious walking direction. It will be appreciated that the night lamps themselves may also take the form of a light string.
In this embodiment, the arrangement of the night lamps enables them, when lit, to clearly indicate a guided route. For example, assume room A is located at one end of the corridor and room B at the other end; then at least several night lamps are arranged on the wall between room A and room B. These night lamps can all be lit simultaneously, or lit one after another in a fixed direction from one end toward the other. Say there are three night lamps in total: a first, a second and a third night lamp. They may be lit at the same time, or sequentially: first the first night lamp is lit while the second and third remain off; after a preset interval, for example 1 second, the second night lamp is lit while the first and third are off; after another 1 second, the third night lamp is lit while the first and second are off. In this way, a fairly obvious light path is formed, letting the walker follow the light path.
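The simultaneous and sequential lighting patterns described above can be sketched as a step generator; a real controller would advance one step per preset interval (for example 1 second):

```python
def light_path_steps(lamp_ids, simultaneous=False):
    # Yields, for each step, the set of night lamps that should be lit.
    if simultaneous:
        yield set(lamp_ids)
    else:
        # One lamp at a time, moving from one end of the corridor
        # toward the other, forming a moving light path.
        for lamp in lamp_ids:
            yield {lamp}
```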
In one embodiment, night lamps can be arranged not only on the wall but also embedded in the ground, so that when the ground night lamps and the wall night lamps are lit at the same time, a user observing them gets a more three-dimensional sense of the space.
Referring to fig. 3, the circles in fig. 3 represent night lamps. In this embodiment, the night lamps disposed on the ground are located at the center of the ground: the ground may be regarded as a rectangle with two long sides and two short sides, and the night lamps are uniformly distributed from one end of the long side toward the other, for example every 10 cm or at other intervals, with each night lamp located at the center point in the short-side direction.
In this way, on the one hand, it can be used to guide the route and, on the other hand, it can be used to let the user know the position of the midline of the corridor in dim or no light environment.
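The centerline placement can be computed as below; the 10 cm spacing follows the example above, while the corridor dimensions in the usage are invented:

```python
def centerline_lamp_positions(length_cm, width_cm, spacing_cm=10):
    # Floor night lamps: evenly spaced along the long side of the
    # rectangular floor, each located at the center point of the
    # short-side direction (the corridor midline).
    midline = width_cm / 2
    return [(x, midline) for x in range(0, length_cm + 1, spacing_cm)]
```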
It can be understood that each night light, each camera device and each voice broadcasting device have own identification.
In this embodiment, the overall controller of the present application is respectively connected to each night light, each camera device, and each voice broadcasting device in each floor.
Through the master controller, each night lamp can be switched on or off individually, each image pickup device can be controlled and its images acquired, and each voice broadcasting device can be controlled.
In this embodiment, when auxiliary navigation is not needed, each voice broadcasting device, each night lamp, and the rotatable image pickup devices among the image pickup devices can be in a dormant state to save power, and are awakened only after the master controller obtains the room door opening information. It can be understood that only the devices required by the room door opening information are awakened: for example, if a user on the second floor wants to walk from his room to the toilet on the second floor, the devices on the third floor (such as the image pickup devices and voice broadcasting devices) do not need to be awakened.
The application can adopt light guidance navigation, voice guidance navigation and hybrid guidance navigation because not every visually impaired person is unable to perceive light; for example, visual impairment is generally classified into five levels:
namely low vision level 1 (0.1 ≤ best corrected vision of the better eye < 0.3), low vision level 2 (0.05 ≤ best corrected vision < 0.1), blind level 1 (0.02 ≤ best corrected vision < 0.05), blind level 2 (light perception ≤ best corrected vision < 0.02) and blind level 3 (no light perception in either eye).
From the above classification, it can be seen that some visually impaired people can still perceive light, that is, sense a light source, and can therefore be guided by light.
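The five-level classification can be encoded directly; the thresholds follow the text above (decimal acuity of the better eye), and the helper deciding light-guidance eligibility simply checks for light perception:

```python
def vision_category(acuity, light_perception=True):
    # Maps best corrected vision of the better eye to the five levels.
    if not light_perception:
        return "blind level 3"
    if 0.1 <= acuity < 0.3:
        return "low vision level 1"
    if 0.05 <= acuity < 0.1:
        return "low vision level 2"
    if 0.02 <= acuity < 0.05:
        return "blind level 1"
    if acuity < 0.02:
        return "blind level 2"  # light perception <= acuity < 0.02
    return "not visually impaired"

def light_guidance_possible(light_perception):
    # Anyone who can still sense a light source can be guided by light.
    return light_perception
```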
In practical use, the method of the application is as follows:
scene: at night, the light in the corridor is dim or basically no light.
Step 1: acquiring room door opening information, in this embodiment, if one of the two rooms (referred to as a room a) is opened, then acquiring a room identity, wherein in the present application, a door opening detector is provided for each room, the door opening detector comprises an acceleration sensor and a door opening controller, and the acceleration sensor can determine whether a door opening action is performed, and each door opening controller is pre-stored with a room identity of the room;
The method comprises the steps of acquiring first coordinate information according to room door opening information, and sending the first coordinate information to each camera device capable of shooting the first coordinate information in the camera devices of the nursing home, wherein the camera device is provided with coordinate data of a doorway of each room; for example, assuming that the room identifier of the room a is a, the first coordinate information corresponding to a is found to be (X1, Y1, Z1) through the query of the coordinate database, and the photographable coordinate table corresponding to the image capturing device with the preset image capturing device identifier B is found to be (X1, Y1, Z1) through the search of the preset image capturing device coordinate database, the image capturing device with the preset image capturing device identifier B is considered to be able to capture the image of the first coordinate information position, and then the first coordinate information is sent to the image capturing device capable of capturing the first coordinate information in each image capturing device in the nursing home, that is, the image capturing device with the preset image capturing device identifier B is sent to (X1, Y1, Z1).
It will be understood that the image pickup devices, rooms, night lamps and voice broadcasting devices of each floor are all separately registered; that is, when the master controller obtains the room identifier of room A, it knows that room A is located on the second floor, so when sending the first coordinate information to each image pickup device capable of capturing the first coordinate information, it will not send the first coordinate information to the image pickup devices of other floors.
In this way, each layer can be divided by the identifier, so that when the coordinate system is established, the coordinate system of each layer can be the same, for example, when the coordinate information of the room a located at the two layers is (X1, Y1, Z1), and the coordinate information of a certain room located at the three layers can also be (X1, Y1, Z1), the difference between them is that the controller can know the floor where the room is located through the identifier of the room, so that the coordinates of the room a located at the two layers can not be sent to the corresponding devices of the three layers.
It will be appreciated that in other embodiments, a different coordinate system may be provided for each floor separately, for distinguishing between different locations of different floors.
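Per-floor dispatch can be sketched as follows; the room identifier scheme (`"2-A"` meaning floor 2, room A) is invented for illustration:

```python
def dispatch_coordinate(room_id, coord, floor_devices):
    # The same (x, y, z) may exist on every floor, so the floor is
    # recovered from the room identifier before any coordinate is sent;
    # only devices registered to that floor receive it.
    floor = int(room_id.split("-")[0])
    return [(device, coord) for device in floor_devices[floor]]
```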
After the coordinate information is sent to an image pickup device capable of shooting the coordinate information, image information transmitted by the image pickup device capable of acquiring a scene at a door of a room according to the first coordinate information is acquired;
And acquiring the first person identification information and the first person gesture information according to one or more image information in each image information.
Specifically, acquiring the first person identification information and the first person gesture information according to one or more image information in each image information includes:
acquiring a trained face neural network model;
face features in each image information are extracted respectively;
inputting the face characteristics into the trained face neural network model so as to acquire first person identification information;
acquiring a preset gesture navigation database, wherein the preset gesture navigation database comprises preset character identification information and gesture navigation reminding information corresponding to each preset character identification information;
acquiring gesture navigation reminding information corresponding to preset character identification information identical to the first person identification information;
the gesture navigation reminding information is sent to a corresponding reminding device, so that the reminding device carries out reminding operation according to the gesture navigation reminding information;
acquiring a trained gesture neural network model;
respectively extracting gesture features in each image information transmitted by the image pickup device capable of picking up the first coordinate information after reminding operation;
And inputting the gesture characteristics into the trained gesture neural network model so as to acquire first person gesture information.
In this embodiment, face recognition may be implemented by a convolutional neural network comprising ResNet50 together with a Transformer model with a multi-head self-attention mechanism; the specific face recognition method belongs to conventional technical means in the art and is not described herein.
In this embodiment, gesture recognition may also use a convolutional neural network; for example, when the user raises a hand and extends one finger it conveys one meaning, and extending two fingers conveys another meaning.
In this embodiment, the present application can determine whether a visually impaired person comes out of the room by means of image recognition, and obtain whether the visually impaired person needs navigation and a specific destination of navigation by means of gesture operation performed by the visually impaired person.
Because a visually impaired person cannot clearly know whether he is facing the image pickup device, or may not know when it is appropriate to perform the gesture operation, the gesture navigation reminding information is sent to the corresponding reminding device, so that the reminding device performs a reminding operation according to the gesture navigation reminding information.
In this embodiment, the reminding device may be a voice broadcast device.
For example, a visually impaired person (say, Zhang San) comes out of room A. The image captured by the image pickup device recognizes that Zhang San is located at the doorway of room A (i.e., near the first coordinate information and within the device's view), and the voice broadcasting device then broadcasts the gesture navigation reminding information (for example, "please raise your hand and give a gesture instruction"). It can be understood that the specific wording of the gesture navigation reminding information is preset, so that everyone using the auxiliary navigation method knows how to raise a hand and perform the gesture operation.
After the gesture and the face image are acquired, acquiring an auxiliary knowledge graph, wherein the auxiliary knowledge graph comprises a plurality of character nodes, gesture nodes corresponding to each character node, a personalized auxiliary navigation mode corresponding to each gesture node and destination information corresponding to each gesture node;
acquiring each gesture node of the person node corresponding to the first person identification information; for example, when one person node represents Zhang San, the person node corresponds to the first person identification information. In this embodiment the first person identification information is obtained from the face neural network model, which outputs a classification identifier, and each person node is also assigned a preset classification identifier; when the output classification identifier is the same as a preset classification identifier, the person node is considered to correspond to that preset classification identifier;
Performing similarity measurement on the first person gesture information and each gesture node, so as to obtain a personalized auxiliary navigation mode and destination information corresponding to a gesture node with the highest similarity with the first person gesture information in each gesture node of the corresponding person nodes;
in this way, the first person identification information (i.e., zhang San) and the first person gesture information (e.g., zhang San extending a finger) of the person may be obtained.
For example, the personalized auxiliary navigation mode set by Zhang san is light guide navigation, and a finger is extended to represent a medical room to go to one floor.
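The knowledge-graph lookup and similarity measurement can be sketched as follows; the gesture encoding and the similarity function are placeholder assumptions (here a gesture is just a finger count):

```python
def lookup_navigation(person_id, gesture, graph, similarity):
    # graph: person node -> list of (gesture node, personal auxiliary
    # navigation mode, destination information).
    nodes = graph[person_id]
    # Pick the gesture node with the highest similarity to the
    # observed person gesture information.
    best = max(nodes, key=lambda node: similarity(gesture, node[0]))
    return best[1], best[2]
```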
Generating a first auxiliary personal auxiliary navigation method according to the personal auxiliary navigation mode and the destination information; specifically, a first planning walking track is generated according to destination information and first coordinate information, and the first planning walking track comprises a plurality of pieces of necessary coordinate information from the destination information to the first coordinate information;
acquiring a prestoring database of a walking auxiliary navigation system of vision impairment crowd of the pension, wherein the prestoring database of the walking auxiliary navigation system of the vision impairment crowd of the pension comprises auxiliary device identifiers and preset auxiliary coordinate information corresponding to the auxiliary device identifiers;
Acquiring each auxiliary device identifier corresponding to preset auxiliary coordinate information which is the same as at least one of the plurality of necessary coordinate information;
and generating a personalized auxiliary navigation method according to the auxiliary device identifier and the personalized auxiliary navigation mode.
For example, one implementation is:
performing bidirectional space comparison on a BIM model and a point cloud model of a building, determining an effective movable range and a path connection network in the effective movable range, and generating an effective navigation image;
the starting point and the destination of the pedestrian are determined on the BIM model according to the task data, the first planning walking track of the pedestrian is determined based on the effective navigation image, and a plurality of coordinate points on the first planning walking track are obtained as the necessary coordinate information.
The auxiliary device identifiers which can be used for the coordinate points are obtained through the pre-stored database of the walking auxiliary navigation system of the vision impaired crowd of the nursing home, and it can be understood that the obtained auxiliary device identifiers comprise a voice broadcasting device and a night lamp.
At this time, the first auxiliary personal auxiliary navigation method is generated according to the auxiliary device identifiers and the personal auxiliary navigation mode; in this embodiment, since the personal auxiliary navigation mode selected for Zhang San is light guidance navigation, only the auxiliary device identifiers belonging to night lamps among the acquired auxiliary device identifiers are used.
And controlling the night lamps to be lightened according to the acquired auxiliary device identifiers, so that an effective navigation guidance route can be formed, and navigation can be performed for Zhang III.
It can be understood that when the personalized auxiliary navigation mode is light guide navigation, the walking auxiliary navigation method for the vision impairment crowd in the nursing home further comprises the following steps:
acquiring the light brightness of a plurality of preset positions of the current floor;
judging whether the light brightness of each preset position of the current floor exceeds a preset light brightness threshold value, if so, then
Generating navigation voice and sending the navigation voice to a voice broadcasting device;
acquiring image information transmitted by a camera device capable of acquiring a scene at a door of a room according to the first coordinate information within a preset time after the navigation voice is transmitted to the voice broadcasting device;
performing image recognition on each acquired image so as to acquire new first person gesture information, judging whether the first person gesture information is different from the first person gesture information acquired previously, and if so, then
And carrying out similarity measurement with each gesture node according to the first person gesture information.
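The brightness check in steps above (measure the preset positions, compare against the threshold, fall back to voice if light cues would be ineffective) can be sketched as follows. The function name, mode strings, and the idea of returning the mode to actually use are assumptions, not the application's literal interface:

```python
def choose_guidance(brightness_by_position, threshold, preferred="light"):
    """If any preset position on the current floor is already brighter than
    the preset threshold, light cues would be hard to notice, so fall back
    to voice guidance. Returns the guidance mode to actually use."""
    if preferred == "light" and any(lux > threshold
                                    for lux in brightness_by_position.values()):
        return "voice"
    return preferred
```

In the embodiment's terms, a `"voice"` result corresponds to generating navigation voice and sending it to the voice broadcasting device instead of lighting the lamps.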
In this embodiment, light-only guidance is best suited to dark scenes, such as at night when the lights are off. In such a scene, light guidance on the one hand provides an effective guiding effect, and on the other hand reduces voice broadcasts, preventing disturbance to other people.
However, in some cases a light may have been left on at night, or the corridor lighting may already be sufficient, making light guidance unsuitable; the above method detects this and switches to another navigation mode so that the user can still be navigated.
It will be appreciated that in another embodiment, instead of interrupting navigation, the system may directly and automatically switch to voice-guided navigation once the light is determined to be bright.
It will be appreciated that in this embodiment, when floor switching is involved, the destination on the starting floor may be planned to the elevator car during track planning, with the user entering other floors via the elevator.
In special cases, two visually impaired people may use the method of the application simultaneously. For example, Zhang San, on the third floor, comes out of his room at 9:50 and wants to go to the first-floor medical room. At 9:55, Li Si, also on the third floor, comes out of his room and wants to go to the third-floor bathroom. At this moment, the application can navigate by the following method:
judging whether new door opening information of the room is acquired in a preset time period, if so, then
Acquiring second coordinate information according to the new door opening information of the room;
judging whether the first coordinate information is the same as the second coordinate information, if not, then
Transmitting the second coordinate information to each image pickup device capable of picking up the second coordinate information in the image pickup devices of the nursing home;
acquiring new image information transmitted by an image pickup device capable of acquiring a new room entrance scene according to the second coordinate information;
acquiring second person identification information and second person gesture information according to one or more pieces of image information in each piece of new image information;
the auxiliary knowledge graph is obtained, and the auxiliary knowledge graph comprises a plurality of character nodes, gesture nodes corresponding to each character node, a personalized auxiliary navigation method corresponding to each gesture node and destination information corresponding to each gesture node;
Acquiring each gesture node of the character nodes corresponding to the second character identification information;
performing similarity measurement between the second person gesture information and each gesture node, so as to obtain the personalized auxiliary navigation mode and destination information corresponding to the gesture node, among the gesture nodes of the corresponding person node, with the highest similarity to the second person gesture information, wherein the navigation mode obtained through the second person gesture information is called the second personalized auxiliary navigation mode, and the destination information obtained through the second person gesture information is called the second destination information;
generating a second assisted personalized auxiliary navigation method according to the second personalized auxiliary navigation mode and the second destination information;
generating a second planning walking track according to the second destination information and the second coordinate information, wherein the second planning walking track comprises a plurality of pieces of necessary coordinate information from the destination information to the second coordinate information;
judging whether the second planning walking track and the first planning walking track have common necessary coordinate information, if so, then
Judging whether the personalized auxiliary navigation mode conflicts with the second personalized auxiliary navigation mode, if not, then
And sending the second auxiliary individual auxiliary navigation method to a walking auxiliary navigation system of vision impairment people in the nursing home, so that the walking auxiliary navigation system of vision impairment people in the nursing home performs auxiliary navigation according to the second auxiliary individual auxiliary navigation method.
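The decision steps above can be condensed into a sketch: navigate the second user immediately if the two planned tracks share no necessary coordinates, or if they share coordinates but the modes do not conflict; otherwise fall through to the per-floor handling described later in the embodiment. The conflict rule (two light-based modes on shared coordinates mislead each other) follows the discussion in this embodiment; the function names and mode strings are assumptions:

```python
USES_LIGHT = {"light", "mixed"}  # modes that light night lamps along the track

def tracks_share_coords(track1, track2):
    # Any necessary coordinate information common to both planned walking tracks?
    return bool(set(track1) & set(track2))

def modes_conflict(mode1, mode2):
    # Hypothetical rule: two light-based cues on shared coordinates mislead.
    return mode1 in USES_LIGHT and mode2 in USES_LIGHT

def second_user_action(track1, track2, mode1, mode2):
    if not tracks_share_coords(track1, track2):
        return "navigate"        # disjoint tracks never interfere
    if not modes_conflict(mode1, mode2):
        return "navigate"        # shared track but distinct cue channels
    return "resolve_by_floor"    # conflict: apply the per-floor handling
```

In the Zhang San / Li Si example, voice guidance for one and light guidance for the other yields `"navigate"` even on overlapping tracks, since each follows a different cue channel.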
For example, if Zhang San's planned walking track is completely different from Li Si's planned walking track, no interference can occur, regardless of whether the two use the same personalized auxiliary navigation mode (even if both are navigated by light guidance); the walking of each does not affect the other. A visually impaired person cannot see specific objects but knows the approximate direction of the intended destination, for example that the toilet lies to the left; in that case he will not be misled by light guidance appearing on his right, because a person using the method of the application naturally knows that other people may be using it at the same time.
In one embodiment, Li Si can additionally be notified by voice broadcast when he comes out, for example with the navigation voice: "Another user ahead of you is being navigated; your route is different, so there is no need to worry."
If the planned walking tracks of the two overlap, misguidance may occur if both adopt the light-guided navigation mode; therefore, at this moment, it is judged whether the personalized auxiliary navigation mode conflicts with the second personalized auxiliary navigation mode, and if not, then
And sending the second auxiliary individual auxiliary navigation method to a walking auxiliary navigation system of vision impairment people in the nursing home, so that the walking auxiliary navigation system of vision impairment people in the nursing home performs auxiliary navigation according to the second auxiliary individual auxiliary navigation method.
Specifically, if Zhang San uses voice-guided navigation and Li Si uses light-guided navigation, each can walk according to his own guidance.
In addition, in this case, whether Zhang San and Li Si are on the same floor can be judged through real-time monitoring by the camera devices; if so, a voice broadcast can be made, for example: "Someone else is walking on your floor." The two can then communicate by voice to prevent a collision.
It can be appreciated that if the personalized auxiliary navigation mode is determined to conflict with the second personalized auxiliary navigation mode, then
Acquiring, through each camera device in the walking auxiliary navigation system for vision impaired people in the nursing home, the floor information of the pedestrian corresponding to the first person identification information and the floor information of the pedestrian corresponding to the second person identification information;
if the floor information of the pedestrian corresponding to the first person identification information and the floor information of the pedestrian corresponding to the second person identification information are not the same floor, then
Performing auxiliary navigation, according to the first assisted personalized auxiliary navigation method, on the floor of the pedestrian corresponding to the first person identification information in the walking auxiliary navigation system for vision impaired people in the nursing home, generating navigation information about the pedestrian corresponding to the second person identification information, and informing the pedestrian corresponding to the first person identification information by voice broadcast;
and performing auxiliary navigation, according to the second assisted personalized auxiliary navigation method, on the floor of the pedestrian corresponding to the second person identification information in the walking auxiliary navigation system for vision impaired people in the nursing home, generating navigation information about the pedestrian corresponding to the first person identification information, and informing the pedestrian corresponding to the second person identification information by voice broadcast.
For example, if after a conflict is determined the two are not actually on the same floor, each floor may be navigated separately: if Zhang San is on the second floor and Li Si is on the third floor, then the devices on the second floor navigate for Zhang San and the devices on the third floor navigate for Li Si.
In this embodiment, after performing auxiliary navigation on the floor of the pedestrian corresponding to the second person identification information in the walking auxiliary navigation system for vision impaired people in the nursing home according to the second personalized auxiliary navigation method, the walking auxiliary navigation method for vision impaired people in the nursing home further includes:
and monitoring the positions of pedestrians corresponding to the first person identification information and the positions of pedestrians corresponding to the second person identification information in real time, and fixing the second personal auxiliary navigation method into voice guidance navigation when the positions of pedestrians corresponding to the first person identification information and the positions of pedestrians corresponding to the second person identification information are located on the same floor.
In particular, as soon as movement brings both pedestrians onto the same floor, the second personalized auxiliary navigation method is immediately fixed to voice-guided navigation.
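This monitoring rule is a single comparison executed each time the positions are refreshed; a minimal sketch with assumed names:

```python
def fix_second_mode(floor_first, floor_second, second_mode):
    """Real-time monitor step: once both pedestrians are on the same floor,
    the second personalized mode is fixed to voice-guided navigation;
    otherwise the second user's current mode is kept."""
    return "voice" if floor_first == floor_second else second_mode
```

Fixing to voice on the shared floor avoids two sets of light cues running simultaneously in the same corridor.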
It can be understood that the method of the present application can also apply this logic when several people come out at once, and can limit the number of people navigated, for example to at most two at a time. If a third person is then detected coming out of a room and making a gesture, a voice broadcast can ask that person to return to the room and wait, for example: "Hello, more people are currently being assisted; please return to your room first and come out again in 10 minutes" (the waiting time can be freely set).
In addition, each camera device is connected to a central control screen and can transmit a real-time picture to it, so that a dedicated person can monitor the corridors of every floor of the nursing home and handle any problem that occurs during navigation in real time.
In addition, each camera device can also detect falls, and once a fall is found, an alarm can be raised.
Compared with the prior art, the application has the following advantages:
1. All facilities of the present application are located within the building, and no separate navigation device is required for each visually impaired person.
2. Because all facilities are arranged in the building, a visually impaired person who needs to walk does not have to carry additional auxiliary equipment, which removes the burden of holding or wearing such equipment when no companion is present; at night in particular, a visually impaired person who wants to go to the toilet may forget to pick up or put on the equipment, or simply find holding or wearing it troublesome.
3. The application realizes intelligent navigation assistance for the whole building; once installed, no extra equipment is needed no matter how the residents change.
4. The navigation of the application can be configured individually for each person, and each person can be assisted in the personalized auxiliary navigation mode of his or her preference. For example, some visually impaired people, although unable to see light, are still afraid of the dark; they can select mixed guidance navigation (comprising light guidance and voice guidance), so that even without seeing the light they are comforted by its presence.
5. The application can set a plurality of navigation destinations for each user, forming a plurality of navigation routes, whereas existing head-mounted or handheld devices only realize obstacle avoidance while walking and cannot provide navigation to multiple destinations at any time.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and will not be repeated here.
The application also provides an electronic device (namely the general controller) which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the walking auxiliary navigation method for the vision impairment crowd of the nursing home when executing the computer program.
The application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program can realize the walking auxiliary navigation method for the vision impairment crowd in the nursing home when being executed by a processor.
Fig. 2 is an exemplary block diagram of an electronic device capable of implementing a walking aid navigation method for vision impaired people in a nursing home according to an embodiment of the present application.
As shown in fig. 2, the electronic device includes an input device 501, an input interface 502, a central processor 503, a memory 504, an output interface 505, and an output device 506. The input interface 502, the central processor 503, the memory 504, and the output interface 505 are connected to each other through a bus 507, and the input device 501 and the output device 506 are connected to the bus 507 through the input interface 502 and the output interface 505 respectively, and thereby to the other components of the electronic device. Specifically, the input device 501 receives input information from the outside and transmits it to the central processor 503 through the input interface 502; the central processor 503 processes the input information based on computer-executable instructions stored in the memory 504 to generate output information, stores the output information temporarily or permanently in the memory 504, and then transmits it to the output device 506 through the output interface 505; the output device 506 outputs the information outside the electronic device for use by the user.
That is, the electronic device shown in fig. 2 may also be implemented to include: a memory storing computer-executable instructions; and one or more processors that, when executing the computer-executable instructions, implement the method of walking aid navigation for vision impaired people in the nursing home described in connection with fig. 1.
In one embodiment, the electronic device shown in FIG. 2 may be implemented to include: a memory 504 configured to store executable program code; the one or more processors 503 are configured to execute executable program code stored in the memory 504 to perform the walking aid navigation method for vision impaired people in the nursing home in the above embodiment.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps. A plurality of units, modules or means recited in the apparatus claims may also be implemented by a single unit or device through software or hardware.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The processor referred to in this embodiment may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store computer programs and/or modules, and the processor realizes the various functions of the apparatus/terminal device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random-access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
In this embodiment, if the integrated modules/units of the apparatus/terminal device are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the method of the above embodiment through a computer program instructing the related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction. While the application has been described in terms of preferred embodiments, it is not intended to limit the application thereto; any person skilled in the art can make variations and modifications without departing from the spirit and scope of the present application, and therefore the scope of the application is to be determined from the appended claims.
While the application has been described in detail in the foregoing general description and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the application and are intended to be within the scope of the application as claimed.

Claims (8)

1. A walking auxiliary navigation method for vision impaired people in a nursing home, characterized by comprising the following steps:
Acquiring room door opening information;
acquiring first coordinate information according to room door opening information, and transmitting the first coordinate information to each camera device in the nursing home, wherein the camera devices can shoot the first coordinate information;
acquiring image information transmitted by an image pickup device capable of acquiring a scene at a door of a room according to the first coordinate information;
acquiring first person identification information and first person gesture information according to one or more image information in each image information;
acquiring an auxiliary knowledge graph, wherein the auxiliary knowledge graph comprises a plurality of character nodes, gesture nodes corresponding to each character node, a personalized auxiliary navigation mode corresponding to each gesture node and destination information corresponding to each gesture node;
acquiring each gesture node of the character node corresponding to the first person identification information;
performing similarity measurement on the first person gesture information and each gesture node, so as to obtain a personalized auxiliary navigation mode and destination information corresponding to a gesture node with the highest similarity with the first person gesture information in each gesture node of the corresponding person nodes;
generating a first auxiliary personal auxiliary navigation method according to the personal auxiliary navigation mode and the destination information;
And sending the first auxiliary individual auxiliary navigation method to a walking auxiliary navigation system of vision impairment people in the nursing home, so that the walking auxiliary navigation system of vision impairment people in the nursing home performs auxiliary navigation according to the first auxiliary individual auxiliary navigation method.
2. The walking auxiliary navigation method for vision impaired people in a nursing home according to claim 1, wherein the acquiring room door opening information comprises:
and acquiring a door opening detection signal transmitted by a door opening detector and identification information of the door opening detector.
3. The walking auxiliary navigation method for vision impaired people in a nursing home according to claim 2, wherein the acquiring first coordinate information according to room door opening information and transmitting the first coordinate information to each camera device in the nursing home capable of capturing the first coordinate information comprises:
acquiring a coordinate database, wherein the coordinate database comprises at least one preset door opening detector identifier and first coordinate information corresponding to each preset door opening detector identifier;
acquiring first coordinate information corresponding to a preset door opening detector identifier which is identical to the identifier information of the door opening detector;
Acquiring a preset camera device coordinate database, wherein the preset camera device coordinate database comprises at least one preset camera device identifier and a photographable coordinate table corresponding to each preset camera device identifier, and each photographable coordinate table comprises at least one preset first coordinate information;
acquiring preset camera device identifiers corresponding to all photographable coordinate tables with preset first coordinate information identical to the first coordinate information;
and sending the first coordinate information to each camera corresponding to each preset camera identifier according to the acquired preset camera identifiers.
4. The walking auxiliary navigation method for vision impaired people in a nursing home according to claim 3, wherein the acquiring first person identification information and first person gesture information according to one or more of the image information comprises:
acquiring a trained face neural network model;
face features in each image information are extracted respectively;
inputting the face characteristics into the trained face neural network model so as to acquire first person identification information;
acquiring a preset gesture navigation database, wherein the preset gesture navigation database comprises preset character identification information and gesture navigation reminding information corresponding to each preset character identification information;
Acquiring gesture navigation reminding information corresponding to preset character identification information identical to the first character identification information;
the gesture navigation reminding information is sent to a corresponding reminding device, so that the reminding device carries out reminding operation according to the gesture navigation reminding information;
acquiring a trained gesture neural network model;
respectively extracting gesture features in each image information transmitted by the image pickup device capable of picking up the first coordinate information after reminding operation;
and inputting the gesture characteristics into the trained gesture neural network model so as to acquire first person gesture information.
5. The walking auxiliary navigation method for vision impaired people in a nursing home according to claim 4, wherein the personalized auxiliary navigation modes comprise light-guided navigation, voice-guided navigation and mixed guidance navigation.
6. The walking auxiliary navigation method for vision impaired people in a nursing home according to claim 5, wherein generating a first assisted personalized auxiliary navigation method according to the personalized auxiliary navigation mode and destination information comprises:
generating a first planning walking track according to the destination information and the first coordinate information, wherein the first planning walking track comprises a plurality of pieces of necessary coordinate information from the destination information to the first coordinate information;
Acquiring a pre-stored database of the walking auxiliary navigation system for vision impaired people in the nursing home, wherein the pre-stored database comprises auxiliary device identifiers and preset auxiliary coordinate information corresponding to each auxiliary device identifier;
acquiring each auxiliary device identifier corresponding to preset auxiliary coordinate information which is the same as at least one of the plurality of necessary coordinate information;
and generating a first auxiliary personal auxiliary navigation method according to the auxiliary device identifier and the personal auxiliary navigation mode.
7. The walking auxiliary navigation method for visually impaired people in a nursing home according to claim 6, wherein after the first auxiliary personalized navigation method is sent to the walking auxiliary navigation system for visually impaired people in the nursing home, the method further comprises:
judging whether new room door opening information is acquired within a preset time period, and if so,
acquiring second coordinate information according to the new room door opening information;
judging whether the first coordinate information is the same as the second coordinate information, and if not,
transmitting the second coordinate information to each image pickup device, among the image pickup devices of the nursing home, capable of capturing the second coordinate information;
acquiring new image information transmitted by the image pickup devices capable of capturing the new room entrance scene according to the second coordinate information;
acquiring second person identification information and second person gesture information according to one or more pieces of image information in the new image information;
acquiring the auxiliary knowledge graph, wherein the auxiliary knowledge graph comprises a plurality of person nodes, gesture nodes corresponding to each person node, and a personalized auxiliary navigation mode and destination information corresponding to each gesture node;
acquiring each gesture node of the person node corresponding to the second person identification information;
performing similarity measurement between the second person gesture information and each gesture node, so as to acquire the personalized auxiliary navigation mode and destination information corresponding to the gesture node, among the gesture nodes of the corresponding person node, with the highest similarity to the second person gesture information, wherein the personalized auxiliary navigation mode acquired through the second person gesture information is called a second personalized auxiliary navigation mode, and the destination information acquired through the second person gesture information is called second destination information;
generating a second auxiliary personalized navigation method according to the second personalized auxiliary navigation mode and the second destination information;
generating a second planned walking track according to the second destination information and the second coordinate information, wherein the second planned walking track comprises a plurality of pieces of necessary coordinate information between the second coordinate information and the second destination information;
judging whether the second planned walking track and the first planned walking track have common necessary coordinate information, and if so,
judging whether the personalized auxiliary navigation mode acquired through the first person gesture information conflicts with the second personalized auxiliary navigation mode, and if not,
sending the second auxiliary personalized navigation method to the walking auxiliary navigation system for visually impaired people in the nursing home, so that the system performs auxiliary navigation according to the second auxiliary personalized navigation method.
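The final two judgments of claim 7 (shared necessary coordinates, then navigation-mode conflict) can be sketched as follows. This is an illustrative Python sketch; the conflict rule shown (two simultaneous voice-guided streams conflict, light and voice can coexist) is an assumed example, since the patent does not define the conflict criterion here.

```python
def tracks_share_coordinates(track_a, track_b):
    """True if the two planned walking tracks have any common necessary coordinate."""
    return bool(set(track_a) & set(track_b))

def modes_conflict(mode_a, mode_b):
    # Illustrative rule only: two voice-guided streams on a shared segment would
    # talk over each other, whereas light guidance can coexist with voice guidance.
    return mode_a == mode_b == "voice"

first_track = [(0, 0), (1, 0), (2, 0)]
second_track = [(5, 5), (2, 0), (2, 1)]  # shares (2, 0) with the first track

if tracks_share_coordinates(first_track, second_track):
    # Only when the tracks overlap does the mode-conflict check matter.
    print("conflict:", modes_conflict("light", "voice"))  # conflict: False
```

When the tracks overlap and the modes do not conflict, the second auxiliary personalized navigation method can be dispatched; otherwise the system would need to resolve the conflict first.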
8. A walking auxiliary navigation system for visually impaired people in a nursing home, characterized in that the system comprises:
a plurality of image pickup devices respectively arranged at first positions of the nursing home, wherein the first positions comprise every corridor and elevator of the nursing home; the image pickup devices comprise a plurality of rotatable image pickup devices and a plurality of non-rotatable image pickup devices, the rotatable image pickup devices being used for rotating to shoot according to the first coordinate information and the second coordinate information, and the non-rotatable image pickup devices being used for shooting images of fixed positions; each corridor is provided with at least one rotatable image pickup device and at least one non-rotatable image pickup device, and the non-rotatable image pickup devices in each corridor can together cover the whole shooting area of that corridor;
a plurality of voice broadcasting devices respectively arranged at second positions of the nursing home, wherein the second positions comprise every corridor and elevator of the nursing home;
a plurality of night lamps respectively arranged at third positions of the nursing home, wherein the third positions comprise every corridor, elevator, toilet door and functional-room door of the nursing home; different night lamps, when lighted simultaneously, can form different light tracks, and the different light tracks can be used for guiding along different walking routes;
and a master controller respectively connected with each night lamp, each image pickup device and each voice broadcasting device, and used for executing the walking auxiliary navigation method for visually impaired people in a nursing home according to any one of claims 1 to 7.
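The light-track element of claim 8 can be sketched as follows: given a planned walking track, the master controller selects and orders the night lamps whose positions lie along it. This is an illustrative Python sketch; the lamp identifiers and coordinates are assumptions.

```python
def build_light_track(planned_track, lamp_positions):
    """Order the night lamps along the planned walking track to form a light track.

    planned_track: list of (x, y) necessary coordinates from start to destination.
    lamp_positions: mapping of night-lamp identifier -> installed coordinate.
    """
    lamps_by_coord = {coord: lamp for lamp, coord in lamp_positions.items()}
    # Walk the track in order and light each lamp encountered along the way.
    return [lamps_by_coord[c] for c in planned_track if c in lamps_by_coord]

# Hypothetical lamp layout along a corridor.
lamps = {"L1": (0, 0), "L2": (2, 0), "L3": (4, 0)}
route = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
print(build_light_track(route, lamps))  # ['L1', 'L2', 'L3']
```

Lighting the returned lamps simultaneously yields a distinct light track per route, which is how different light tracks can guide along different walking routes.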
CN202310966256.7A 2023-08-02 2023-08-02 Walking auxiliary navigation method and system for vision disturbance people in nursing home Active CN116698045B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311299138.1A CN117357380A (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method
CN202310966256.7A CN116698045B (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method and system for vision disturbance people in nursing home

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310966256.7A CN116698045B (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method and system for vision disturbance people in nursing home

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311299138.1A Division CN117357380A (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method

Publications (2)

Publication Number Publication Date
CN116698045A CN116698045A (en) 2023-09-05
CN116698045B true CN116698045B (en) 2023-11-10

Family

ID=87826073

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310966256.7A Active CN116698045B (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method and system for vision disturbance people in nursing home
CN202311299138.1A Pending CN117357380A (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311299138.1A Pending CN117357380A (en) 2023-08-02 2023-08-02 Walking auxiliary navigation method

Country Status (1)

Country Link
CN (2) CN116698045B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087016A (en) * 2017-03-06 2017-08-22 清华大学 The air navigation aid and system of mobile object in building based on video surveillance network
CN108844545A (en) * 2018-06-29 2018-11-20 合肥信亚达智能科技有限公司 A kind of auxiliary traveling method and system based on image recognition
CN109029466A (en) * 2018-10-23 2018-12-18 百度在线网络技术(北京)有限公司 indoor navigation method and device
CN109938973A (en) * 2019-03-29 2019-06-28 北京易达图灵科技有限公司 A kind of visually impaired person's air navigation aid and system
CN115525140A (en) * 2021-06-25 2022-12-27 北京小米移动软件有限公司 Gesture recognition method, gesture recognition apparatus, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI418764B (en) * 2008-12-19 2013-12-11 Wistron Corp Fingerprint-based navigation method, method for setting up a link between a fingerprint and a navigation destination, and navigation device
US20180185232A1 (en) * 2015-06-19 2018-07-05 Ashkon Namdar Wearable navigation system for blind or visually impaired persons with wireless assistance
US11899448B2 (en) * 2019-02-21 2024-02-13 GM Global Technology Operations LLC Autonomous vehicle that is configured to identify a travel characteristic based upon a gesture

Also Published As

Publication number Publication date
CN117357380A (en) 2024-01-09
CN116698045A (en) 2023-09-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant