WO2020114214A1 - Blind guidance method and device, storage medium, and electronic device - Google Patents

Blind guidance method and device, storage medium, and electronic device

Info

Publication number
WO2020114214A1
WO2020114214A1 · PCT/CN2019/118110 · CN2019118110W
Authority
WO
WIPO (PCT)
Prior art keywords
action
visually impaired
voice
impaired person
guidance information
Prior art date
Application number
PCT/CN2019/118110
Other languages
English (en)
French (fr)
Inventor
刘兆祥
林义闽
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Publication of WO2020114214A1 publication Critical patent/WO2020114214A1/zh

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers

Definitions

  • the present disclosure relates to the field of navigation technology, and in particular, to a method and device for blind guidance, storage media, and electronic equipment.
  • VIP: Visually Impaired People
  • In the related art, Electronic Travel Aids (ETAs) have been proposed. Most of the work on these aids, which provide navigation for VIPs, has focused on outdoor navigation, for example navigation based on GPS positioning and maps. Schemes that locate the VIP using Wi-Fi, RFID, Bluetooth, or two-dimensional codes have also been proposed for blind guidance. However, positioning-based navigation is prone to failure when the network connection fails. In addition, there are schemes that use computer vision to provide navigation information, but navigation that relies on visual information fails when the current scene cannot be matched to a preset scene.
  • The purpose of the present disclosure is to provide a blind guidance method and device, a storage medium, and an electronic device, to solve the problem that related blind guidance technologies may fail.
  • The present disclosure provides a blind guidance method, the method including:
  • a prompt message for prompting the visually impaired person to seek voice assistance from the target object is issued;
  • the action guidance information includes action distance information and steering direction information.
  • the acquiring voice messages and extracting the action guidance information from the acquired voice messages include:
  • issuing a prompt message for prompting the visually impaired person to seek voice assistance from the target object includes:
  • the prompt message is generated according to the relative position.
  • the extracting the action guidance information from the obtained voice message includes:
  • the navigation according to the action guidance information includes:
  • a navigation prompt is issued according to the deviation between the actual action path and the target action path.
  • the acquiring the actual action path of the visually impaired person includes:
  • the actual action path is determined according to the relative position information.
  • the method further includes:
  • the navigation according to the action guidance information includes:
  • the method further includes:
  • a prompt message for prompting the visually impaired person to ask the target object for voice assistance is issued again.
  • the preset event includes one or more of the following events:
  • the detected GPS signal strength is less than the preset signal strength threshold
  • the matching degree between the acquired current environment image and the pre-stored environment feature image is lower than the preset matching degree threshold
  • the present disclosure provides a blind guide device, the device comprising:
  • the prompting module is used for issuing, in response to a preset event, a prompt message for prompting the visually impaired person to seek voice assistance from the target object;
  • the acquisition module is used to acquire voice messages and extract action guidance information from the acquired voice messages;
  • the navigation module is used for navigation according to the action guidance information.
  • the action guidance information includes action distance information and steering direction information.
  • the acquisition module is configured to monitor the voice interaction between the visually impaired person and the target object, and extract the action guidance information from the interactive voice message; and/or,
  • It is used to receive a voice instruction message issued by the visually impaired person, and extract the action guidance information from the voice instruction message.
  • the prompt module is used to:
  • the prompt message is generated according to the relative position.
  • the acquisition module is used to:
  • the navigation module is used to:
  • a navigation prompt is issued according to the deviation between the actual action path and the target action path.
  • the navigation module is used to:
  • the actual action path is determined according to the relative position information.
  • the navigation module is also used to:
  • the device further includes:
  • a prompt message for prompting the visually impaired person to ask the target object for voice assistance is issued again.
  • the preset event includes one or more of the following events:
  • the detected GPS signal strength is less than the preset signal strength threshold
  • the matching degree between the acquired current environment image and the pre-stored environment feature image is lower than the preset matching degree threshold
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of any one of the blind guidance methods.
  • an electronic device, including: a memory on which a computer program is stored; and
  • a processor configured to execute the computer program in the memory to implement the steps of any one of the blind guidance methods.
  • Fig. 1 is a flowchart of a blind guidance method according to an exemplary embodiment.
  • Fig. 2 is a flowchart of another blind guidance method according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram of a principle according to an exemplary embodiment.
  • Fig. 4 is a block diagram of a blind guide device according to an exemplary embodiment.
  • Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment.
  • The blind guidance system can be applied to blind-guidance wearable devices, such as guide helmets and guide suits; it can also be used in other types of blind-guidance equipment, such as guide canes and self-propelled guide devices.
  • In a normal guidance state, these devices can use GPS, Wi-Fi, Bluetooth, NFC, and other technologies to obtain the visually impaired person's current location, compare it against preset map information, and guide the visually impaired person's movement according to the comparison result.
  • Scenario 1: In an outdoor environment, buildings can degrade or block the GPS signal, which leads the navigation device to locate the VIP inaccurately or to fail to locate the VIP at all.
  • Scenario 2: In an indoor environment, VSLAM (Visual Simultaneous Localization and Mapping) achieves localization by matching acquired image features. When the external environment changes, for example when a shopping center adjusts its lighting, redecorates, or replaces an original store with another store, the current visual features no longer match the pre-stored feature map, so accurate current positioning information cannot be provided.
  • Scenario 3: In an indoor environment, the distribution of wireless signals may change as wireless devices change; for example, some wireless routers are removed or new wireless routers are installed. Wireless positioning methods may therefore sometimes fail.
  • Scenario 4: In an indoor or outdoor environment, insufficient navigation and positioning resolution causes errors when distinguishing between two adjacent stores or two adjacent doorways.
  • The embodiments of the present disclosure provide a blind guidance method to improve the reliability of navigation for the visually impaired.
  • The blind guidance method can be applied to a guide helmet.
  • The guide helmet can include a variety of sensors, for example a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to obtain environmental data around the visually impaired person wearing the helmet; it can also include speakers, haptic vibration units, and the like, to facilitate voice or tactile interaction with the visually impaired person.
  • the method includes:
  • the preset event may include an event characterizing the failure of navigation localization.
  • For example, the above-mentioned preset event may be that the strength of the detected GPS signal is less than the preset signal strength threshold, and this event may correspond to Scenario 1 described above.
  • The above-mentioned preset event may also be that the matching degree between the acquired current environment image and the pre-stored environment feature image is lower than a preset matching degree threshold, and this event may correspond to Scenario 2 described above.
  • The above-mentioned preset event may also be that the location of the currently connected wireless device does not match the preset wireless device location, and this event may correspond to Scenario 3 described above.
  • The foregoing preset event may also be a voice instruction received from the visually impaired person giving feedback on a navigation error, and this event may correspond to Scenario 4 described above.
  • For example, the navigation error occurs when distinguishing between two adjacent stores or two adjacent doorways.
  • When the visually impaired person finds that a doorway cannot be entered, they can feed back the error information through a voice command, thereby triggering execution of the above method flow.
  • The above preset event may also be another voice command received from the visually impaired person. For example, while the visually impaired person is moving, a voice instruction asking where the restroom is may be received, which triggers a prompt message prompting the visually impaired person to seek voice assistance from a target object.
  • The above target object may be a person, for example another pedestrian walking on the road or a police officer; it may also be a police box, a traffic command post, and so on. Specifically, a target object with specific external characteristics can be found through image recognition technology, and then a prompt message prompting the visually impaired person to seek voice assistance from the target object is issued.
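The preset-event trigger described above can be sketched as follows. This is a minimal illustration: the threshold values, the field names, and the idea of polling a single sensor snapshot are assumptions for demonstration, not details taken from the disclosure.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not values from the disclosure).
GPS_STRENGTH_THRESHOLD = 0.30   # normalized GPS signal strength
IMAGE_MATCH_THRESHOLD = 0.60    # feature-match score in [0, 1]

@dataclass
class SensorSnapshot:
    gps_strength: float             # normalized GPS signal strength
    image_match_score: float        # match vs. pre-stored feature map
    wifi_location_consistent: bool  # connected AP matches the preset map
    user_reported_error: bool       # voice feedback of a navigation error

def localization_failed(s: SensorSnapshot) -> bool:
    """Return True when any preset event indicating a localization
    failure has occurred, which would trigger the prompt asking the
    visually impaired person to seek voice assistance."""
    return (
        s.gps_strength < GPS_STRENGTH_THRESHOLD          # Scenario 1
        or s.image_match_score < IMAGE_MATCH_THRESHOLD   # Scenario 2
        or not s.wifi_location_consistent                # Scenario 3
        or s.user_reported_error                         # Scenario 4
    )
```

Each disjunct corresponds to one of the four failure scenarios, so adding a new trigger is a matter of appending one more condition.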
  • The action guidance information includes action distance information and steering direction information.
  • the action distance information specifically refers to the distance traveled in a straight line;
  • the steering direction specifically refers to the steering action to be performed, for example, turning left, turning right, or turning around.
  • the voice interaction between the visually impaired person and the target object may be monitored, and the action guidance information may be extracted from the interactive voice message.
  • a voice instruction message issued by the visually impaired person may be received, and the action guidance information may be extracted from the voice instruction message.
  • In this case, the navigation device does not need to perceive the interaction between the visually impaired person and the target object; it only needs to receive the visually impaired person's voice instruction message.
  • That is, the visually impaired person obtains the action instructions from the target object and relays them to the navigation device through a voice instruction message.
  • For example, the voice command may be "Oda, go about 100 meters, and then turn left".
  • The action guidance information extracted from the voice command may be "Go straight for 100 meters and turn left".
  • A voice instruction recognition model adapted to the visually impaired person may be trained in advance, so that the accuracy of extracting the action guidance information from the voice instruction message can be improved.
  • After extraction, a prompt message carrying the action guidance information may be sent to the visually impaired person, to obtain the visually impaired person's confirmation that the action guidance information is accurate.
  • The navigating according to the action guidance information includes: generating a target action path according to the action guidance information; acquiring the actual action path of the visually impaired person; and issuing a navigation prompt according to the deviation between the actual action path and the target action path.
  • information about obstacles around the visually impaired person may be obtained, and navigation may be performed according to the action guidance information and the obstacle information.
  • RANSAC: Random Sample Consensus
  • ultrasonic sensors and depth cameras are used to identify obstacles on these paths, and the walking path is optimized accordingly.
  • the measurement error of the depth camera can be compensated by the ultrasonic sensor.
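One simple way to realize the compensation mentioned above is a range-gated fallback. This is an illustrative rule under stated assumptions (the reliable working range of the depth camera and the averaging rule are not from the disclosure):

```python
def fused_obstacle_distance(depth_m, ultrasonic_m,
                            depth_min=0.5, depth_max=8.0):
    """Illustrative sensor fusion: trust the depth camera inside an
    assumed reliable working range, and fall back to the ultrasonic
    reading where the camera is blind (too close, too far, or
    returning no measurement at all)."""
    if depth_m is None or not (depth_min <= depth_m <= depth_max):
        return ultrasonic_m
    # Within range, average the two sensors to damp per-sensor noise.
    return 0.5 * (depth_m + ultrasonic_m)
```

A production system would weight the two readings by their measured variances rather than averaging them equally; the fallback structure stays the same.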
  • Fig. 2 is a flowchart of another blind guidance method according to an exemplary embodiment.
  • This blind guidance method can also be applied to a guide helmet.
  • The guide helmet can include a variety of sensors, for example a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to obtain environmental data around the visually impaired person wearing the helmet; it can also include speakers, haptic vibration units, and the like, to facilitate voice or tactile interaction with the visually impaired person.
  • the method includes:
  • the preset event may include an event characterizing the failure of navigation localization.
  • For example, the above-mentioned preset event may be that the strength of the detected GPS signal is less than a preset signal strength threshold, and this event may correspond to Scenario 1 described above.
  • The above-mentioned preset event may also be that the matching degree between the acquired current environment image and the pre-stored environment feature image is lower than a preset matching degree threshold, and this event may correspond to Scenario 2 described above.
  • The above-mentioned preset event may also be that the location of the currently connected wireless device does not match the preset wireless device location, and this event may correspond to Scenario 3 described above.
  • The above-mentioned preset event may also be a voice instruction received from the visually impaired person giving feedback on a navigation error, and this event may correspond to Scenario 4 described above.
  • For example, the navigation error occurs when distinguishing between two adjacent stores or two adjacent doorways.
  • When the visually impaired person finds that a doorway cannot be entered, they can feed back the error information through a voice command.
  • The above preset event may also be another voice command received from the visually impaired person. For example, while the visually impaired person is moving, a voice instruction asking where the restroom is may be received, which triggers a prompt message prompting the visually impaired person to seek voice assistance from a target object.
  • The above target object may be a person, for example another pedestrian walking on the road or a police officer; it may also be a police box, a traffic command post, and so on. Specifically, a target object with specific external characteristics can be found through image recognition technology. Further, the relative position is obtained through a distance-measuring sensor.
  • S23: Issue a prompt message for prompting the visually impaired person to seek voice assistance from the target object according to the relative position.
  • For example, the prompt message issued according to the relative position may be "Go forward 5 steps and ask the lady about the specific location of the target address".
  • the acquiring a voice message and extracting action guidance information from the acquired voice message include: monitoring voice interaction between the visually impaired person and the target object, and extracting the action guidance from the interactive voice message Information; and/or, receiving a voice instruction message issued by the visually impaired person, and extracting the action guidance information from the voice instruction message.
  • the action guidance information includes action distance information and steering direction information.
  • The extracting the action guidance information from the acquired voice message includes: converting the voice message into a text message; and parsing the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
  • For example, the voice message is "Oda, go about 100 meters, and then turn left".
  • Semantically, the voice message contains two kinds of information: the former is action distance information and the latter is steering direction information. Parsing the semantics and applying the preset action guidance instruction template yields the command sequence.
  • For example, the obtained command sequence may be "<straight, 100>, <left, NULL>". Further, navigation can be performed according to the command sequence.
  • In addition, a prompt message confirming the command can be generated according to the command sequence, and subsequent operations can be performed after a confirmation instruction is received from the visually impaired person.
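The template-based parse above can be sketched with regular expressions. The phrasing matched here is an illustrative assumption, and the "<straight, 100>, <left, NULL>" sequence from the text is encoded as the tuples ("straight", 100) and ("left", None):

```python
import re

# Toy action-guidance instruction template: one alternation with a
# distance clause (group 1) and a steering clause (group 2).
COMMAND_RE = re.compile(
    r"go (?:straight )?(?:about )?(\d+) meters?"
    r"|turn (left|right|around)"
)

def parse_guidance(text: str):
    """Parse a transcribed voice message into an ordered command
    sequence of (action, argument) tuples."""
    commands = []
    for m in COMMAND_RE.finditer(text.lower()):
        if m.group(1):                               # distance clause
            commands.append(("straight", int(m.group(1))))
        else:                                        # steering clause
            commands.append((m.group(2), None))
    return commands
```

For example, `parse_guidance("Oda, go about 100 meters, and then turn left")` yields `[("straight", 100), ("left", None)]`. A real system would use a trained language-understanding model rather than a fixed pattern, but the output format can stay the same.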
  • The acquiring the actual action path of the visually impaired person includes: acquiring a first image and a second image respectively taken at a first position and a second position on the action path of the visually impaired person; calculating relative position information between the first position and the second position based on the feature differences, within the images, of corresponding key points in the first image and the second image; and determining the actual action path according to the relative position information.
  • FIG. 3 is a schematic diagram of the principle of the above optional embodiment.
  • The coordinate system (1) corresponds to the actual coordinate positions as the visually impaired person moves, where Xc, Yc, and Zc represent the three axes of the three-axis coordinate system.
  • The first image is obtained by shooting at the first position, and the second image is obtained by shooting at the second position.
  • both of the above-mentioned images can be taken from the perspective of the visually impaired.
  • For example, VIO (Visual-Inertial Odometry) may be used: the position differences of corresponding key points between the two images determine the relative distance parameter and/or relative rotation angle parameter between the first position and the second position.
  • For example, the difference in the proportion of the image occupied by a key object constructed from the key points can be used to determine the relative distance parameter and/or relative rotation angle parameter between the first position and the second position.
  • Further, data from an IMU (Inertial Measurement Unit) can be used to compensate the above relative distance parameter and/or relative rotation angle parameter, so as to obtain more accurate relative position information between the first position and the second position.
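As a toy illustration of how a key object's change in image proportion yields a relative distance parameter: under a pinhole-camera model the object's apparent size scales inversely with its distance, so two pixel heights and one known initial distance (both assumed inputs, not values from the disclosure) recover the forward displacement between the two shooting positions:

```python
def forward_displacement(d1_m: float, h1_px: float, h2_px: float) -> float:
    """Estimate how far the wearer moved toward a key object between
    two images, given its distance d1_m at the first position and its
    pixel heights h1_px and h2_px in the first and second images."""
    d2_m = d1_m * (h1_px / h2_px)   # pinhole model: apparent height ∝ 1/distance
    return d1_m - d2_m              # positive when moving closer
```

For instance, if a storefront 10 m away doubles its pixel height between the two images, the wearer has moved about 5 m toward it. Full VIO estimates the rotation as well, from the geometry of many key points rather than a single object's scale.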
  • In this way, the actual action path of the visually impaired person is gradually constructed in the corresponding coordinate system (2), where X, Y, and Z represent the three axes of the three-axis coordinate system.
  • Generally, an IMU contains three single-axis accelerometers and three single-axis gyroscopes.
  • The accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system.
  • Measuring the angular velocity and acceleration of the object in three-dimensional space, and using them to calculate the object's attitude, can improve the accuracy of determining the actual action path, thereby improving the accuracy of navigation.
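A heavily simplified planar dead-reckoning loop shows how integrating the IMU's angular velocity and acceleration builds up the action path. Real systems fuse all six axes with the visual estimates; this two-axis sketch (forward acceleration plus yaw rate) is an assumption for illustration only:

```python
import math

def dead_reckon(samples, dt):
    """Integrate (forward_accel, yaw_rate) samples at a fixed timestep
    dt: yaw rate accumulates into heading, acceleration into speed, and
    speed along the heading accumulates into (x, y) path points."""
    x = y = speed = heading = 0.0
    path = [(x, y)]
    for accel, yaw_rate in samples:
        heading += yaw_rate * dt           # gyroscope integration
        speed += accel * dt                # accelerometer integration
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path
```

Pure integration like this drifts quadratically with accelerometer bias, which is exactly why the embodiment pairs the IMU with the visual key-point estimates rather than relying on either alone.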
  • Steps S26 and S27 are executed iteratively to detect the actual action path of the visually impaired person in real time and obtain its deviation from the target action path.
  • Other navigation information can also be combined to optimize navigation decisions.
  • The action instructions obtained through voice assistance are not necessarily accurate. For example, suppose the target location is actually 150 meters ahead but a pedestrian tells the visually impaired person that it is 100 meters forward; then, after guiding the person 100 meters forward, a visual sensor can be used to detect whether a feature identifying the target location, for example a store nameplate or landscape style, is nearby, so as to determine whether the destination has been reached.
  • If the destination has not been reached, a prompt message prompting the visually impaired person to seek voice assistance from a target object may be issued again.
  • In this way, even if navigation based on GPS and Wi-Fi technology fails, the visually impaired person can still be guided through multiple voice requests for assistance, and the navigation can be completed according to the voice assistance obtained.
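The deviation check between the actual and target action paths can be sketched as a cross-track distance test. The 1-meter tolerance and the prompt wording are assumptions for illustration:

```python
import math

def deviation_prompt(position, segment_start, segment_end, tolerance_m=1.0):
    """Compute the signed perpendicular (cross-track) distance of the
    wearer's position from the straight target segment, and return a
    corrective prompt when it exceeds the tolerance, else None."""
    (px, py), (ax, ay), (bx, by) = position, segment_start, segment_end
    dx, dy = bx - ax, by - ay
    # Signed cross-track distance via the 2D cross product.
    cross = dx * (py - ay) - dy * (px - ax)
    dist = cross / math.hypot(dx, dy)
    if abs(dist) <= tolerance_m:
        return None                        # on course, stay silent
    side = "left" if dist > 0 else "right"
    return f"You have drifted {abs(dist):.1f} m to the {side}; adjust course."
```

Running this on every path update (steps S26/S27 iterating) keeps the wearer silent while on course and speaks only when correction is needed.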
  • Fig. 4 is a block diagram of a blind guide device according to an exemplary embodiment.
  • the device can be applied to blind guide helmets, blind walking sticks, or other electronic devices through a combination of software and hardware.
  • the device includes:
  • the prompt module 410 is configured to issue a prompt message for prompting the visually impaired person to seek voice assistance from the target object in response to a preset event;
  • the obtaining module 420 is used to obtain a voice message and extract action guidance information from the obtained voice message;
  • the navigation module 430 is used for navigation according to the action guidance information.
  • the action guidance information includes action distance information and steering direction information.
  • the acquisition module is configured to monitor the voice interaction between the visually impaired person and the target object, and extract the action guidance information from the interactive voice messages; and/or to receive the voice instruction message issued by the visually impaired person and extract the action guidance information from the voice instruction message.
  • the prompt module is used to:
  • the prompt message is generated according to the relative position.
  • the acquisition module is used to:
  • the navigation module is used to:
  • a navigation prompt is issued according to the deviation between the actual action path and the target action path.
  • the navigation module is used to:
  • the actual action path is determined according to the relative position information.
  • the navigation module is also used to:
  • the device further includes:
  • a prompt message for prompting the visually impaired person to ask the target object for voice assistance is issued again.
  • the preset event includes one or more of the following events:
  • the detected GPS signal strength is less than the preset signal strength threshold
  • the matching degree between the acquired current environment image and the pre-stored environment feature image is lower than the preset matching degree threshold
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any one of the blind guidance methods described above are implemented.
  • An embodiment of the present disclosure provides an electronic device, including: a memory on which a computer program is stored; and a processor configured to execute the computer program in the memory to implement the steps of any one of the blind guidance methods.
  • Fig. 5 is a block diagram of an electronic device 500 according to an exemplary embodiment.
  • the electronic device may be a blind guide helmet, a blind walking stick, or other electronic devices, such as a smart phone, personal medical equipment, and so on.
  • the electronic device 500 may include: a processor 501 and a memory 502.
  • the electronic device 500 may also include one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
  • the processor 501 is used to control the overall operation of the electronic device 500 to complete all or part of the steps in the blind guidance method described above.
  • the memory 502 is used to store various types of data to support operations on the electronic device 500; the data may include, for example, instructions for any application or method operating on the electronic device 500, and application-related data such as maps, instruction models, and prompt message libraries, as well as contact data, sent and received messages, pictures, audio, video, and so on.
  • the memory 502 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the multimedia component 503 may include a screen and an audio component.
  • the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals.
  • the audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in the memory 502 or transmitted through the communication component 505.
  • the audio component also includes at least one speaker for outputting audio signals.
  • the I/O interface 504 provides an interface between the processor 501 and other interface modules.
  • the other interface modules may be a keyboard, a mouse, a button, and so on. These buttons can be virtual buttons or physical buttons.
  • the communication component 505 is used for wired or wireless communication between the electronic device 500 and other devices.
  • The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 505 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
  • In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-mentioned blind guidance method.
  • In addition, the electronic device 500 may further include various sensors, such as a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to obtain environmental data around the visually impaired person wearing the guide helmet; it may also include speakers, tactile vibration units, and the like, to facilitate voice or tactile interaction with the visually impaired person.
  • In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided.
  • When the program instructions are executed by a processor, the steps of the blind guidance method described above are implemented.
  • For example, the computer-readable storage medium may be the above-mentioned memory 502 including program instructions, which may be executed by the processor 501 of the electronic device 500 to complete the above-mentioned blind guidance method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

A blind guidance method and device, a storage medium, and an electronic device. The method includes: in response to a preset event, issuing a prompt message for prompting a visually impaired person to seek voice assistance from a target object (S11); acquiring a voice message, and extracting action guidance information from the acquired voice message (S12); and navigating according to the action guidance information (S13). The method solves the problem that blind guidance technologies may fail.

Description

Blind guidance method and device, storage medium and electronic device

Technical Field
The present disclosure relates to the field of navigation technology, and in particular to a blind guidance method and device, a storage medium, and an electronic device.
Background
According to official statistics from the World Health Organization, as of 2011 there were about 285 million visually impaired people (VIP, Visually Impaired People) worldwide. The vast majority of them rely on guide animals or canes to gain the ability to live independently.
In the related art, Electronic Travel Aids (ETA) have been proposed. Most of the work on these aids, which provide navigation for VIPs, has focused on outdoor navigation, for example navigation realized with GPS positioning and maps. Schemes that locate the VIP using Wi-Fi, RFID, Bluetooth, two-dimensional codes, and the like to achieve blind guidance have also been proposed. However, positioning-based navigation is prone to failure when the network connection fails. In addition, there are schemes that use computer vision to provide navigation information, but navigation that relies on visual information fails when the current scene cannot be matched to the preset scene.
Summary
The purpose of the present disclosure is to provide a blind guidance method and device, a storage medium, and an electronic device, to solve the problem that related blind guidance technologies may fail.
To achieve the above purpose, in a first aspect, the present disclosure provides a blind guidance method, the method including:
in response to a preset event, issuing a prompt message for prompting a visually impaired person to seek voice assistance from a target object;
acquiring a voice message, and extracting action guidance information from the acquired voice message;
navigating according to the action guidance information.
Optionally, the action guidance information includes action distance information and steering direction information.
Optionally, the acquiring a voice message and extracting action guidance information from the acquired voice message includes:
monitoring voice interaction between the visually impaired person and the target object, and extracting the action guidance information from the interactive voice messages; and/or,
receiving a voice instruction message issued by the visually impaired person, and extracting the action guidance information from the voice instruction message.
Optionally, the issuing, in response to a preset event, a prompt message for prompting a visually impaired person to seek voice assistance from a target object includes:
in response to the preset event, searching for a target object around the visually impaired person;
acquiring the relative position between the found target object and the visually impaired person;
generating the prompt message according to the relative position.
Optionally, the extracting action guidance information from the acquired voice message includes:
converting the voice message into a text message;
parsing the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
Optionally, the navigating according to the action guidance information includes:
generating a target action path according to the action guidance information;
acquiring the actual action path of the visually impaired person;
issuing a navigation prompt according to the deviation between the actual action path and the target action path.
Optionally, the acquiring the actual action path of the visually impaired person includes:
acquiring a first image and a second image respectively taken at a first position and a second position on the action path of the visually impaired person;
calculating relative position information between the first position and the second position according to the feature differences, within the images, of corresponding key points in the first image and the second image;
determining the actual action path according to the relative position information.
Optionally, the method further includes:
acquiring information about obstacles around the visually impaired person;
the navigating according to the action guidance information includes:
navigating according to the action guidance information and the obstacle information.
Optionally, the method further includes:
after completing the navigation operation performed according to the action guidance information, issuing again a prompt message for prompting the visually impaired person to seek voice assistance from a target object.
Optionally, the preset event includes one or more of the following events:
the detected GPS signal strength is less than a preset signal strength threshold;
the matching degree between the acquired current environment image and a pre-stored environment feature image is lower than a preset matching degree threshold;
the location of the currently connected wireless device does not match the preset wireless device location.
In a second aspect, the present disclosure provides a blind guidance apparatus, the apparatus comprising:
a prompt module, configured to issue, in response to a preset event, a prompt message prompting a visually impaired person to ask a target object for help by voice;
an acquisition module, configured to acquire a voice message and extract action guidance information from the acquired voice message;
a navigation module, configured to navigate according to the action guidance information.
Optionally, the action guidance information includes travel distance information and turning direction information.
Optionally, the acquisition module is configured to monitor voice interaction between the visually impaired person and the target object and extract the action guidance information from the exchanged voice messages; and/or,
configured to receive a voice instruction message issued by the visually impaired person and extract the action guidance information from the voice instruction message.
Optionally, the prompt module is configured to:
in response to the preset event, search for a target object around the visually impaired person;
acquire a relative position between the found target object and the visually impaired person;
generate the prompt message according to the relative position.
Optionally, the acquisition module is configured to:
convert the voice message into a text message;
parse the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
Optionally, the navigation module is configured to:
generate a target travel path according to the action guidance information;
acquire the actual travel path of the visually impaired person;
issue a navigation prompt according to the deviation between the actual travel path and the target travel path.
Optionally, the navigation module is configured to:
acquire a first image and a second image captured respectively at a first position and a second position on the travel path of the visually impaired person;
calculate relative orientation information between the first position and the second position according to differences in the image features of corresponding key points in the first image and the second image;
determine the actual travel path according to the relative orientation information.
Optionally, the navigation module is further configured to:
acquire obstacle information around the visually impaired person;
navigate according to the action guidance information and the obstacle information.
Optionally, the apparatus is further configured to:
after the navigation operation performed according to the action guidance information is completed, issue again a prompt message prompting the visually impaired person to ask a target object for help by voice.
Optionally, the preset event includes one or more of the following events:
detecting that the strength of the GPS signal is less than a preset signal strength threshold;
the degree of matching between the acquired current environment image and a pre-stored environment feature image being lower than a preset matching threshold;
detecting that the location of the currently connected wireless device does not match a preset wireless device location.
In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of any one of the blind guidance methods described above.
In a fourth aspect, the present disclosure provides an electronic device, comprising:
a memory on which a computer program is stored;
a processor configured to execute the computer program in the memory to implement the steps of any one of the blind guidance methods described above.
The above technical solution can achieve at least the following technical effects:
by issuing, in response to a preset event, a prompt message prompting the visually impaired person to ask a target object for help by voice, and navigating according to action guidance information extracted from the acquired voice message, navigation can continue to be provided for the visually impaired person even when conventional navigation localization fails, improving the reliability of navigation.
Other features and advantages of the present disclosure will be described in detail in the following detailed description.
Brief Description of the Drawings
The drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of a blind guidance method according to an exemplary embodiment.
Fig. 2 is a flowchart of another blind guidance method according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a principle according to an exemplary embodiment.
Fig. 4 is a block diagram of a blind guidance apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the drawings. It should be understood that the specific embodiments described here are only intended to illustrate and explain the present disclosure, and are not intended to limit it.
In the related art, a blind guidance system may be applied to wearable guidance devices, for example a guidance helmet or guidance clothing, and also to other types of guidance devices, for example a guidance cane or self-propelled guidance devices. In the normal guidance state, these devices may use technologies such as GPS, Wi-Fi, Bluetooth, or NFC to obtain the current position of the visually impaired person and compare it with preset map information, thereby guiding the visually impaired person according to the comparison result.
However, such a guidance system may fail when localization fails. The scenarios in which localization may fail are first analyzed below:
Scenario 1: In an outdoor environment, buildings may degrade or block GPS signals, so the navigation device locates the VIP inaccurately or fails to locate the VIP at all.
Scenario 2: In an indoor environment, VSLAM (Visual Simultaneous Localization and Mapping) technology achieves localization for navigation from acquired image features. However, when the environment changes, for example when a shopping mall adjusts its lighting, changes its decoration, or replaces one store with another, the current visual features no longer match the pre-stored feature map, and accurate current positioning information cannot be provided.
Scenario 3: In an indoor environment, the distribution of wireless signals may change as wireless devices change, for example when some wireless routers are removed or new ones are installed. Wireless positioning methods may therefore sometimes fail.
Scenario 4: In an indoor or outdoor environment, insufficient positioning resolution may cause errors when distinguishing two adjacent shops or two adjacent doorways.
To solve the problem that conventional blind-guidance techniques may fail in the above scenarios, embodiments of the present disclosure provide a blind guidance method to improve the reliability of navigation for visually impaired people.
The blind guidance method may be applied to a guidance helmet. The helmet may include various sensors, for example a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to acquire environment data around the visually impaired person wearing the helmet; it may also include a speaker, a haptic vibration unit, and the like, to facilitate voice or haptic interaction with the visually impaired person.
As shown in the flowchart of Fig. 1, the method includes:
S11: In response to a preset event, a prompt message prompting the visually impaired person to ask a target object for help by voice is issued.
The preset event may include an event indicating that navigation localization has failed. For example, the preset event may be detecting that the GPS signal strength is less than a preset signal strength threshold, which corresponds to Scenario 1 above. As another example, the preset event may be that the degree of matching between the acquired current environment image and a pre-stored environment feature image is lower than a preset matching threshold, which corresponds to Scenario 2 above. As yet another example, the preset event may be detecting that the location of the currently connected wireless device does not match a preset wireless device location, which corresponds to Scenario 3 above.
The preset event may also be receiving a voice instruction from the visually impaired person reporting a navigation error, which corresponds to Scenario 4 above. For example, if navigation makes an error when distinguishing two adjacent shops or two adjacent doorways and the visually impaired person finds that a doorway cannot be entered, the error can be reported by a voice instruction, which in turn triggers the above method flow.
In addition, the preset event may also be the reception of other voice instructions from the visually impaired person. For example, when a voice instruction asking where the restroom is is received while the visually impaired person is on the move, a prompt message prompting the visually impaired person to ask a target object for help by voice may further be issued.
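The trigger conditions above amount to a simple disjunction of checks. The following is a minimal sketch of such an event check; the threshold values, field names, and normalized ranges are illustrative assumptions, not values taken from the patent:

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not specified by the patent).
GPS_STRENGTH_THRESHOLD = 0.3   # normalized GPS signal quality
IMAGE_MATCH_THRESHOLD = 0.5    # feature-map matching score

@dataclass
class SensorState:
    gps_strength: float            # 0.0 (no signal) .. 1.0 (strong)
    image_match_score: float       # 0.0 (no match) .. 1.0 (perfect match)
    wifi_location_consistent: bool # wireless location matches the preset one
    user_reported_error: bool      # VIP reported a navigation error by voice

def preset_event_triggered(state: SensorState) -> bool:
    """Return True if any localization-failure event fires (Scenarios 1-4)."""
    return (
        state.gps_strength < GPS_STRENGTH_THRESHOLD         # Scenario 1
        or state.image_match_score < IMAGE_MATCH_THRESHOLD  # Scenario 2
        or not state.wifi_location_consistent               # Scenario 3
        or state.user_reported_error                        # Scenario 4
    )
```

When any of these conditions holds, the device would proceed to search for a target object and issue the voice-help prompt.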
The target object may be a person, for example another pedestrian walking on the road or a police officer; it may also be a police kiosk, a traffic control station, or the like. Specifically, a target object with particular external features may be found by image recognition, and a prompt message prompting the visually impaired person to ask the target object for help by voice is then issued.
S12: A voice message is acquired, and action guidance information is extracted from the acquired voice message.
The action guidance information includes travel distance information and turning direction information. The travel distance information specifically refers to the distance to walk in a straight line; the turning direction specifically refers to the turning action to perform, for example turn left, turn right, or turn around.
In an optional implementation, voice interaction between the visually impaired person and the target object may be monitored, and the action guidance information may be extracted from the exchanged voice messages.
That is, the interaction between the visually impaired person and different people can be perceived. When the visually impaired person asks a pedestrian for directions, the action guidance given by the pedestrian, for example "walk straight 50 meters, turn left, walk straight 100 meters", can be extracted automatically. This optional implementation acquires the action guidance message more intelligently, and the interaction feels more natural to the visually impaired person.
In another optional implementation, a voice instruction message issued by the visually impaired person may be received, and the action guidance information may be extracted from the voice instruction message.
In this implementation, the interaction between the visually impaired person and the target object does not need to be perceived; only the voice instruction message of the visually impaired person is received. For example, after asking a pedestrian for directions, the visually impaired person relays the obtained action guidance to the navigation device through a voice instruction message, such as "Xiaotian, go forward about 100 meters, then turn left". The action guidance information extracted from this voice instruction may then be "walk straight 100 meters, turn left". In this optional implementation, an instruction reception model may be trained in advance on the voice of the visually impaired person, which improves the accuracy of extracting action guidance information from voice instruction messages.
In addition, the above two optional implementations may be combined. For example, after the action guidance information is obtained from the exchanged voice messages, a prompt message carrying the action guidance information may be sent to the visually impaired person to further obtain confirmation of its accuracy.
S13: Navigation is performed according to the action guidance information.
Specifically, the navigating according to the action guidance information includes: generating a target travel path according to the action guidance information; acquiring the actual travel path of the visually impaired person; and issuing a navigation prompt according to the deviation between the actual travel path and the target travel path.
In addition, obstacle information around the visually impaired person may be acquired, and navigation may be performed according to the action guidance information and the obstacle information.
For example, the ground is detected by the RANSAC (Random Sample Consensus) algorithm, and walkable paths are then generated; obstacles on these paths are further identified by an ultrasonic sensor and a depth camera, and the walkable paths are refined. The ultrasonic sensor can compensate for the measurement error of the depth camera.
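As a rough sketch of the ground-detection step, the following implements plain RANSAC plane fitting on a 3-D point cloud: repeatedly fit a plane through three random points and keep the plane supported by the most inliers. The point format, tolerance, and iteration count are illustrative assumptions; a real system would run this on depth-camera output:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three points, returned as (unit normal n, offset d)
    with the plane equation n . x + d = 0; None if points are collinear."""
    ax, ay, az = (p2[i] - p1[i] for i in range(3))
    bx, by, bz = (p3[i] - p1[i] for i in range(3))
    n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0:
        return None
    n = (n[0] / norm, n[1] / norm, n[2] / norm)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_ground_plane(points, iterations=200, tolerance=0.05, seed=0):
    """RANSAC: keep the sampled plane with the most inliers, i.e. points
    whose distance to the plane is below `tolerance` (meters, assumed)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue  # degenerate (collinear) sample
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tolerance]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Points accepted as ground inliers define the walkable region; the remaining points (for example the elevated ones below) are obstacle candidates to be cross-checked against the ultrasonic readings.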
The above technical solution can achieve at least the following technical effects:
by issuing, in response to a preset event, a prompt message prompting the visually impaired person to ask a target object for help by voice, and navigating according to action guidance information extracted from the acquired voice message, navigation can continue to be provided for the visually impaired person even when conventional navigation localization fails, improving the reliability of navigation.
Fig. 2 is a flowchart of another blind guidance method according to an exemplary embodiment. The method may be applied to a guidance helmet, which may include various sensors, for example a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to acquire environment data around the visually impaired person wearing the helmet; it may also include a speaker, a haptic vibration unit, and the like, to facilitate voice or haptic interaction with the visually impaired person.
As shown in Fig. 2, the method includes:
S21: In response to the preset event, a target object is searched for around the visually impaired person.
The preset event may include an event indicating that navigation localization has failed. For example, the preset event may be detecting that the GPS signal strength is less than a preset signal strength threshold, which corresponds to Scenario 1 above. As another example, the preset event may be that the degree of matching between the acquired current environment image and a pre-stored environment feature image is lower than a preset matching threshold, which corresponds to Scenario 2 above. As yet another example, the preset event may be detecting that the location of the currently connected wireless device does not match a preset wireless device location, which corresponds to Scenario 3 above.
The preset event may also be receiving a voice instruction from the visually impaired person reporting a navigation error, which corresponds to Scenario 4 above. For example, if navigation makes an error when distinguishing two adjacent shops or two adjacent doorways and the visually impaired person finds that a doorway cannot be entered, the error can be reported by a voice instruction.
In addition, the preset event may also be the reception of other voice instructions from the visually impaired person. For example, when a voice instruction asking where the restroom is is received while the visually impaired person is on the move, a prompt message prompting the visually impaired person to ask a target object for help by voice may further be issued.
S22: A relative position between the found target object and the visually impaired person is acquired.
The target object may be a person, for example another pedestrian walking on the road or a police officer; it may also be a police kiosk, a traffic control station, or the like. Specifically, a target object with particular external features may be found by image recognition, and the above relative position may then be acquired by a ranging sensor.
S23: A prompt message prompting the visually impaired person to ask the target object for help by voice is issued according to the relative position.
For example, the prompt message issued according to the relative position may be "walk forward 5 steps and ask the lady for the specific direction of the destination".
S24: A voice message is acquired, and action guidance information is extracted from the acquired voice message.
The acquiring a voice message and extracting action guidance information from the acquired voice message includes: monitoring voice interaction between the visually impaired person and the target object and extracting the action guidance information from the exchanged voice messages; and/or receiving a voice instruction message issued by the visually impaired person and extracting the action guidance information from the voice instruction message.
The action guidance information includes travel distance information and turning direction information.
Specifically, the extracting action guidance information from the acquired voice message includes: converting the voice message into a text message; and parsing the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
For example, the voice message is "Xiaotian, go forward about 100 meters, then turn left". Semantically, this voice message contains two kinds of information: the former is travel distance information and the latter is turning direction information. By parsing the semantics and applying the preset action instruction template, a command sequence is obtained; for example, the resulting command sequence may be "<straight,100>,<left,NULL>". Navigation may then be performed according to this command sequence.
It is worth noting that, to ensure that the parsed command sequence matches the true meaning of the voice message, a prompt message for confirming the commands may be generated from the command sequence, and subsequent operations may be performed after a confirmation instruction from the visually impaired person is received.
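A minimal sketch of this template-based parsing step might look as follows, assuming the speech has already been transcribed to English text. The regular-expression templates and the `(action, distance)` tuple format are illustrative assumptions standing in for the patent's unspecified instruction templates:

```python
import re

# Illustrative guidance templates (assumptions; English stands in for the
# Chinese phrasing of the original). `None` plays the role of NULL above.
TEMPLATES = [
    (re.compile(r"(?:go|walk)\s+(?:straight\s+)?(?:forward\s+)?(?:about\s+)?(\d+)\s*meters?"),
     "straight"),
    (re.compile(r"turn\s+left"), "left"),
    (re.compile(r"turn\s+right"), "right"),
    (re.compile(r"turn\s+around"), "back"),
]

def parse_guidance(text: str):
    """Parse a transcribed guidance sentence into an ordered command sequence,
    e.g. 'walk straight about 100 meters, then turn left'
      -> [('straight', 100), ('left', None)]."""
    matches = []
    for pattern, action in TEMPLATES:
        for m in pattern.finditer(text.lower()):
            distance = int(m.group(1)) if m.groups() else None
            matches.append((m.start(), action, distance))
    # Order commands as spoken, by their position in the sentence.
    return [(action, dist) for _, action, dist in sorted(matches)]
```

The resulting sequence corresponds to the "<straight,100>,<left,NULL>" form above and could be read back to the user for confirmation before navigation begins.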
S25: A target travel path is generated according to the action guidance information.
S26: The actual travel path of the visually impaired person is acquired.
In an optional implementation, the acquiring the actual travel path of the visually impaired person includes: acquiring a first image and a second image captured respectively at a first position and a second position on the travel path of the visually impaired person; calculating relative orientation information between the first position and the second position according to differences in the image features of corresponding key points in the first image and the second image; and determining the actual travel path according to the relative orientation information.
Fig. 3 is a schematic diagram of this optional implementation. As shown in Fig. 3, coordinate system (1) corresponds to the actual coordinate position of the visually impaired person while moving, where Xc, Yc, and Zc denote the three axes of the coordinate system. The first image is captured at the first position, and the second image is captured at the second position. To ensure the accuracy of subsequent image processing, both images may be captured from the frontal viewpoint of the visually impaired person.
Specifically, key points in the images may be searched for by visual-inertial odometry (VIO, Visual-Inertial Odometry), corresponding key points in the two images may be matched, and the relative distance parameter and/or relative rotation angle parameter between the first position and the second position may then be determined from the differences in the pixel positions of the key points in the first image and the second image.
In addition, the relative distance parameter and/or relative rotation angle parameter between the first position and the second position may also be determined from differences, constructed from the key points, in the proportion of the image occupied by the images of key objects, for example the proportion of the frame occupied by a building, or the displayed length of a road marking line in the image.
Further, the motion posture of the visually impaired person may be measured by IMU (Inertial Measurement Unit) technology to compensate the above relative distance parameter and/or relative rotation angle parameter, thereby obtaining more accurate relative orientation information between the first position and the second position. As data is sampled and updated in real time, the actual travel path of the visually impaired person is progressively constructed in the corresponding coordinate system (2), where X, Y, and Z denote the three axes of the coordinate system.
Generally, an IMU contains three single-axis accelerometers and three single-axis gyroscopes. The accelerometers detect the acceleration signals of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system. By measuring the angular velocity and acceleration of the object in three-dimensional space, the posture of the object can be solved. This improves the accuracy of determining the actual travel path, and thus the precision of navigation.
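The path construction described above can be illustrated by chaining per-step relative measurements (distance moved and heading change, as a VIO/IMU pipeline would estimate them) into positions in the ground frame. This is a simplified 2-D dead-reckoning sketch under that assumption, not the full 6-DOF estimation:

```python
import math

def accumulate_path(relative_motions, start=(0.0, 0.0), heading=0.0):
    """Chain per-step relative measurements into ground-frame positions.

    relative_motions: iterable of (distance_moved, heading_change_radians),
    one entry per sampling step, as relative VIO/IMU estimates would supply.
    Returns the list of (x, y) positions, i.e. the actual travel path.
    """
    x, y = start
    path = [(x, y)]
    for distance, dtheta in relative_motions:
        heading += dtheta                  # apply the measured turn first...
        x += distance * math.cos(heading)  # ...then translate along the
        y += distance * math.sin(heading)  # updated heading
        path.append((x, y))
    return path
```

For example, "10 m forward, quarter turn left, 5 m forward" ends 10 m ahead and 5 m to the side of the start, which is the path that is then compared against the target path in S27.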
S27: A navigation prompt is issued according to the deviation between the actual travel path and the target travel path.
In specific implementations, steps S26 and S27 are executed iteratively to detect the actual travel path of the visually impaired person in real time and obtain its deviation from the target travel path. In addition, detected obstacle information may be combined to optimize the navigation decisions.
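One simple way to quantify the deviation used in S27 is the cross-track distance from the current position to the current target path segment; the 1-meter prompt threshold below is an illustrative assumption, not a value from the patent:

```python
import math

def cross_track_deviation(position, seg_start, seg_end):
    """Perpendicular distance from the current position to the target path
    segment, clamped to the segment endpoints."""
    px, py = position
    ax, ay = seg_start
    bx, by = seg_end
    abx, aby = bx - ax, by - ay
    seg_len_sq = abx ** 2 + aby ** 2
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)  # degenerate segment: a point
    # Projection parameter of the position onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def navigation_prompt(deviation, threshold=1.0):
    """Illustrative decision rule: prompt only when the user has drifted
    more than `threshold` meters off the target path (assumed value)."""
    return "please return to the path" if deviation > threshold else None
```

Running this check on every iteration of S26/S27 keeps the user on the target path while leaving small, harmless drift unannounced.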
S28: After the navigation operation performed according to the action guidance information is completed, a prompt message prompting the visually impaired person to ask a target object for help by voice is issued again.
It is worth noting that in some cases the action guidance obtained through a voice request for help is not necessarily accurate. For example, suppose the destination is 150 meters ahead but a pedestrian tells the visually impaired person that it is 100 meters ahead; after navigation guides the person forward 100 meters, a vision sensor may check whether feature marks of the destination, for example a store nameplate or a characteristic landscape style, are present nearby, so as to determine whether the destination has been reached.
If no feature mark of the destination is detected, the prompt message prompting the visually impaired person to ask a target object for help by voice may be issued again. In this way, even when navigation based on technologies such as GPS or Wi-Fi fails, the visually impaired person can still be guided through multiple voice requests for help, and navigation can be completed based on the voices obtained from those requests.
Fig. 4 is a block diagram of a blind guidance apparatus according to an exemplary embodiment. The apparatus may be applied, through a combination of software and hardware, to a guidance helmet, a guidance cane, or other electronic devices. The apparatus includes:
a prompt module 410, configured to issue, in response to a preset event, a prompt message prompting a visually impaired person to ask a target object for help by voice;
an acquisition module 420, configured to acquire a voice message and extract action guidance information from the acquired voice message;
a navigation module 430, configured to navigate according to the action guidance information.
The above technical solution can achieve at least the following technical effects:
by issuing, in response to a preset event, a prompt message prompting the visually impaired person to ask a target object for help by voice, and navigating according to action guidance information extracted from the acquired voice message, navigation can continue to be provided for the visually impaired person even when conventional navigation localization fails, improving the reliability of navigation.
Optionally, the action guidance information includes travel distance information and turning direction information.
Optionally, the acquisition module is configured to monitor voice interaction between the visually impaired person and the target object and extract the action guidance information from the exchanged voice messages; and/or configured to receive a voice instruction message issued by the visually impaired person and extract the action guidance information from the voice instruction message.
Optionally, the prompt module is configured to:
in response to the preset event, search for a target object around the visually impaired person;
acquire a relative position between the found target object and the visually impaired person;
generate the prompt message according to the relative position.
Optionally, the acquisition module is configured to:
convert the voice message into a text message;
parse the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
Optionally, the navigation module is configured to:
generate a target travel path according to the action guidance information;
acquire the actual travel path of the visually impaired person;
issue a navigation prompt according to the deviation between the actual travel path and the target travel path.
Optionally, the navigation module is configured to:
acquire a first image and a second image captured respectively at a first position and a second position on the travel path of the visually impaired person;
calculate relative orientation information between the first position and the second position according to differences in the image features of corresponding key points in the first image and the second image;
determine the actual travel path according to the relative orientation information.
Optionally, the navigation module is further configured to:
acquire obstacle information around the visually impaired person;
navigate according to the action guidance information and the obstacle information.
Optionally, the apparatus is further configured to:
after the navigation operation performed according to the action guidance information is completed, issue again a prompt message prompting the visually impaired person to ask a target object for help by voice.
Optionally, the preset event includes one or more of the following events:
detecting that the strength of the GPS signal is less than a preset signal strength threshold;
the degree of matching between the acquired current environment image and a pre-stored environment feature image being lower than a preset matching threshold;
detecting that the location of the currently connected wireless device does not match a preset wireless device location.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of any one of the blind guidance methods described above.
An embodiment of the present disclosure provides an electronic device, comprising: a memory on which a computer program is stored; and a processor configured to execute the computer program in the memory to implement the steps of any one of the blind guidance methods described above.
Fig. 5 is a block diagram of an electronic device 500 according to an exemplary embodiment. The electronic device may be a guidance helmet or a guidance cane, or another electronic device such as a smartphone or a personal medical device.
As shown in Fig. 5, the electronic device 500 may include a processor 501 and a memory 502. The electronic device 500 may further include one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
The processor 501 controls the overall operation of the electronic device 500 to complete all or part of the steps of the blind guidance method described above. The memory 502 stores various types of data to support operation on the electronic device 500; such data may include, for example, instructions of any application or method operated on the electronic device 500, and application-related data such as maps, instruction models, and a prompt message library, as well as contact data, sent and received messages, pictures, audio, video, and so on. The memory 502 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, for example static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may further be stored in the memory 502 or sent through the communication component 505. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules such as a keyboard, a mouse, or buttons; these buttons may be virtual or physical. The communication component 505 is used for wired or wireless communication between the electronic device 500 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 505 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for executing the blind guidance method described above.
In an exemplary embodiment, the electronic device 500 may also include various sensors, for example a position sensor, an inertial measurement unit, an image sensor, and a voice sensing device, to acquire environment data around the visually impaired person wearing the guidance helmet; it may also include a speaker, a haptic vibration unit, and the like, to facilitate voice or haptic interaction with the visually impaired person.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided, the program instructions, when executed by a processor, implementing the steps of the blind guidance method described above. For example, the computer-readable storage medium may be the above memory 502 including program instructions, and the program instructions may be executed by the processor 501 of the electronic device 500 to complete the blind guidance method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the drawings. However, the present disclosure is not limited to the specific details of the above embodiments; within the scope of the technical concept of the present disclosure, various simple variations can be made to the technical solution of the present disclosure, and these simple variations all fall within the protection scope of the present disclosure.
It should further be noted that the specific technical features described in the above detailed description may be combined in any suitable manner provided there is no contradiction. To avoid unnecessary repetition, the various possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may also be combined arbitrarily, and as long as such combinations do not depart from the idea of the present disclosure, they should likewise be regarded as content disclosed by the present disclosure.

Claims (13)

  1. A blind guidance method, characterized in that the method comprises:
    in response to a preset event, issuing a prompt message prompting a visually impaired person to ask a target object for help by voice;
    acquiring a voice message, and extracting action guidance information from the acquired voice message;
    navigating according to the action guidance information.
  2. The method according to claim 1, characterized in that the action guidance information includes travel distance information and turning direction information.
  3. The method according to claim 1, characterized in that the acquiring a voice message and extracting action guidance information from the acquired voice message comprises:
    monitoring voice interaction between the visually impaired person and the target object, and extracting the action guidance information from the exchanged voice messages; and/or,
    receiving a voice instruction message issued by the visually impaired person, and extracting the action guidance information from the voice instruction message.
  4. The method according to claim 1, characterized in that the issuing, in response to a preset event, a prompt message prompting a visually impaired person to ask a target object for help by voice comprises:
    in response to the preset event, searching for a target object around the visually impaired person;
    acquiring a relative position between the found target object and the visually impaired person;
    generating the prompt message according to the relative position.
  5. The method according to claim 1, characterized in that the extracting action guidance information from the acquired voice message comprises:
    converting the voice message into a text message;
    parsing the text message into a command sequence according to a preset action guidance instruction template, wherein the action guidance information includes the command sequence.
  6. The method according to claim 1, characterized in that the navigating according to the action guidance information comprises:
    generating a target travel path according to the action guidance information;
    acquiring the actual travel path of the visually impaired person;
    issuing a navigation prompt according to the deviation between the actual travel path and the target travel path.
  7. The method according to claim 6, characterized in that the acquiring the actual travel path of the visually impaired person comprises:
    acquiring a first image and a second image captured respectively at a first position and a second position on the travel path of the visually impaired person;
    calculating relative orientation information between the first position and the second position according to differences in the image features of corresponding key points in the first image and the second image;
    determining the actual travel path according to the relative orientation information.
  8. The method according to any one of claims 1-7, characterized in that the method further comprises:
    acquiring obstacle information around the visually impaired person;
    the navigating according to the action guidance information comprises:
    navigating according to the action guidance information and the obstacle information.
  9. The method according to any one of claims 1-7, characterized in that the method further comprises:
    after the navigation operation performed according to the action guidance information is completed, issuing again a prompt message prompting the visually impaired person to ask a target object for help by voice.
  10. The method according to any one of claims 1-7, characterized in that the preset event includes one or more of the following events:
    detecting that the strength of the GPS signal is less than a preset signal strength threshold;
    the degree of matching between the acquired current environment image and a pre-stored environment feature image being lower than a preset matching threshold;
    detecting that the location of the currently connected wireless device does not match a preset wireless device location.
  11. A blind guidance apparatus, characterized in that the apparatus comprises:
    a prompt module, configured to issue, in response to a preset event, a prompt message prompting a visually impaired person to ask a target object for help by voice;
    an acquisition module, configured to acquire a voice message and extract action guidance information from the acquired voice message;
    a navigation module, configured to navigate according to the action guidance information.
  12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-10.
  13. An electronic device, characterized by comprising:
    a memory on which a computer program is stored;
    a processor configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-10.
PCT/CN2019/118110 2018-12-06 2019-11-13 Blind guidance method and apparatus, storage medium and electronic device WO2020114214A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811489856.4 2018-12-06
CN201811489856.4A CN109764889A (zh) 2018-12-06 2018-12-06 Blind guidance method and apparatus, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2020114214A1 true WO2020114214A1 (zh) 2020-06-11

Family

ID=66451295

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118110 WO2020114214A1 (zh) 2018-12-06 2019-11-13 导盲方法和装置,存储介质和电子设备

Country Status (2)

Country Link
CN (1) CN109764889A (zh)
WO (1) WO2020114214A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109764889A (zh) * 2018-12-06 2019-05-17 深圳前海达闼云端智能科技有限公司 Blind guidance method and apparatus, storage medium and electronic device
CN112669679B (zh) * 2020-11-26 2023-08-15 厦门理工学院 Social interaction device and method for visually impaired people, and mobile terminal
CN113274257A (zh) * 2021-05-18 2021-08-20 北京明略软件系统有限公司 Intelligent guidance method and system for the visually impaired, electronic device and storage medium
CN114125138B (zh) * 2021-10-29 2022-11-01 歌尔科技有限公司 Volume adjustment optimization method and apparatus, electronic device and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1830408A * 2006-03-28 2006-09-13 陈安平 Blind guidance method using positioning technology
KR100847288B1 * 2007-03-16 2008-07-18 주식회사 나루기술 Walking route guidance system for the visually impaired and method therefor
CN105324792A * 2013-04-11 2016-02-10 奥尔德巴伦机器人公司 Method for estimating the angular deviation of a mobile element relative to a reference direction
CN106782492A * 2017-02-17 2017-05-31 安徽金猫数字科技有限公司 Android-based voice navigation system for the blind
CN107071160A * 2017-03-29 2017-08-18 暨南大学 Emergency help method based on a mobile intelligent terminal
CN107080674A * 2017-06-12 2017-08-22 刘家祺 Obstacle detection and prompting device with a help-request function
CN107820562A * 2017-07-18 2018-03-20 深圳前海达闼云端智能科技有限公司 Navigation method, apparatus and electronic device
CN108347646A * 2018-03-20 2018-07-31 百度在线网络技术(北京)有限公司 Multimedia content playing method and apparatus
CN109764889A * 2018-12-06 2019-05-17 深圳前海达闼云端智能科技有限公司 Blind guidance method and apparatus, storage medium and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102973395B * 2012-11-30 2015-04-08 中国舰船研究设计中心 Multifunctional intelligent blind guidance method, processor and device therefor
JP2016112230A * 2014-12-16 2016-06-23 コニカミノルタ株式会社 Guidance support device
CN106377401A * 2016-09-14 2017-02-08 上海高智科技发展有限公司 Blind guidance front-end device, blind guidance back-end device and blind guidance system
CN108458706A * 2017-12-25 2018-08-28 达闼科技(北京)有限公司 Navigation method, apparatus, cloud server and computer program product
CN108387917A * 2018-01-16 2018-08-10 达闼科技(北京)有限公司 Blind guidance method, electronic device and computer program product


Also Published As

Publication number Publication date
CN109764889A (zh) 2019-05-17

Similar Documents

Publication Publication Date Title
US20220057226A1 (en) Navigation methods and apparatus for the visually impaired
WO2020114214A1 (zh) 2020-06-11 Blind guidance method and apparatus, storage medium and electronic device
CN107990899B (zh) 2022-06-17 SLAM-based positioning method and system
CN107145578B (zh) 2020-04-10 Map construction method, apparatus, device and system
CN110019580B (zh) 2022-07-12 Map display method, apparatus, storage medium and terminal
WO2017168899A1 (ja) 2017-10-05 Information processing method and information processing device
CN106292657B (zh) 2019-11-29 Movable robot and patrol path setting method therefor
WO2016131279A1 (zh) 2016-08-25 Motion trajectory recording method and user equipment
WO2021077941A1 (zh) 2021-04-29 Robot positioning method and apparatus, intelligent robot, and storage medium
He et al. Wearable ego-motion tracking for blind navigation in indoor environments
WO2015113330A1 (zh) 2015-08-06 Autonomous navigation system using an image information code to provide correction information
Zhang et al. A slam based semantic indoor navigation system for visually impaired users
WO2022193508A1 (zh) 2022-09-22 Pose optimization method and apparatus, electronic device, computer-readable storage medium, computer program and program product
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
CN106574836A (zh) 2017-04-19 Method for locating a robot in a localization plane
Chen et al. CCNY smart cane
CN109213144A (zh) 人机接口(hmi)架构
KR102190743B1 (ko) 로봇과 인터랙션하는 증강현실 서비스 제공 장치 및 방법
US20220329988A1 (en) System and method for real-time indoor navigation
US20210190529A1 (en) Adaptive, imitative navigational assistance
JP2009178782A (ja) 2009-08-13 Mobile body, environment map generation apparatus, and environment map generation method
JP2023075236A (ja) 2023-05-30 Trajectory display device
CN110631586A (zh) 2019-12-31 Visual-SLAM-based map construction method, navigation system and apparatus
Cai et al. Heads-up lidar imaging with sensor fusion
Waris et al. Indoor navigation approach for the visually impaired

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892276

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19892276

Country of ref document: EP

Kind code of ref document: A1