CN108337876A - Blind-guiding method, device and guide equipment - Google Patents

Blind-guiding method, device and guide equipment

Info

Publication number
CN108337876A
CN108337876A (application CN201780003288.XA)
Authority
CN
China
Prior art keywords
prompting
depth
earphone body
passable
passing direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780003288.XA
Other languages
Chinese (zh)
Inventor
刘兆祥
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Inc
Original Assignee
Cloudminds Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Inc filed Critical Cloudminds Inc
Publication of CN108337876A
Legal status: Pending

Classifications

    • A61H 3/061: Walking aids for blind persons with electronic detecting or guiding means
    • G06T 7/50: Image analysis; depth or shape recovery
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • A61H 2201/0184: Means for preventing injuries by raising an alarm
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present embodiments relate to a blind-guiding method, a blind-guiding device, and blind-guiding equipment. The method includes: acquiring a depth image; determining a passable area based on the depth image, and determining a candidate passing direction according to the passable area; prompting the candidate passing direction; measuring the distance to an obstacle in the candidate passing direction with a ranging sensor, and, if the distance is greater than a preset distance threshold, determining that the candidate passing direction is a passable direction; and prompting the passable direction. By integrating a ranging sensor with depth-image processing, the embodiments achieve high detection accuracy and, by intuitively indicating the passing direction to visually impaired people, guide their safe travel.

Description

Blind guiding method and device and blind guiding equipment
Technical Field
The embodiment of the invention relates to the technical field of intelligent blind guiding, in particular to a blind guiding method, a blind guiding device and blind guiding equipment.
Background
In recent years, assisting visually impaired people to travel on their own has attracted increasing attention. With the development of electronic and computer technologies, intelligent, efficient, and portable electronic blind guiding aids have become a main research object, as such equipment helps improve the independent living ability and quality of life of blind people.
At present, common electronic blind guiding aids include electronic blind guiding sticks, wearable blind guiding devices, and the like; wearable blind guiding devices are increasingly popular with visually impaired people because of their convenience. Most wearable blind guiding devices use a camera to sense the environment and detect ground obstacles with image processing techniques, which generally fall into monocular-vision-based and binocular-vision-based methods.
In the course of studying the prior art, the inventors found at least the following problems in the related art: the distance of an obstacle is difficult to determine with an image processing algorithm based on monocular vision; a method based on binocular vision can determine the obstacle distance by feature-point matching, but reliable feature points are hard to find under low-texture, high-illumination, or low-illumination conditions, so the calculated obstacle distance has a large error. Errors in the distance calculation cause false alarms or missed alarms and affect the safe travel of visually impaired people.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a new blind guiding method, blind guiding device, and blind guiding equipment that can help visually impaired people travel safely.
In a first aspect, an embodiment of the present invention provides a blind guiding method, where the blind guiding method is applied to a blind guiding device, and the method includes:
acquiring a depth image;
determining a passable area based on the depth image, and determining candidate passing directions according to the passable area;
prompting the candidate passing direction;
measuring the distance to the obstacle in the candidate passing direction through a ranging sensor, and if the distance is greater than a preset distance threshold, determining that the candidate passing direction is a passable direction;
and prompting the passable direction.
In a second aspect, an embodiment of the present invention further provides a blind guiding apparatus, where the blind guiding apparatus is applied to blind guiding equipment, and the apparatus includes:
the image acquisition module is used for acquiring a depth image;
a candidate passing direction determining module, configured to determine a passable region based on the depth image, and determine a candidate passing direction according to the passable region;
the passing direction determining module is used for measuring the distance to the obstacle in the candidate passing direction through a ranging sensor, and if the distance is greater than a preset distance threshold, the candidate passing direction is a passable direction;
and the prompting module is used for prompting the candidate passing direction and prompting the passable direction.
In a third aspect, an embodiment of the present invention further provides a blind guiding device, including:
a display screen;
the depth camera device is used for acquiring a depth image;
a distance measuring sensor for measuring a distance to an obstacle;
a controller, the controller comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to the blind guiding method, blind guiding device, and blind guiding equipment provided by the embodiments of the invention, a depth image of the scene in front of the visually impaired person is acquired and analyzed to obtain the passable area in the depth image, and the candidate passing direction is obtained from the passable area. The final passable direction is determined by additionally detecting, with the ranging sensor, the distance of the obstacle in the candidate passing direction, and the passable direction is prompted to the visually impaired person. The embodiments of the invention integrate a distance detection sensor with depth image technology, achieve high detection accuracy, and guide the safe travel of visually impaired people by intuitively indicating the passing direction to them.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a schematic view of an application scenario of the blind guiding method and apparatus of the present invention;
FIG. 2 is a flow chart of one embodiment of a blind guiding method of the present invention;
FIG. 3 is a flow chart of some steps in an embodiment of a blind guiding method of the present invention;
FIG. 4 is an exemplary diagram of a visual cue traffic direction in one embodiment of a blind guiding method of the present invention;
fig. 5 is a schematic structural diagram of one embodiment of the blind guiding device of the present invention;
fig. 6 is a schematic structural diagram of one embodiment of the blind guiding device of the present invention;
fig. 7 is a schematic hardware structure diagram of a blind guiding device according to an embodiment of the present invention;
fig. 8 is a schematic hardware structure diagram of a blind guiding device according to an embodiment of the present invention; and
fig. 9 is a schematic hardware structure diagram of the blind guiding glasses according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The blind guiding method and device provided by the embodiment of the invention are suitable for the application scenario shown in fig. 1, which includes a blind guiding device 10 and a user 20, where the user 20 may be a visually impaired person, such as a person with amblyopia or a totally blind person. The blind guiding device 10 may be a wearable blind guiding device such as a blind guiding helmet or blind guiding glasses, and is used to assist the user 20 in traveling. The blind guiding device 10 includes a depth camera for acquiring a depth image and a ranging sensor for measuring the distance to an obstacle, and may further include a display screen, an earphone, and a wireless communication module for transmitting audio to the earphone. The depth camera may be, for example, a binocular depth camera or a structured-light depth camera; the ranging sensor may be a sensor that measures distance with sound waves, such as an ultrasonic ranging sensor; and the wireless communication module may be, for example, a Bluetooth module.
The blind guiding device 10 acquires a depth image in front of the user 20, analyzes the depth image to obtain a passable area in the depth image, and obtains a candidate passing direction through the passable area. In combination with the detection of the obstacle distance in the candidate traffic direction by the ranging sensor, the final passable direction is determined and prompted to the user 20.
An embodiment of the present invention provides a blind guiding method, which may be executed by blind guiding device 10 in fig. 1, as shown in fig. 2, and the blind guiding method includes:
101: a depth image is acquired.
The depth image may be obtained by a depth camera device disposed in the blind guiding apparatus 10. Each pixel in the depth image records the distance from the depth camera to the nearest obstacle at the corresponding point of the scene in front of the user 20, that is, a depth value.
102: determining a passable area based on the depth image, and determining a candidate passing direction according to the passable area.
An obstacle-free area of the scene in front of the user 20, i.e., a passable area, is obtained based on the depth image, and the traveling direction of the user 20 is determined according to the passable area. Specifically, in some embodiments of the blind guiding method, the determining a passable area based on the depth image includes:
and determining whether one or more sections of continuous pixel points exist in the depth image, wherein the depth of each pixel point in the continuous pixel points is greater than or equal to a preset depth threshold value, the physical width corresponding to the continuous pixel points is greater than or equal to a preset safe physical width threshold value, and if the continuous pixel points exist, the areas corresponding to the continuous pixel points are passable areas.
Presetting a preset depth threshold T1 (corresponding to an obstacle distance threshold) and a safe physical width threshold W (corresponding to a safe avoidance width of a person), selecting a row of pixel points from a depth image, taking the row of pixel points as input, searching a section of continuous pixel points in the row of pixel points, and if the depth value of each pixel point in the section is greater than T1 and the width of a physical space corresponding to the image section is greater than or equal to W, determining that the image section corresponds to a passable area. That is, if the user 20 is present with an area, the obstacles in the area are all sufficiently distant, and the width of the area is sufficiently wide, the area can be regarded as a passable area. Specifically, determining whether the continuous pixel points exist in the depth image includes:
1021: and selecting a row of pixel points from the depth image.
In practical use, the line of pixel points is usually selected from the lower part of the depth image, because the part is closer to the ground, and the measured distance information of the obstacle can more accurately reflect the actual situation of the obstacle.
1022: searching the line of pixel points, selecting one point from the line of pixel points as a first endpoint, obtaining a three-dimensional physical coordinate corresponding to the first endpoint according to the pixel coordinate and the depth value of the first endpoint, obtaining a physical coordinate point which only has a horizontal distance with the three-dimensional physical coordinate and the horizontal distance is the safe width threshold value, converting the physical coordinate point into a corresponding pixel point on the depth image, and taking the pixel point as a second endpoint.
For the line of pixel points, a search may be performed from one endpoint of the line of pixel points to another direction (e.g., a search from the left to the right or a search from the right to the left), or a search may be performed from a midpoint to the directions of two endpoints, respectively.
Taking the left-to-right search as an example, the leftmost point is taken as the first endpoint. Assuming that the pixel coordinate of this point is p0(u, v) and its depth value is depth, its three-dimensional physical coordinate (X, Y, Z) in the depth camera coordinate system can be obtained from the pixel coordinate, the depth value, and the intrinsic matrix of the depth camera. The other boundary point (X + W, Y, Z) of the safe area (i.e., the passable area) is obtained from the safe width threshold W, and this boundary point is mapped back onto the depth image to obtain the second endpoint p1(u + delt, v).
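The back-projection and re-projection just described can be sketched with a standard pinhole camera model. This is an illustrative reconstruction, not the patent's implementation: the intrinsic parameters fx, fy, cx, cy, the safe width W, and the sample pixel values are all invented for the example.

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth value `depth` (metres)
    into a 3-D point (X, Y, Z) in the depth-camera frame."""
    return (u - cx) * depth / fx, (v - cy) * depth / fy, depth

def point_to_pixel(X, Y, Z, fx, fy, cx, cy):
    """Project a camera-frame 3-D point back onto the image plane."""
    return fx * X / Z + cx, fy * Y / Z + cy

# Hypothetical intrinsics and safe width, for illustration only.
fx = fy = 500.0           # focal lengths in pixels
cx, cy = 320.0, 240.0     # principal point
W = 0.8                   # safe physical width threshold, metres

# First endpoint p0 = (100, 300) at depth 2.0 m -> second endpoint p1.
X, Y, Z = pixel_to_point(100, 300, 2.0, fx, fy, cx, cy)
u1, v1 = point_to_pixel(X + W, Y, Z, fx, fy, cx, cy)
delt = u1 - 100           # horizontal pixel offset covering W metres
```

Note that because the shift (X + W, Y, Z) keeps the same depth Z, the offset reduces to delt = fx * W / Z: the closer the scene, the more pixels the safe width occupies.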
1023: if the depth of every pixel between the first endpoint and the second endpoint is greater than the preset depth threshold, the area corresponding to the pixel segment determined by the two endpoints is a passable area; otherwise, the search continues from the point after the first pixel between the two endpoints whose depth is smaller than the preset depth threshold.
Whether the depth of each pixel between p0 and p1 is greater than the preset depth threshold T1 is judged point by point. If so, the area between p0 and p1 is regarded as a passable area; otherwise the area is discarded, and step 1022 is executed again from the point after the pixel that failed the threshold condition, until the search of the row is finished.
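Steps 1021 to 1023 can be sketched as a single left-to-right scan over one row of depth values. This is a simplified, hypothetical implementation: it approximates the pixel span needed for the safe width W at depth Z by fx * W / Z instead of the full coordinate transform, and the thresholds and intrinsics are illustrative, not values from the patent.

```python
def find_passable_segments(depth_row, t1, w_safe, fx):
    """Scan one row of depth values (metres) and return (start, end)
    pixel index pairs whose depths all exceed the depth threshold t1
    and whose physical width is at least w_safe.

    The pixel span covering w_safe metres at depth z is approximately
    fx * w_safe / z under a pinhole model."""
    segments = []
    i, n = 0, len(depth_row)
    while i < n:
        if depth_row[i] <= t1:      # obstacle too close: skip this pixel
            i += 1
            continue
        # required pixel span for the safe width at this depth (step 1022)
        need = int(round(fx * w_safe / depth_row[i]))
        j = i
        while j < n and depth_row[j] > t1:
            j += 1                  # extend the run of far-enough pixels
        if j - i >= need:           # wide enough -> passable (step 1023)
            segments.append((i, j - 1))
        i = j                       # resume after the blocking pixel
    return segments
```

For example, with a row that is clear except for two near pixels in the middle, the scan yields two candidate segments, one on each side of the obstruction.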
Specifically, determining a candidate passing direction according to the passable area includes:
if the number of the passable areas is one, determining that the direction corresponding to the passable area is a candidate passing direction;
and if the passable areas are at least two, determining the direction corresponding to the passable area with the minimum steering angle as the candidate passing direction.
If the determined passable areas are one, the direction corresponding to the area is taken as a candidate passing direction, and if the determined passable areas are multiple, the direction corresponding to the area with the smallest steering angle is selected from all the passable areas to be taken as the candidate passing direction.
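One plausible reading of the minimum-steering-angle rule is sketched below, assuming the steering angle of a passable segment is measured from the horizontal offset of its centre pixel relative to the image centre; the patent does not fix the exact formula, so this choice and the numeric values are illustrative.

```python
import math

def candidate_direction(segments, cx, fx):
    """From passable pixel segments [(start, end), ...], pick the one
    whose centre requires the smallest steering angle, and return that
    angle in degrees (negative = turn left, positive = turn right).
    Returns None when there is no passable segment."""
    if not segments:
        return None
    def angle(seg):
        centre = (seg[0] + seg[1]) / 2.0
        # horizontal offset from the optical axis, converted to an angle
        return math.degrees(math.atan2(centre - cx, fx))
    return min((angle(s) for s in segments), key=abs)
```

With two segments on either side of centre, the function keeps the one needing the smaller turn, matching the rule that the area with the smallest steering angle is the candidate passing direction.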
103: and prompting the candidate passing direction.
104: and measuring the distance of the barrier in the candidate passing direction through a distance measuring sensor, and if the distance is greater than a preset distance threshold value, determining that the candidate passing direction is a passable direction.
In actual use, after the candidate passing direction is determined, the blind guiding device 10 prompts the user 20 with the candidate passing direction, and the user 20 turns toward it according to the prompt. The ranging sensor then measures the distance of the obstacle in that direction. If the obstacle distance is large enough, i.e., greater than the preset distance threshold, the way ahead is passable; if it is smaller than the preset distance threshold, the way ahead is impassable, and a passing direction must be searched for again through the depth image.
In some transparent scenes, such as a glass door or a floor-to-ceiling window, the laser emitted by the depth camera can pass through the transparent object, causing distance measurement errors. Measuring the obstacle distance with the ranging sensor as well avoids these errors and eliminates misdetections caused by this limitation of the depth camera.
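The confirmation step can be expressed as a small decision function; the 1.5 m default is an invented placeholder for the preset distance threshold, not a value from the patent.

```python
def confirm_direction(sonar_distance, threshold=1.5):
    """Return True if the ultrasonic reading (metres) in the candidate
    direction clears the preset distance threshold, i.e. the candidate
    passing direction is confirmed as passable. A False result sends
    the system back to the depth image to search for a new candidate,
    e.g. when a glass door fooled the depth camera but not the sonar."""
    return sonar_distance > threshold
```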
105: and prompting the passable direction.
In steps 103 and 105, the passing direction can be prompted visually or by audio. The visual prompt modes include:
and obtaining a visual enhancement image based on the depth image, displaying the visual enhancement image, and prompting the passing direction in the visual enhancement image.
The visually enhanced image may use AR technology; please refer to fig. 4. Left and right views with binocular parallax are generated from the monocular depth image. In practical applications, a first identifier and a second identifier may be highlighted in the visually enhanced image to indicate the passing direction. For example, the first identifier is displayed within the second identifier to indicate the forward direction; the first identifier is displayed on the left side of the second identifier to prompt a left turn; the first identifier is displayed on the right side of the second identifier to prompt a right turn; and the first identifier is hidden with only the second identifier displayed to prompt that there is no passable area ahead, in which case the user 20 needs to rotate left and right to find one. As shown in fig. 4, the first identifier and the second identifier may be represented by a highlighted circle and a highlighted box, respectively.
The above-mentioned visual prompt mode may be adopted when the user 20 has low vision, and the audio prompt mode may be adopted when the user 20 is totally blind; of course, a user with low vision may also use the audio prompt mode, or a combination of visual and audio prompts. The audio prompt method comprises the following steps:
and prompting the passing direction by using audio. In particular, the user may be prompted by playing audio at the ear of the user 20. For example, audio is played simultaneously to the left and right ears of the user 20 at a first frequency to indicate a forward direction, audio is played to the left ear of the user 20 at a second frequency to indicate a left turn, audio is played to the right ear of the user 20 at a second frequency to indicate a right turn, audio is played simultaneously to the left and right ears of the user 20 at a second frequency to indicate a no passable area in front, requiring the user to rotate left and right to find a passable area. The frequency value of the first frequency is different from the frequency value of the second frequency, and the frequency value of the second frequency can be larger than the frequency value of the first frequency, for example, slow dripping sound is played on the left ear and the right ear when the user advances, rapid dripping sound is played on the left ear or the right ear when the user turns left or right, and rapid dripping sound is played on the left ear and the right ear when the user does not have a passable area in front. Optionally, the angle of the right or left turn may be matched to the frequency value of the second frequency, for example, when a large angle rotation is required, a very sharp droplet sound is played, and when a small angle rotation is required, a general sharp droplet sound is played.
In other embodiments, silence in both ears of the user 20 may prompt that the user can proceed, while audio played to the left ear and/or the right ear prompts the user 20 to rotate left and right to find the advancing direction. That is, if audio is playing at an ear, the user 20 cannot advance and needs to rotate left or right to find the advancing direction; when no audio is playing, the current direction is the advancing direction and the user 20 can advance.
According to the embodiment of the invention, the passable area in the depth image is obtained by obtaining the depth image in front of the person with visual impairment and analyzing the depth image, and the candidate passing direction is obtained through the passable area. And determining the final passable direction by combining the detection of the distance of the obstacle in the candidate passing direction by the ranging sensor, and prompting the passable direction to the visually-impaired people. The embodiment of the invention integrates the distance detection sensor through the depth image technology, has high detection accuracy, and guides the safe trip of the visually impaired people by intuitively indicating the passing direction to the visually impaired people.
Correspondingly, an embodiment of the present invention further provides a blind guiding apparatus, where the blind guiding apparatus is used in blind guiding device 10 shown in fig. 1, and as shown in fig. 5, the blind guiding apparatus 200 includes:
an image acquisition module 201, configured to acquire a depth image;
a candidate passing direction determining module 202, configured to determine a passable region based on the depth image, and determine a candidate passing direction according to the passable region;
a passing direction determining module 203, configured to measure, by a ranging sensor, a distance of an obstacle in the candidate passing direction, where if the distance is greater than a preset distance threshold, the candidate passing direction is a passable direction;
and the prompting module 204 is used for prompting the candidate passing direction and prompting the passable direction.
According to the embodiment of the invention, the passable area in the depth image is obtained by obtaining the depth image in front of the person with visual impairment and analyzing the depth image, and the candidate passing direction is obtained through the passable area. And determining the final passable direction by combining the detection of the distance of the obstacle in the candidate passing direction by the ranging sensor, and prompting the passable direction to the visually-impaired people. The embodiment of the invention integrates the distance detection sensor through the depth image technology, has high detection accuracy, and guides the safe trip of the visually impaired people by intuitively indicating the passing direction to the visually impaired people.
Optionally, in some embodiments of the apparatus, referring to fig. 6, the prompting module 204 includes:
the first prompting submodule 2041 is configured to obtain a visually enhanced image based on the depth image, display the visually enhanced image, and prompt a passing direction in the visually enhanced image;
and/or,
and a second prompting submodule 2042 for prompting the passing direction by using audio.
Optionally, in some embodiments of the apparatus, the first prompting sub-module 2041 is specifically configured to:
highlighting a first identifier and a second identifier in the visually enhanced image;
displaying the first identifier within the second identifier to prompt that the current direction is a passing direction;
displaying the first identifier on the left side of the second identifier to prompt a left turn;
displaying the first identifier on the right side of the second identifier to prompt a right turn;
hiding the first identifier and displaying only the second identifier to prompt that there is no passable direction ahead.
Optionally, in some embodiments of the apparatus, the second prompting sub-module 2042 is specifically configured to:
the method comprises the steps of playing audio to a first earphone body and a second earphone body at a first frequency at the same time to prompt the current direction as a passing direction, wherein the first earphone body is used for playing the audio to the left ear of a user, and the second earphone body is used for playing the audio to the right ear of the user;
playing audio at a second frequency to the first earphone body to prompt a left turn;
playing audio at a second frequency to the second earphone body to prompt a right turn;
and simultaneously playing audio at a second frequency to the first earphone body and the second earphone body so as to prompt that no direction is available in front.
Optionally, in some embodiments of the apparatus, the second prompting sub-module is specifically configured to:
playing audio to the first earphone body and/or the second earphone body to prompt the current direction to be a non-passing direction;
and stopping playing audio to the first earphone body and the second earphone body so as to prompt that the current direction is the passing direction.
Optionally, in some embodiments of the apparatus, the frequency value of the second frequency is greater than the frequency value of the first frequency, and the frequency value of the second frequency is matched to a rotation angle of a left or right turn.
Optionally, in some embodiments of the apparatus, referring to fig. 6, the candidate passing direction determining module 202 includes a first candidate passing direction determining sub-module 2021, configured to:
and determining whether one or more sections of continuous pixel points exist in the depth image, wherein the depth of each pixel point in the continuous pixel points is greater than or equal to a preset depth threshold value, the physical width corresponding to the continuous pixel points is greater than or equal to a preset safe physical width threshold value, and if the continuous pixel points exist, the areas corresponding to the continuous pixel points are passable areas.
Optionally, in some embodiments of the apparatus, the first candidate passing direction determining sub-module 2021 is specifically configured to:
selecting a row of pixel points from the depth image;
searching the row of pixel points: selecting one point from the row as a first endpoint, obtaining the three-dimensional physical coordinate corresponding to the first endpoint according to the pixel coordinate and the depth value of the first endpoint, obtaining a physical coordinate point that differs from the three-dimensional physical coordinate only by a horizontal distance equal to the safe width threshold, converting the physical coordinate point into the corresponding pixel point on the depth image, and taking that pixel point as a second endpoint;
if the depth of every pixel between the first endpoint and the second endpoint is greater than the preset depth threshold, the area corresponding to the pixel segment determined by the two endpoints is a passable area; otherwise, the search continues from the point after the first pixel between the two endpoints whose depth is smaller than the preset depth threshold.
Optionally, in some embodiments of the apparatus, the candidate passing direction determining module 202 includes a second candidate passing direction determining sub-module 2022, configured to:
if the number of the passable areas is one, determining that the direction corresponding to the passable area is a candidate passing direction;
and if the passable areas are at least two, determining the direction corresponding to the passable area with the minimum steering angle as the candidate passing direction.
It should be noted that the blind guiding device can execute the blind guiding method provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the method. For technical details not described in detail in the embodiment of the blind guiding device, reference may be made to the blind guiding method provided in the embodiments of the present invention.
Fig. 7 is a schematic hardware structure diagram of a blind guide device 10 according to an embodiment of the present invention, and as shown in fig. 7, the blind guide device 10 includes:
a display screen 11, a depth camera 12 for acquiring a depth image, a distance measuring sensor 13 for detecting an obstacle distance, and a controller 14. The depth camera 12 is, for example, a binocular depth camera or a structured-light depth camera, and the distance measuring sensor 13 is, for example, an ultrasonic ranging sensor.
The controller 14 includes one or more processors 141 and a memory 142, and one processor 141 is illustrated in fig. 7 as an example.
The processor 141 and the memory 142 may be connected by a bus or other means, and fig. 7 illustrates the connection by a bus as an example.
The memory 142, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the blind guiding method in the embodiment of the present invention (for example, the image acquisition module 201, the candidate passing direction determination module 202, the passing direction determination module 203, and the prompting module 204 shown in fig. 5). The processor 141 executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 142, that is, implements the blind guiding method of the above-described method embodiment.
The memory 142 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the blind guide device, and the like. Further, the memory 142 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 142 optionally includes memory located remotely from processor 141, which may be connected to a blind guide device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules stored in the memory 142, when executed by the one or more processors 141, perform the blind guiding method in any of the above-described method embodiments, for example, performing method steps 101 to 105 in fig. 2 and method steps 1021 to 1023 in fig. 3, and realizing the functions of modules 201-204 in fig. 5, modules 201-204 in fig. 6, sub-modules 2021-2022, and sub-modules 2041-2042.
The blind guiding device 10 can be externally connected with an earphone through a jack to play audio beside the ear of the user 20. Optionally, in some embodiments of the blind guiding device 10, as shown in fig. 8, the blind guiding device 10 may further integrate an earphone 15 and a wireless communication module 16. The wireless communication module 16 is configured to transmit audio to the earphone 15 for playing, where the earphone 15 includes a first earphone body and a second earphone body, the first earphone body being configured to play audio to the left ear of the user 20 and the second earphone body to the right ear of the user 20. The wireless communication module 16 may be a Bluetooth communication module, and the earphone 15 may be a bone conduction Bluetooth earphone.
The blind guiding device 10 may be a wearable device such as a blind guiding helmet or blind guiding glasses; fig. 9 shows the case where the blind guiding device 10 is a pair of blind guiding glasses. The blind guiding glasses comprise a first lens 18a and a second lens 18b; the first display screen 11a is arranged on the first lens 18a, and the second display screen 11b is arranged on the second lens 18b. The depth camera 12 includes a first camera component 12a and a second camera component 12b, the first camera component 12a being located on one side of the first lens 18a and the second camera component 12b on one side of the second lens 18b. The first camera component 12a may be the transmitting component of the depth camera, and the second camera component 12b the receiving component. The distance measuring sensor 13 is located above the first lens 18a and the second lens 18b and spans both lenses. The controller 14 is arranged in either of the glasses legs, as is the wireless communication module 16, and a power supply can be arranged in the glasses legs to supply power to the whole pair of blind guiding glasses. The earphone 15 includes a first earphone body 151 and a second earphone body 152.
This product can execute the method provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the method. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present invention.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors (for example, the processor 141 in fig. 7), enable the one or more processors to perform the blind guiding method in any of the above method embodiments, for example, performing method steps 101 to 105 in fig. 2 and method steps 1021 to 1023 in fig. 3, and realizing the functions of modules 201-204 in fig. 5, modules 201-204 in fig. 6, sub-modules 2021-2022, and sub-modules 2041-2042.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware under the control of computer program instructions; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (23)

1. A blind guiding method is applied to blind guiding equipment, and is characterized by comprising the following steps:
acquiring a depth image;
determining a passable area based on the depth image, and determining candidate passing directions according to the passable area;
prompting the candidate passing direction;
measuring the distance to an obstacle in the candidate passing direction with a distance measuring sensor, and if the distance is greater than a preset distance threshold, determining that the candidate passing direction is a passable direction;
and prompting the passable direction.
2. The method of claim 1, wherein prompting a direction of traffic comprises:
obtaining a visual enhancement image based on the depth image, displaying the visual enhancement image, and prompting the passing direction in the visual enhancement image;
and/or,
and prompting the passing direction by using audio.
3. The method of claim 2, wherein said prompting the traffic direction in the visually enhanced image comprises:
highlighting a first identifier and a second identifier in the visually enhanced image;
displaying the first identifier within the second identifier to prompt that the current direction is a passing direction;
displaying the first identifier on the left side of the second identifier to prompt a left turn;
displaying the first identifier on the right side of the second identifier to prompt a right turn;
hiding the first identifier and displaying the second identifier to prompt that no passable direction exists ahead.
4. The method of claim 2, wherein the prompting the traffic direction with audio comprises:
the method comprises the steps of playing audio to a first earphone body and a second earphone body at a first frequency at the same time to prompt the current direction as a passing direction, wherein the first earphone body is used for playing the audio to the left ear of a user, and the second earphone body is used for playing the audio to the right ear of the user;
playing audio at a second frequency to the first earphone body to prompt a left turn;
playing audio at a second frequency to the second earphone body to prompt a right turn;
and simultaneously playing audio at a second frequency to the first earphone body and the second earphone body so as to prompt that no direction is available in front.
5. The method of claim 2, wherein the prompting the traffic direction with audio comprises:
playing audio to the first earphone body and/or the second earphone body to prompt that the current direction is a non-passing direction;
and stopping playing audio to the first earphone body and the second earphone body so as to prompt that the current direction is the passing direction.
6. The method of claim 4, wherein the frequency value of the second frequency is greater than the frequency value of the first frequency, and the frequency value of the second frequency matches the angle of the left or right turn.
7. The method of claim 1, wherein determining a passable region based on the depth image comprises:
and determining whether one or more sections of continuous pixel points exist in the depth image, wherein the depth of each pixel point in the continuous pixel points is greater than or equal to a preset depth threshold value, the physical width corresponding to the continuous pixel points is greater than or equal to a preset safe physical width threshold value, and if the continuous pixel points exist, the areas corresponding to the continuous pixel points are passable areas.
8. The method of claim 7, wherein the determining whether one or more consecutive pixel points exist in the depth image, a depth of each pixel point in the consecutive pixel points is greater than or equal to a preset depth threshold, and a physical width corresponding to the consecutive pixel points is greater than or equal to a preset safe physical width threshold comprises:
selecting a row of pixel points from the depth image;
searching the line of pixel points: selecting one point from the line of pixel points as a first endpoint, obtaining a three-dimensional physical coordinate corresponding to the first endpoint according to the pixel coordinate and the depth value of the first endpoint, obtaining a physical coordinate point that differs from the three-dimensional physical coordinate only by a horizontal distance equal to the safe width threshold, converting the physical coordinate point into a corresponding pixel point on the depth image, and taking that pixel point as a second endpoint;
if the depth of every pixel point between the first endpoint and the second endpoint is greater than the preset depth threshold, the area corresponding to the pixel segment determined by the first endpoint and the second endpoint is a passable area; otherwise, the search continues from the point following the first pixel point between the first endpoint and the second endpoint whose depth is smaller than the preset depth threshold.
9. The method of claim 1, wherein determining a candidate traffic direction from the passable region comprises:
if the number of the passable areas is one, determining that the direction corresponding to the passable area is a candidate passing direction;
and if the passable areas are at least two, determining the direction corresponding to the passable area with the minimum steering angle as the candidate passing direction.
10. A blind guiding device is applied to blind guiding equipment, and is characterized by comprising:
the image acquisition module is used for acquiring a depth image;
a candidate passing direction determining module, configured to determine a passable region based on the depth image, and determine a candidate passing direction according to the passable region;
the passing direction determining module is used for measuring the distance to an obstacle in the candidate passing direction with a distance measuring sensor, and if the distance is greater than a preset distance threshold, the candidate passing direction is a passable direction;
and the prompting module is used for prompting the candidate passing direction and prompting the passable direction.
11. The apparatus of claim 10, wherein the prompting module comprises:
the first prompting submodule is used for obtaining a visual enhancement image based on the depth image, displaying the visual enhancement image and prompting a passing direction in the visual enhancement image;
and/or,
and the second prompting submodule is used for prompting the passing direction by using audio.
12. The apparatus of claim 11, wherein the first prompt submodule is specifically configured to:
highlighting a first identifier and a second identifier in the visually enhanced image;
displaying the first identifier within the second identifier to prompt that the current direction is a passing direction;
displaying the first identifier on the left side of the second identifier to prompt a left turn;
displaying the first identifier on the right side of the second identifier to prompt a right turn;
hiding the first identifier and displaying the second identifier to prompt that no passable direction exists ahead.
13. The apparatus of claim 11, wherein the second prompt submodule is specifically configured to:
playing audio at a first frequency to a first earphone body and a second earphone body simultaneously to prompt that the current direction is a passing direction, wherein the first earphone body is used for playing the audio to the left ear of a user, and the second earphone body is used for playing the audio to the right ear of the user;
playing audio at a second frequency to the first earphone body to prompt a left turn;
playing audio at a second frequency to the second earphone body to prompt a right turn;
and simultaneously playing audio at a second frequency to the first earphone body and the second earphone body so as to prompt that no direction is available in front.
14. The apparatus of claim 11, wherein the second prompt submodule is specifically configured to:
playing audio to the first earphone body and/or the second earphone body to prompt the current direction to be a non-passing direction;
and stopping playing audio to the first earphone body and the second earphone body so as to prompt that the current direction is the passing direction.
15. The apparatus of claim 13, wherein the frequency value of the second frequency is greater than the frequency value of the first frequency, and the frequency value of the second frequency matches the angle of the left or right turn.
16. The apparatus of claim 10, wherein the candidate traffic direction determining module comprises a first candidate traffic direction determining sub-module configured to:
and determining whether one or more sections of continuous pixel points exist in the depth image, wherein the depth of each pixel point in the continuous pixel points is greater than or equal to a preset depth threshold value, the physical width corresponding to the continuous pixel points is greater than or equal to a preset safe physical width threshold value, and if the continuous pixel points exist, the areas corresponding to the continuous pixel points are passable areas.
17. The apparatus of claim 16, wherein the first candidate traffic direction determining sub-module is specifically configured to:
selecting a row of pixel points from the depth image;
searching the line of pixel points: selecting one point from the line of pixel points as a first endpoint, obtaining a three-dimensional physical coordinate corresponding to the first endpoint according to the pixel coordinate and the depth value of the first endpoint, obtaining a physical coordinate point that differs from the three-dimensional physical coordinate only by a horizontal distance equal to the safe width threshold, converting the physical coordinate point into a corresponding pixel point on the depth image, and taking that pixel point as a second endpoint;
if the depth of every pixel point between the first endpoint and the second endpoint is greater than the preset depth threshold, the area corresponding to the pixel segment determined by the first endpoint and the second endpoint is a passable area; otherwise, the search continues from the point following the first pixel point between the first endpoint and the second endpoint whose depth is smaller than the preset depth threshold.
18. The apparatus of claim 10, wherein the candidate traffic direction determining module comprises a second candidate traffic direction determining sub-module configured to:
if the number of the passable areas is one, determining that the direction corresponding to the passable area is a candidate passing direction;
and if the passable areas are at least two, determining the direction corresponding to the passable area with the minimum steering angle as the candidate passing direction.
19. A blind guiding apparatus, comprising:
a display screen;
the depth camera device is used for acquiring a depth image;
a distance measuring sensor for measuring a distance to an obstacle;
a controller, the controller comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. The blind guide apparatus of claim 19, further comprising an earphone and a wireless communication module, wherein the wireless communication module is configured to transmit audio to the earphone for playing, and the earphone comprises a first earphone body and a second earphone body, the first earphone body is configured to play audio to the left ear of the user, and the second earphone body is configured to play audio to the right ear of the user.
21. The blind guiding device according to claim 19 or 20, wherein the blind guiding device is a pair of blind guiding glasses comprising a first lens and a second lens;
the display screen comprises a first display screen and a second display screen, the first display screen is arranged on the first lens, and the second display screen is arranged on the second lens;
the depth camera device comprises a first camera component and a second camera component, wherein the first camera component is positioned on one side of the first lens, and the second camera component is positioned on one side of the second lens;
the distance measuring sensor is positioned above the first lens and the second lens and spans the first lens and the second lens;
the controller is arranged in any one of the glasses legs.
22. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a blind guiding device, cause the blind guiding device to perform the method of any one of claims 1-9.
23. A computer program product, characterized in that the computer program product comprises a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by a blind guiding device, cause the blind guiding device to perform the method of any one of claims 1-9.
CN201780003288.XA 2017-12-07 2017-12-07 Blind-guiding method, device and guide equipment Pending CN108337876A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/115002 WO2019109301A1 (en) 2017-12-07 2017-12-07 Blind guidance method and apparatus, and blind guidance device

Publications (1)

Publication Number Publication Date
CN108337876A true CN108337876A (en) 2018-07-27

Family

ID=62924215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780003288.XA Pending CN108337876A (en) 2017-12-07 2017-12-07 Blind-guiding method, device and guide equipment

Country Status (2)

Country Link
CN (1) CN108337876A (en)
WO (1) WO2019109301A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110470307A (en) * 2019-08-28 2019-11-19 中国科学院长春光学精密机械与物理研究所 A kind of visually impaired patient navigation system and method
CN112640447A (en) * 2018-08-30 2021-04-09 韦奥机器人股份有限公司 Depth sensing computer vision system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104055657A (en) * 2014-06-18 2014-09-24 浙江师范大学 Blind guide crutch based on Kinect and realization method of blind guide crutch
CN104287946A (en) * 2014-10-24 2015-01-21 中国科学院计算技术研究所 Device and method for prompting blind persons to avoid obstacles
CN104899869A (en) * 2015-05-14 2015-09-09 浙江大学 Plane and barrier detection method based on RGB-D camera and attitude sensor
CN106109188A (en) * 2015-05-08 2016-11-16 丁麒木 Active hand push guide dolly
CN106855411A (en) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A kind of robot and its method that map is built with depth camera and obstacle avoidance system
CN107007437A (en) * 2017-03-31 2017-08-04 北京邮电大学 Interactive blind person's householder method and equipment
CN107225570A (en) * 2017-04-20 2017-10-03 深圳前海勇艺达机器人有限公司 The barrier-avoiding method and device of intelligent robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012040703A2 (en) * 2010-09-24 2012-03-29 Mesa Imaging Ag White cane with integrated electronic travel aid using 3d tof sensor
CN106214437B (en) * 2016-07-22 2018-05-29 杭州视氪科技有限公司 A kind of intelligent blind auxiliary eyeglasses


Also Published As

Publication number Publication date
WO2019109301A1 (en) 2019-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180727