WO2022068193A1 - Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium - Google Patents
Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium
- Publication number
- WO2022068193A1 (PCT/CN2021/091150)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene image
- information
- scene
- component
- wearable device
- Prior art date
Classifications
- A61H3/061 — Walking aids for blind persons with electronic detecting or guiding means
- A61F9/08 — Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
- G06F3/16 — Sound input; sound output
- G06T1/0007 — Image acquisition
- G06T7/50 — Depth or shape recovery
- H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
- H04N23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
- H04N25/705 — Pixels for depth measurement, e.g. RGBZ
- H04N7/18 — Closed-circuit television [CCTV] systems
- H04W4/024 — Guidance services
- A61H2201/165 — Wearable interfaces
- A61H2201/50 — Control means thereof
- A61H2201/5048 — Audio interfaces, e.g. voice or music controlled
- A61H2201/5058 — Sensors or detectors
Definitions
- the present disclosure is based on, and claims priority to, the Chinese patent application with application number 202011060870.X, filed on September 30, 2020 and entitled "Wearable device, intelligent guidance method and apparatus, guidance system", the entire contents of which are incorporated into the present disclosure by reference.
- the present application relates to the technical field of image processing, and in particular to a wearable device, an intelligent guidance method and apparatus, a guidance system, and a storage medium.
- cameras used in existing blind-guidance technology adapt poorly to the environment and cannot handle scenes with large changes in light intensity; when the light intensity changes greatly, rich environmental information cannot be obtained. In addition, obstacles that lack texture, have smooth or highly transparent surfaces, or are far away cannot be effectively detected. Existing blind-guidance technology therefore cannot detect obstacles accurately and cannot provide highly safe guidance services for people with visual impairments.
- the embodiments of the present application provide at least a wearable device-based intelligent guidance method and device, a wearable device, a guidance system, and a storage medium, so as to improve the accuracy of depth information detection, and the accuracy and safety of guidance.
- an embodiment of the present application provides an intelligent guidance method based on a wearable device.
- the wearable device includes a first visible light imaging component, a second visible light imaging component, and a depth imaging component, and the method includes:
- when the light intensity is high, that is, when the brightness of the first scene image is high, using the second scene image captured by the second visible light imaging component to determine the depth information of objects in the scene;
- when the light intensity is low, that is, when the brightness of the first scene image is low, using the second scene image captured by the depth imaging component to determine the depth information of objects in the scene;
- generating guidance information for the target object wearing the wearable device based on the depth information.
- acquiring the second scene image captured by the second visible light imaging component in response to the brightness of the first scene image satisfying a preset brightness includes:
- sending a first exposure instruction to the second visible light imaging component, and acquiring the second scene image captured by the second visible light imaging component under exposure control based on the first exposure instruction;
- acquiring the second scene image captured by the depth imaging component in response to the brightness of the first scene image not satisfying the preset brightness includes:
- controlling the second visible light imaging component to perform exposure when the light intensity is high, and controlling the depth imaging component to perform exposure when the light intensity is low, so that the imaging component used to capture the second scene image is switched according to the ambient light intensity.
- collecting the second scene image used to determine object depth in this way actively adapts to changes in light intensity and obtains richer environmental information; at the same time, it enables detection of obstacles lacking texture, improving the accuracy of obstacle detection and the safety of guidance.
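The brightness-based switching described above can be sketched as follows; the threshold value, function name, and string labels are illustrative assumptions, not details from the disclosure:

```python
# Sketch of the light-intensity switching rule. The preset brightness
# threshold and the component labels are assumptions for illustration.
BRIGHTNESS_THRESHOLD = 80  # assumed preset brightness on a 0-255 scale

def select_depth_source(first_image_brightness):
    """Choose which component supplies the second scene image."""
    if first_image_brightness >= BRIGHTNESS_THRESHOLD:
        # Strong illumination: use the second visible light imaging
        # component (binocular visible-light depth estimation).
        return "second_visible_light"
    # Weak illumination: fall back to the depth (e.g. TOF) imaging component.
    return "depth"
```

A bright scene (e.g. brightness 120) would select the visible light component, while a dark scene (e.g. 30) would select the depth component.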
- determining the depth information of objects in the scene based on the second scene image includes:
- determining the depth information of objects in the scene based on the first scene image, the second scene image, the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component.
- the depth information of the objects in the scene can be more accurately determined by combining the first scene image collected by the first visible light imaging component and the second scene image collected by the second visible light imaging component.
- determining the depth information of objects in the scene based on the second scene image includes:
- determining the depth information of target pixels in the second scene image.
- using the depth image captured by the depth camera, that is, the depth information of the pixels in the second scene image captured by the depth imaging component, the depth information of objects in the scene can be determined more accurately; obstacles lacking texture can be detected, and the accuracy of obstacle detection under weak illumination is improved.
- generating guidance information for the target object wearing the wearable device based on the depth information includes:
- generating prompt information, where the prompt information is used to prompt the depth information to the target object wearing the wearable device.
- prompting the target object with the depth information of objects in the scene can effectively guide the target object's movements and improve guiding efficiency and guiding safety.
- it also includes:
- generating guidance information for the target object wearing the wearable device based on the depth information includes:
- generating the guidance information for the target object wearing the wearable device.
- in this way, guidance information with a larger amount of information and richer content can be generated for the target object, further improving the efficiency and safety of guidance.
- the wearable device further includes an ultrasonic detection component
- the method also includes:
- updating the depth information of objects in the scene based on the detection result of the ultrasonic detection component.
- when the second scene image captured by the second visible light imaging component or the depth imaging component cannot accurately determine the depth information of objects in the scene, or when it is detected that the first scene image includes transparent objects, the ultrasonic detection result is used.
- using an ultrasonic detection component for depth detection improves the applicability of the wearable device and allows the depth information of objects to be detected more accurately in more complex environments.
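As a rough illustration of this fallback, assuming hypothetical flags for camera-depth validity and transparency detection (neither flag name comes from the disclosure):

```python
def update_depth_with_ultrasound(camera_depth_m, ultrasonic_depth_m,
                                 camera_depth_valid, transparent_detected):
    """Return the depth to use for an obstacle: the ultrasonic measurement
    replaces the camera-derived depth when the camera depth is unreliable
    or a transparent object was detected in the first scene image."""
    if not camera_depth_valid or transparent_detected:
        return ultrasonic_depth_m
    return camera_depth_m
```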
- the wearable device further includes a posture measurement component
- the method also includes:
- posture correction prompt information is generated.
- based on the posture information of the wearable device measured by the posture measurement component, posture correction prompt information for reminding the target object to correct his or her posture can be generated when the target object is in the first preset posture, so that the wearable device can capture the objects that affect the target object's travel, further improving the accuracy of the guidance information and the safety of guidance.
- the method further includes:
- generating guidance information for the target object wearing the wearable device based on the depth information includes:
- generating guidance information and/or posture correction prompt information for the target object wearing the wearable device.
- the orientation information of objects in the scene relative to the wearable device is converted into the orientation information of those objects relative to the wearable device in the second preset posture, that is, azimuth information that is effective for the target object's movement;
- using this effective azimuth information, more accurate and effective guidance information can be generated.
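One way to picture this conversion, treating the posture offset as a single yaw angle (a simplification; the disclosure does not fix an angle convention):

```python
def effective_azimuth_deg(object_azimuth_deg, device_yaw_offset_deg):
    """Convert an object's azimuth measured relative to the (possibly
    tilted) wearable device into an azimuth relative to the device in the
    second preset posture, by removing the measured posture offset.
    The result is normalized to [-180, 180)."""
    a = object_azimuth_deg - device_yaw_offset_deg
    return (a + 180.0) % 360.0 - 180.0
```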
- the wearable device further includes a sound-emitting component
- the method also includes:
- voice guidance information is generated and sent to the sound-emitting component, so that the sound-emitting component plays the voice guidance information to the target object.
- the voice guidance information played by the sound-emitting component can effectively guide the target object to avoid obstacles, improving guiding efficiency and safety.
- the wearable device further includes a frequency dividing circuit
- acquiring the second scene image captured by the second visible light imaging component in response to the brightness of the first scene image satisfying the preset brightness includes:
- sending an exposure command to the frequency dividing circuit; the frequency dividing circuit performs frequency division processing on the received exposure command, the third exposure instruction obtained by the frequency division processing is sent to the second visible light imaging component, and the second scene image captured by the second visible light imaging component under exposure control based on the third exposure instruction is acquired;
- acquiring the second scene image captured by the depth imaging component in response to the brightness of the first scene image not satisfying the preset brightness includes:
- sending an exposure command to the frequency dividing circuit; the frequency dividing circuit performs frequency division processing on the received exposure command, the fourth exposure instruction obtained by the frequency division processing is sent to the depth imaging component, and the second scene image captured by the depth imaging component under exposure control based on the fourth exposure instruction is acquired.
- the frequency dividing circuit performs frequency division processing on the exposure command, and the exposure instructions obtained by the frequency division processing control the second visible light imaging component or the depth imaging component to capture images, so that imaging components with different frame rates can be controlled to expose synchronously, saving energy.
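A minimal sketch of frequency division on a shared exposure-command stream; the divisor values and component labels are assumptions used only to show how one command stream can drive components with different frame rates:

```python
def divided_exposure_targets(exposure_tick, visible_divisor=1, depth_divisor=2):
    """Return which imaging components receive an exposure instruction on
    this tick of the master exposure command. Dividing the command stream
    lets components with different frame rates expose synchronously."""
    targets = []
    if exposure_tick % visible_divisor == 0:
        targets.append("second_visible_light")
    if exposure_tick % depth_divisor == 0:
        targets.append("depth")
    return targets
```

With these example divisors, the visible light component is triggered every tick while the depth component is triggered every second tick.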
- an embodiment of the present application provides an intelligent guidance device based on a wearable device, the wearable device includes a first visible light camera component, a second visible light camera component, and a depth camera component, and the device includes:
- a first image acquisition module configured to acquire the first scene image collected by the first visible light imaging component
- a brightness detection module configured to detect the brightness of the first scene image
- a second image acquisition module configured to acquire a second scene image captured by the second visible light imaging component in response to the brightness of the first scene image satisfying a preset brightness, and/or to acquire the second scene image captured by the depth imaging component in response to the brightness of the first scene image not satisfying the preset brightness;
- a detection module configured to determine depth information of objects in the scene based on the second scene image
- a guidance information generation module is configured to generate guidance information for the target object wearing the wearable device based on the depth information.
- an embodiment of the present application provides a wearable device, including a processing component, a first visible light imaging component, a second visible light imaging component, and a depth imaging component;
- the first visible light imaging component is configured to collect a first scene image
- the second visible light imaging component and the depth imaging component are configured to collect a second scene image
- the processing component is configured to execute the above-mentioned smart guidance method based on a wearable device.
- the above-mentioned wearable device further includes a signal serialization component, a signal transmission cable and a signal deserialization component;
- the signal serialization component is communicatively connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component; the two ends of the signal transmission cable are respectively connected to the signal serialization component and the signal deserialization component;
- the signal deserialization component is communicatively connected to the processing component;
- the first visible light imaging component is further configured to send the first scene image to the signal serialization component;
- the second visible light imaging component and the depth imaging component are further configured to send the second scene image to the signal serialization component;
- the signal serialization component is configured to convert the received first scene image and second scene image into serial signals and send them to the signal deserialization component through the signal transmission cable;
- the signal deserialization component is configured to perform deserialization processing on the received signal, and to send the signal obtained by the deserialization processing to the processing component.
- the signal serialization component converts the images captured by the imaging components into a serial signal, such as a twisted-pair high-speed differential signal, for transmission. The signal can be transmitted over only two wires, with a faster transmission speed, lower cost, longer transmission distance, and smaller component size.
- the depth camera component includes a TOF camera.
- the TOF camera can more accurately acquire the depth information of the objects in the scene when the illumination intensity is weak.
- an embodiment of the present application provides a guidance system, including a wearable device and a host;
- the wearable device includes a first visible light imaging component, a second visible light imaging component and a depth imaging component;
- the host includes a processing component; the processing component is connected with the first visible light imaging component, the second visible light imaging component and the depth imaging component through a signal transmission cable, and is configured to execute the above wearable-device-based intelligent guidance method.
- the host is provided with at least one of the following, connected to the processing component: a positioning module, a network module, a micro-control unit configured to detect working status and/or manage charging, and an audio module.
- embodiments of the present application provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the above intelligent guidance method are performed.
- an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above intelligent guidance method are executed.
- an embodiment of the present application provides a computer program product, where the computer program product includes one or more instructions suitable for being loaded by a processor to execute the steps of the above intelligent guidance method.
- FIG. 1 shows a flowchart of a wearable device-based smart guidance method provided by an embodiment of the present application
- FIG. 2 shows a schematic structural diagram of a wearable device provided by an embodiment of the present application
- FIG. 3 shows a schematic structural diagram of a guidance system provided by an embodiment of the present application
- FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present application
- FIG. 5 shows a schematic structural diagram of a wearable device-based intelligent guidance device provided by an embodiment of the present application.
- Embodiments of the present application provide a wearable-device-based intelligent guidance method and device, a wearable device, a guidance system, an electronic device, and a storage medium. When the illumination intensity is high, that is, when the brightness of the first scene image is high, the depth information of objects in the scene is determined using the second scene image captured by the second visible light imaging component; when the illumination intensity is low, that is, when the brightness of the first scene image is low, the depth information of objects in the scene is determined using the second scene image captured by the depth imaging component.
- Embodiments of the wearable device-based intelligent guidance method and device, the wearable device, the guidance system, the electronic device, and the storage medium provided by the present application are described below.
- the wearable device-based intelligent guidance method provided in the embodiment of the present application may be applied to a processing component, and the processing component may be a component on the wearable device, or may be separately located on a host.
- the wearable device includes a first visible light imaging part, a second visible light imaging part, and a depth imaging part.
- the processing unit is connected in communication with the first visible light imaging unit, the second visible light imaging unit, and the depth imaging unit.
- the wearable device is worn on the head of the target object as a head-mounted device.
- the first visible light imaging component, the second visible light imaging component, and the depth imaging component may also be combined into a head-mounted device, which is worn on the head of the target object.
- the processing component may be worn on or fixed to other parts of the target object, for example the arm. This embodiment of the present application does not limit the positions of the above components on the target object.
- the above-mentioned target object may be a visually impaired object.
- the target object can be guided to avoid obstacles and walk safely.
- the intelligent guidance method of the embodiment of the present application is used to detect depth information of an object in a scene, and generate guidance information based on the detected depth information.
- the above-mentioned smart guidance method based on a wearable device may include the following steps:
- the first visible light imaging component is configured to capture a first scene image around the wearable device, and send the image to the processing component.
- the frame rate of the scene image captured by the first visible light imaging component is relatively high, and the position information of the object in the scene relative to the wearable device can be determined according to the position of the object in the first scene image captured by the first visible light imaging component.
- the first visible light imaging component may be a red green blue (Red Green Blue, RGB) imaging component.
- a variety of image brightness detection methods can be used; for example, a pre-trained neural network can detect the brightness of the first scene image, or statistics on the brightness distribution of regions or pixels in the first scene image can be computed, or their average can be calculated, to obtain the brightness of the first scene image.
- the brightness of the first scene image reflects the illumination intensity of the current scene: the stronger the illumination, the higher the brightness of the first scene image, and the weaker the illumination, the lower the brightness. Based on the brightness of the first scene image, the illumination intensity of the current scene can therefore be judged, and it can then be decided whether to use the image captured by the second visible light imaging component or the image captured by the depth imaging component to compute the scene's depth information, so as to adapt to changes in illumination intensity and improve the accuracy of object depth detection and of the generated guidance information.
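The averaging variant mentioned above can be sketched as follows; the image layout, the 0-255 scale, and the preset value are assumptions (the disclosure also allows neural-network or distribution-based detection):

```python
def average_brightness(gray_image):
    """Mean pixel intensity of a grayscale first scene image, given as a
    2-D list of 0-255 values."""
    total = sum(sum(row) for row in gray_image)
    count = sum(len(row) for row in gray_image)
    return total / count

def meets_preset_brightness(gray_image, preset=80):
    """True when the scene is bright enough to rely on the second visible
    light imaging component; the preset value is an assumption."""
    return average_brightness(gray_image) >= preset
```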
- S130: In response to the brightness of the first scene image meeting a preset brightness, acquire a second scene image captured by the second visible light imaging component, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquire the second scene image captured by the depth imaging component.
- when the brightness of the first scene image meets the preset brightness, the illumination intensity of the current scene is relatively high, and the second scene image can be acquired from the second visible light imaging component.
- under strong illumination, the visible light imaging components image well, and the scene images captured by the two visible light imaging components can be used to determine the depth information of objects in the scene.
- under weak illumination, the imaging quality of the two visible light imaging components is poor, and accurate depth information cannot be obtained from them.
- in that case, depth information can be acquired by using the depth imaging component to capture a depth image.
- when the brightness of the first scene image does not meet the preset brightness, the illumination intensity of the current scene is relatively weak, and the second scene image can be acquired from the depth imaging component.
- the scene image captured by the depth camera component can be used to detect the depth information of objects in the scene.
- the problem that obstacles cannot be detected due to weak light intensity can be overcome.
- the scene images captured by the two visible light imaging components cannot effectively detect the depth information of obstacles lacking texture, such as the sky, deserts, or white walls; the scene image captured by the depth imaging component, however, can effectively detect the depth information of such texture-lacking obstacles.
- the second visible light imaging component may be an RGB imaging component.
- the depth camera component may be a camera based on Time of Flight (TOF) imaging, that is, a TOF camera. Using the TOF camera can accurately obtain the depth information of the objects in the scene in the case of weak light intensity.
- the depth information of the object in the scene corresponding to the object in the image can be calculated based on the first scene image and the second scene image according to the principle of binocular vision.
- when the illumination intensity of the current scene is relatively high, the depth information of objects in the scene can be determined more accurately by using the two scene images captured by the two visible light imaging components.
- the pixels representing objects in the scene in the first scene image are matched with the pixels representing those objects in the second scene image to obtain pixel pairs;
- based on the disparity of each pixel pair, the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component, the depth information of objects in the scene is determined.
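This step follows the standard pinhole stereo relation Z = f * B / d; the sketch below assumes the focal length is expressed in pixel units and the baseline in meters, which the disclosure does not specify:

```python
def stereo_depth_m(disparity_px, baseline_m, focal_length_px):
    """Depth of a matched pixel pair: Z = focal_length * baseline /
    disparity. `baseline_m` is the center distance between the two visible
    light imaging components; `focal_length_px` is the focal length of the
    first visible light imaging component in pixel units."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point is effectively at infinity
    return focal_length_px * baseline_m / disparity_px
```

For example, with a 6 cm baseline and a 700-pixel focal length, a disparity of 10 pixels corresponds to a depth of 4.2 m.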
- the depth information of the object in the scene may be directly determined based on the depth information perceived by the depth imaging component.
- the TOF camera can more accurately obtain the depth information of the objects in the scene under the condition of weak illumination intensity.
- the position of the object in the scene can be detected from the second scene image through target detection, and then the depth information of the pixel at the corresponding position in the second scene image is determined as the depth information of the object according to the detection result.
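The step of taking the depth of pixels at the detected position as the object's depth can be sketched as follows (using the median of the depth values inside the detected bounding box is an illustrative assumption, and the depth map values are hypothetical):

```python
from statistics import median

def object_depth_from_depth_map(depth_map, box):
    """Return the depth of a detected object: gather the depth values
    inside its bounding box (x0, y0, x1, y1) in the second scene image
    and take their median. `depth_map` is a row-major list of rows,
    with values in metres."""
    x0, y0, x1, y1 = box
    values = [depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return median(values)

# Hypothetical 4x4 depth map; a detected object occupies the top-left 2x2 block.
depth_map = [
    [1.2, 1.3, 9.0, 9.0],
    [1.2, 1.4, 9.0, 9.0],
    [9.0, 9.0, 9.0, 9.0],
    [9.0, 9.0, 9.0, 9.0],
]
d = object_depth_from_depth_map(depth_map, (0, 0, 2, 2))
```

The median is chosen here because it is robust to background pixels that fall inside the box; a mean or a percentile would be equally plausible.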
- prompt information may be generated based on the depth information, where the prompt information is used to prompt the depth information to the target object wearing the wearable device. Prompting the target object with the depth information of the objects in the scene can effectively guide the travel of the target object, and improves the guiding efficiency and guiding safety.
- the following steps can be used to generate the guidance information:
- the guidance information generated based on the orientation information and the depth information can further improve the efficiency of guidance and the safety of guidance.
- the orientation information of the object relative to the wearable device may be determined according to the position of the object in the first scene image.
- the orientation information may include an orientation angle, or the orientation information may represent one of pre-divided orientations, and the pre-divided orientations may include, for example, front left, front right, and straight ahead.
- in response to an object A being located on the left side in the first scene image, it is determined that the orientation information of the object relative to the wearable device is front left; in response to the object A being located on the right side in the first scene image, it is determined that the orientation information of the object relative to the wearable device is front right; in response to the object A being located in the middle of the first scene image, it is determined that the orientation information of the object relative to the wearable device is straight ahead.
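A minimal sketch of mapping the object's position in the first scene image to the pre-divided orientations might look as follows (the one-third split of the image width is an illustrative assumption):

```python
def orientation_from_position(center_x, image_width, margin=1/3):
    """Map an object's horizontal position in the first scene image to
    one of the pre-divided orientations: 'front left', 'straight ahead',
    or 'front right'. The one-third split is a hypothetical choice."""
    if center_x < image_width * margin:
        return "front left"
    if center_x > image_width * (1 - margin):
        return "front right"
    return "straight ahead"

# An object centered at x=100 in a 1920-pixel-wide image is front left.
label = orientation_from_position(100, 1920)
```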
- switching different camera components to capture the second scene image according to the light intensity specifically includes the following steps:
- the processing component sends a first exposure instruction to the second visible light imaging component; the second visible light imaging component performs exposure based on the first exposure instruction and collects a second scene image. After that, the processing component acquires the second scene image from the second visible light imaging component.
- the second visible light imaging component may also collect the second scene image all the time, and only send the second scene image collected at the corresponding moment to the processing component in the case of receiving the first exposure instruction sent by the processing component.
- the processing component sends a second exposure instruction to the depth imaging component; the depth imaging component performs exposure based on the second exposure instruction and collects a second scene image. After that, the processing component acquires the second scene image from the depth imaging component.
- the depth imaging component may also collect the second scene image all the time, and only send the second scene image collected at the corresponding moment to the processing component in the case of receiving the second exposure instruction sent by the processing component.
- the second visible light imaging component is controlled to perform exposure
- the depth imaging component is controlled to perform exposure, so as to actively switch different imaging components according to the different ambient light intensity.
- the second scene image used to determine the depth information of the object is collected, so that the intelligent guidance method of the embodiment of the present application can adapt to changes in light intensity and obtain relatively abundant environmental information, and at the same time can detect obstacles lacking texture, which improves the accuracy of obstacle detection and the safety of guidance.
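The brightness-based switching between the second visible light imaging component and the depth imaging component can be sketched as follows (the brightness measure and the threshold value are illustrative assumptions, not the embodiment's preset):

```python
def mean_brightness(gray_image):
    """Average pixel intensity of a grayscale image given as a list of rows."""
    total = sum(sum(row) for row in gray_image)
    count = sum(len(row) for row in gray_image)
    return total / count

def select_depth_source(first_scene_image, threshold=60):
    """Return which component should collect the second scene image:
    the RGB component when the first scene image is bright enough,
    otherwise the TOF depth component. The threshold of 60 (on a 0-255
    scale) is a hypothetical preset brightness."""
    if mean_brightness(first_scene_image) >= threshold:
        return "second_visible_light_component"
    return "depth_component"

bright_scene = [[200, 190, 210], [205, 195, 200]]
dark_scene = [[10, 12, 8], [9, 11, 10]]
```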
- since the scene image captured by the second visible light imaging component is not used to determine the orientation information of the object in the scene relative to the wearable device and/or the type information of the object, its frame rate can be lower than that of the first visible light imaging component. In this way, the requirement of depth information detection can be met while the power consumption of the second visible light imaging component and the heat dissipation of the wearable device are reduced.
- a frequency dividing circuit can be used to realize synchronous exposure of the first visible light imaging component and the second visible light imaging component at different frame rates: when the brightness of the first scene image meets the preset brightness, the processing component sends an exposure command to the frequency dividing circuit; the frequency dividing circuit performs frequency division processing on the received exposure command, and sends the third exposure instruction obtained by the frequency division processing to the second visible light imaging component. Then, the processing component acquires the second scene image collected by the second visible light imaging component performing exposure control based on the third exposure instruction.
- the first visible light imaging component can perform exposure directly based on the exposure command sent by the processing component to the frequency dividing circuit, or the processing component can send the exposure command to the frequency dividing circuit and, at the same time, send an exposure instruction to the first visible light imaging component.
- the depth imaging component can be controlled to be exposed at a frequency lower than that of the first visible light imaging component, so that the frame rate of the images captured by the depth imaging component is lower than that of the first visible light imaging component. This not only satisfies the demand for depth information detection, but also reduces the power consumption of the depth camera component and the heat dissipation of the wearable device.
- a frequency dividing circuit can be used to realize synchronous exposure of the first visible light imaging component and the depth imaging component at different frame rates: in the case that the brightness of the first scene image does not meet the preset brightness, the processing component sends an exposure instruction to the frequency dividing circuit; the frequency dividing circuit performs frequency division processing on the received exposure instruction, and sends the fourth exposure instruction obtained by the frequency division processing to the depth imaging component. After that, the processing component acquires the second scene image collected by the depth imaging component performing exposure control based on the fourth exposure instruction.
- the first visible light imaging component can perform exposure directly based on the exposure command sent by the processing component to the frequency dividing circuit, or the processing component can send the exposure command to the frequency dividing circuit and, at the same time, send an exposure instruction to the first visible light imaging component.
- the frequency dividing circuit is used to perform frequency division processing on the exposure command, and the exposure command obtained by the frequency division processing is used to control the second visible light imaging component or the depth imaging component to capture images, which realizes synchronous exposure of imaging components controlled at different frame rates and saves energy consumption.
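The effect of the frequency dividing circuit can be modelled as forwarding only every Nth exposure command to the slower imaging component (a behavioural sketch, not the actual circuit):

```python
class FrequencyDivider:
    """Forward one exposure command out of every `ratio` received,
    modelling a divider that keeps a low-frame-rate imaging component
    synchronized with a high-frame-rate one."""
    def __init__(self, ratio):
        self.ratio = ratio
        self.count = 0

    def on_exposure_command(self):
        """Return True when the divided (third/fourth) exposure command
        should be issued to the slower imaging component."""
        self.count += 1
        return self.count % self.ratio == 0

# Slower camera exposed at one third of the main camera's frame rate.
divider = FrequencyDivider(ratio=3)
issued = [divider.on_exposure_command() for _ in range(6)]
```

Each incoming command still drives the first visible light imaging component directly; only the divided output goes to the second visible light or depth imaging component, so both stay in phase despite the different frame rates.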
- the images captured by the first visible light imaging component and the depth imaging component can be used to detect the depth information of objects in the scene, but this method is not suitable for detecting the depth information of distant objects, and it also cannot effectively detect the depth information of objects with high transparency and a smooth appearance, such as glass and water surfaces; in these cases the detected depth information may be wrong. Therefore, after detecting the depth information and the category of the object in the scene, the processing component determines whether the depth information is obviously unreasonable, that is, whether the depth corresponding to the depth information is greater than a preset depth threshold, and/or whether objects of preset categories such as glass and water surfaces are detected in the scene.
- in response to determining that the depth corresponding to the depth information is greater than the preset depth threshold and/or detecting that the first scene image contains objects of a preset category, the processing component sends an ultrasonic detection instruction to the ultrasonic detection component; the ultrasonic detection component sends ultrasonic waves based on the ultrasonic detection instruction and sends the detected ultrasonic feedback signal to the processing component.
- the processing component updates depth information of objects in the scene based on the received ultrasonic feedback signal.
- the detection accuracy of the ultrasonic detection component for the depth information of objects is relatively high. In the case that the depth information of objects in the scene cannot be accurately determined from the second scene image captured by the visible light camera component or the depth camera component, due to the influence of a complex environment or the characteristics of the objects themselves, using the ultrasonic detection component to detect the depth information improves the applicability of the wearable device and enables more accurate detection of the depth information of objects in the scene in more complex environments.
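The fallback to ultrasonic detection described above can be sketched as follows (the depth threshold and category names are illustrative assumptions; the echo distance uses the standard relation distance = speed of sound × round-trip time / 2):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

def needs_ultrasound(depth_m, detected_categories,
                     depth_threshold_m=8.0,
                     tricky_categories=("glass", "water surface")):
    """Decide whether the camera-based depth is unreliable: either the
    depth exceeds a preset threshold, or a preset category (glass,
    water surface) whose depth the cameras cannot measure well was
    detected. The 8 m threshold is a hypothetical preset."""
    if depth_m > depth_threshold_m:
        return True
    return any(c in tricky_categories for c in detected_categories)

def depth_from_echo(round_trip_s):
    """Distance from an ultrasonic echo: the pulse travels out and back,
    so depth = speed_of_sound * round_trip_time / 2."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2
```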
- the target object wearing the above-mentioned wearable device may perform some actions during movement that make it impossible to accurately detect objects that may hinder its movement when object detection is performed on the captured scene image. For example, when the target object lowers its head at a large angle, the scene captured by the wearable device is concentrated on the ground, and the obtained scene image is an image of the ground, from which obstacles in front of or beside the target object that may affect its movement cannot be accurately detected.
- the above-mentioned wearable device may further include an attitude measurement component, and the attitude measurement component is communicatively connected with the processing component.
- the attitude measurement component measures the attitude information of the wearable device, where the attitude information of the wearable device and the attitude information of the target object are considered to be the same.
- the attitude measurement component sends the measured attitude information to the processing component.
- the above method further includes: based on the received posture information, judging whether the posture of the target object wearing the wearable device is in a first preset posture, and if so, generating posture correction prompt information.
- the above-mentioned first preset posture is a posture in which objects that affect the travel of the target object cannot be photographed, for example, raising the head at a relatively large angle.
- the posture correction prompt information is used to prompt the target object to correct the current posture, which further improves the accuracy of the guidance information and the safety of the guidance.
- guidance information may also be generated based on the posture information. For example, based on the posture information of the wearable device, the orientation information of the object relative to the wearable device is converted into the orientation information of the object relative to the wearable device in a second preset posture; based on the converted orientation information and the depth information of the object in the scene, guidance information and/or posture correction prompt information for the target object wearing the wearable device is generated.
- the orientation information can be converted by the following steps: determining the attitude difference information between the second preset attitude and the current attitude information of the wearable device; and converting, based on the attitude difference information, the orientation information of the object relative to the wearable device to obtain the converted orientation information. For example, in response to the attitude difference information indicating that the wearable device is in a posture with the head raised 80 degrees, if the determined orientation information indicates that the object is located directly in front of the wearable device, the converted orientation information indicates that the object is located above the wearable device; in response to the attitude difference information indicating that the wearable device is in a posture with the head turned 60 degrees to the left, if the determined orientation information indicates that the object is located directly in front of the wearable device, the converted orientation information indicates that the object is located at the front left of the wearable device.
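The orientation conversion based on the attitude difference can be sketched for the horizontal (yaw) case as follows (the sign convention and the angular margin for "straight ahead" are illustrative assumptions):

```python
def convert_orientation(object_azimuth_deg, head_yaw_deg):
    """Convert an object's azimuth measured relative to the wearable
    device into the azimuth relative to the second preset posture
    (device facing the travel direction). Positive angles mean "to the
    left"; this sign convention is an assumption for illustration."""
    return object_azimuth_deg + head_yaw_deg

def azimuth_to_label(azimuth_deg, straight_margin_deg=15):
    """Coarse label matching the pre-divided orientations; the
    15-degree margin is a hypothetical preset."""
    if azimuth_deg > straight_margin_deg:
        return "front left"
    if azimuth_deg < -straight_margin_deg:
        return "front right"
    return "straight ahead"

# Head turned 60 degrees to the left, object directly in front of the
# device: relative to the travel direction the object is at the front left.
converted = convert_orientation(object_azimuth_deg=0, head_yaw_deg=60)
```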
- the depth information of the object in the scene is specifically used to represent the distance between the object and the wearable device. After the posture of the wearable device changes, since the position of the wearable device does not change significantly, the distance between the object and the wearable device does not change significantly either. Therefore, the depth information of the objects in the scene does not need to be converted, and it can still accurately represent the distance between the object and the wearable device.
- the above-mentioned second preset posture is, for example, a posture of the wearable device toward the traveling direction of the target object.
- the above converted orientation information can be used to determine whether the corresponding object is located on the travel path of the target object, that is, to determine whether the object in the scene is an obstacle to the travel of the target object; in the case that the corresponding object is an obstacle to the target object, guidance information needs to be generated based on the converted orientation information to guide the target object to avoid the obstacle.
- the orientation information of the object in the scene relative to the wearable device is converted into the orientation information of the object in the scene relative to the wearable device in the second preset posture, that is, orientation information effective for the travel of the target object is generated, and more accurate and effective guidance information can be generated by using this effective orientation information.
- the attitude measurement component may be a gyroscope, such as a nine-axis gyroscope.
- since the amount of image information collected by each of the above-mentioned camera components is relatively large, a signal serial component, a signal transmission cable, and a signal deserialization component are used to realize the transmission of the signals.
- the signal serial component is communicatively connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component; both ends of the signal transmission cable are respectively connected to the signal serial component and the signal deserialization component ; the signal deserialization part is connected in communication with the processing part.
- the first visible light imaging component sends the first scene image to the signal serial component; the second visible light imaging component and the depth imaging component send the second scene image to the signal serial component; the signal serial component converts the received first scene image and second scene image into serial signals and sends them to the signal deserialization component through the signal transmission cable; the signal deserialization component deserializes the received signals and sends the deserialized signals to the processing component.
- the above-mentioned signal transmission using the signal serial component, the signal transmission cable and the signal deserialization component may be, for example, the V-BY-ONE twisted pair technology.
- V-BY-ONE is a serialization interface technology for video signal transmission.
- V-BY-ONE twisted pair technology requires fewer transmission lines (only two), making it lighter; it has lower requirements on the transmission lines, needing no shielding, which saves cost; its transmission bandwidth is higher, up to 3.75 gigabits per second (Gbps); its transmission distance is longer, with high-quality transmission up to 15 meters; and the chip size is smaller, which is more conducive to the design of portable wearable products, for example, a package of 5 millimeters (mm) by 5 millimeters (mm).
- the above signal transmission cable adopts a V-BY-ONE twisted pair connection, which is resistant to bending and stretching, and is light and soft.
- by using the signal serial component to convert the images captured by the camera components into a serial signal, such as a twisted pair high-speed differential signal, for transmission, the signal can be transmitted with only two wires, with a faster transmission speed, lower cost, longer transmission distance, and smaller component size.
- the wearable device may further include a sound-emitting part, and the sound-emitting part is connected in communication with the processing part.
- the playing of the guidance information may be implemented by the following steps: generating voice guidance information based on the guidance information, and sending it to the sounding component, and the sounding component plays the voice guidance information to the target object.
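The generation of voice guidance information from the guidance information can be sketched as follows (the phrasing of the prompt is illustrative, not specified by the embodiment):

```python
def voice_guidance(orientation, depth_m):
    """Compose a simple spoken prompt from the object's orientation
    information and depth information; the wording is illustrative."""
    return f"Obstacle {orientation}, about {depth_m:.1f} meters away."

# The resulting string would be sent to the sounding component for playback.
message = voice_guidance("front left", 2.14)
```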
- the voice guidance information and the sounding component can effectively guide the target object to avoid obstacles, which improves the guidance efficiency and safety.
- the above-mentioned processing component may be a system on chip (System on Chip, SOC), and the sound-generating component may be an audio bone conduction earphone, and the above-mentioned audio bone conduction earphone is also used for man-machine dialogue.
- the embodiment of the present application integrates the first visible light imaging component, the second visible light imaging component, and the depth imaging component, and can actively switch between the imaging components after judging the light intensity, so as to improve the detection accuracy and adapt to changeable lighting environments.
- the embodiment of the present application also integrates an ultrasonic detection component and an attitude measurement component, uses ultrasonic waves to obtain relatively accurate depth information to make up for the deficiencies of the camera component, and uses the attitude measurement component to obtain attitude information to optimize detection accuracy.
- the intelligent guidance method in the above embodiments can perform functions such as path planning, positioning, obstacle detection, and voice prompting, with higher accuracy and stronger environmental adaptability, enabling people with visual impairment to travel independently more conveniently and safely.
- an embodiment of the present application also provides a wearable device-based intelligent guidance device; the functions implemented by each module in the device are the same as the corresponding steps performed by the processing component in the above intelligent guidance method.
- the wearable device includes a first visible light imaging component, a second visible light imaging component, and a depth imaging component.
- the intelligent guidance device may include:
- the first image acquisition module 510 is configured to acquire the first scene image acquired by the first visible light imaging component.
- the brightness detection module 520 is configured to detect the brightness of the first scene image.
- the second image acquisition module 530 is configured to: in response to the brightness of the first scene image satisfying a preset brightness, acquire a second scene image collected by the second visible light imaging component, and/or, in response to the brightness of the first scene image not satisfying the preset brightness, acquire a second scene image collected by the depth imaging component.
- the detection module 540 is configured to determine depth information of objects in the scene based on the second scene image.
- the guidance information generation module 550 is configured to generate guidance information for the target object wearing the wearable device based on the depth information.
- when acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image satisfying the preset brightness, the second image acquisition module 530 is configured to:
- send a first exposure instruction to the second visible light imaging component, and acquire the second scene image collected by the second visible light imaging component performing exposure control based on the first exposure instruction;
- when acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module 530 is configured to:
- when determining the depth information of the object in the scene based on the second scene image, the detection module 540 is configured to:
- determine the depth information of the object in the scene based on the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component.
- the detection module 540 determines depth information of objects in the scene based on the second scene image, including:
- the depth information of the target pixel in the second scene image is determined.
- the guidance information generation module 550 generates guidance information for the target object wearing the wearable device based on the depth information, including:
- prompt information is used to prompt the depth information to the target object wearing the wearable device.
- the detection module 540 is further configured to:
- when generating guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module 550 is configured to:
- the guidance information for the target object wearing the wearable device is generated.
- the wearable device further includes an ultrasonic detection component
- the detection module 540 is further configured to:
- depth information of objects in the scene is updated.
- the wearable device further includes an attitude measurement component
- the detection module 540 is further configured to:
- posture correction prompt information is generated.
- the detection module 540 is further configured to:
- when generating guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module 550 is configured to:
- guidance information and/or gesture correction prompt information for the target object wearing the wearable device is generated.
- the wearable device further includes a sound-emitting component
- the guidance information generation module 550 is further configured to:
- voice navigation information is generated and sent to the sounding component, so that the sounding component plays the voice guidance information to the target object.
- the wearable device further includes a frequency dividing circuit
- when acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image meeting the preset brightness, the second image acquisition module 530 is configured to:
- send an exposure command to the frequency dividing circuit, where the frequency dividing circuit performs frequency division processing on the received exposure command and sends the third exposure instruction obtained by the frequency division processing to the second visible light imaging component, and acquire the second scene image collected by the second visible light imaging component performing exposure control based on the third exposure instruction.
- when acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module 530 is configured to:
- send an exposure command to the frequency dividing circuit, where the frequency dividing circuit performs frequency division processing on the received exposure command and sends the fourth exposure instruction obtained by the frequency division processing to the depth imaging component, and acquire the second scene image collected by the depth imaging component performing exposure control based on the fourth exposure instruction.
- an embodiment of the present application further provides a wearable device, and the functions implemented by each component in the wearable device are the same as those of the corresponding components in the above-mentioned embodiments.
- numeral 21 represents a block diagram of a head mounted device
- numeral 22 represents a block diagram of a host side
- the block diagram of the head mounted device and the host side block diagram are connected by a signal transmission cable 208 .
- the wearable device may include: a processing part 201 , a first visible light imaging part 202 , a second visible light imaging part 203 and a depth imaging part 204 .
- the first visible light camera component 202 is configured to collect the first scene image; the second visible light camera component 203 and the depth camera component 204 are configured to collect the second scene image; and the processing component 201 is configured to perform the above-mentioned wearable device-based smart guidance method.
- the above-mentioned wearable device may further include an ultrasonic detection part 205 , an attitude measurement part 206 , a signal serial part 207 , a signal transmission cable 208 , a signal deserialization part 209 , and a sounding part 210 .
- the functions implemented by the ultrasonic detection part 205 , the attitude measurement part 206 , the signal serial part 207 , the signal transmission cable 208 , the signal deserialization part 209 , and the sounding part 210 are the same as the corresponding parts in the intelligent guidance method of the above-mentioned embodiment.
- the wearable device may further include a microcontroller part MCU211 , a WIFI part 212 , and a GPS part 213 .
- the microcontroller unit MCU211 is configured to manage the charging of the wearable device and detect the state of the whole machine
- the GPS unit 213 is configured to locate the wearable device
- the WIFI unit 212 is configured to send the first scene image, the second scene image, depth information, etc. stored in the wearable device to a remote server.
- the block diagram 21 of the head mounted device includes two visible light camera components 202 and 203, and a depth camera component 204.
- one of the RGB camera components is used as the main camera component; it outputs a synchronization trigger signal to the other RGB camera component, and at the same time outputs a synchronization trigger signal to the depth camera component through a frequency divider chip.
- the two RGB camera components 202 and 203 and the depth camera component 204 can thus achieve simultaneous exposure of the three camera components even at different frame rates.
- the image signals of the three camera components are converted into twisted pair high-speed differential signals through the signal serial unit 207 (V-BY-ONE serial chip), and are connected to the host-side block diagram 22 through the signal transmission cable 208; the attitude measurement unit 206 (gyroscope) is used to monitor the user's attitude information at all times, and the ultrasonic detection component 205 is used to detect distance information for objects such as glass walls and water surfaces.
- the host side block diagram 22 includes a processing unit 201 (SOC unit), a Global Positioning System (GPS) unit 213, a WIFI unit 212, an integrated V-BY-ONE deserialization chip (signal deserialization unit) 209, an MCU (micro-control unit) 211, and a sounding unit 210, wherein the processing unit 201 adopts an artificial intelligence processing module with strong computing power.
- the integrated V-BY-ONE deserialization chip 209 sends data conforming to the Mobile Industry Processor Interface Camera Serial Interface (MIPI-CSI) standard to the Mobile Industry Processor Interface physical layer (MIPI PHY) of the processing unit 201.
- the MCU 211 is used to detect the working state of the whole machine and to carry out charging management, etc.
- GPS component 213 uses real-time kinematic (RTK) technology to perform precise positioning
- WIFI component 212 is used for data upload and synchronization network.
- the signal transmission cable 208 is connected by a V-BY-ONE twisted pair.
- an embodiment of the present application provides a guidance system 300 , including a wearable device 301 and a host 302 .
- the wearable device 301 includes a first visible light imaging component, a second visible light imaging component, and a depth imaging component;
- the host 302 includes a processing component; the processing component is connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component through a signal transmission cable, and the processing component is configured to execute the above-mentioned wearable device-based smart guidance method.
- the host 302 may be provided with at least one of the following connected to the processing component: a positioning module, a network module, a micro-control unit configured to detect working status and/or charge management, and an audio module.
- the embodiment of the present application also provides an electronic device 400.
- as shown in the schematic structural diagram, the electronic device 400 provided by the embodiment of the present application includes a processor and a memory storing machine-readable instructions executable by the processor; when executed, the instructions cause the processor to: acquire a first scene image collected by a first visible light imaging component; detect the brightness of the first scene image; in response to the brightness of the first scene image meeting a preset brightness, acquire a second scene image collected by the second visible light imaging component, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquire a second scene image collected by the depth imaging component; determine depth information of objects in the scene based on the second scene image; and generate guidance information for a target object wearing the wearable device based on the depth information.
- embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored; when the computer program is run by a processor, the steps of the intelligent guidance method described in the foregoing method embodiments are executed.
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- the computer program product of the intelligent guidance method provided by the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the steps of the intelligent guidance method described in the above method embodiments; for details, refer to the above method embodiments.
- the embodiments of the present application further provide a computer program, which implements any one of the methods in the foregoing embodiments when the computer program is executed by a processor.
- the computer program product may be implemented, for example, in hardware, software, or a combination thereof.
- the computer program product is embodied as a computer storage medium, for example; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and the like.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program code.
- the second scene image captured by the second visible light imaging component is used to determine the depth information of the object in the scene;
- the depth information of the object in the scene is determined by using the second scene image captured by the depth imaging component.
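The two alternatives above amount to a brightness-gated choice of which component supplies the second scene image: the visible-light stereo pair in adequate light, the depth (TOF) camera otherwise. A minimal sketch of this selection logic follows; the threshold value and the mean-gray brightness metric are illustrative assumptions, since the embodiments leave the exact measure of the "preset brightness" open:

```python
BRIGHTNESS_THRESHOLD = 60  # assumed preset brightness (mean gray level, 0-255)

def mean_brightness(gray_pixels):
    """Brightness of the first scene image, taken here as the mean gray level."""
    return sum(gray_pixels) / len(gray_pixels)

def select_second_image_source(gray_pixels):
    """Brightness-gated choice of the component that supplies the second scene image."""
    if mean_brightness(gray_pixels) >= BRIGHTNESS_THRESHOLD:
        return "second_visible_light_component"  # enough light: stereo disparity path
    return "depth_component"                     # low light: TOF depth path

print(select_second_image_source([200] * 100))  # second_visible_light_component
print(select_second_image_source([10] * 100))   # depth_component
```

In low illumination the visible-light pair cannot produce reliable disparity matches, which is why the embodiments route the dark case to the depth camera instead.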
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Veterinary Medicine (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Vascular Medicine (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biomedical Technology (AREA)
- Ophthalmology & Optometry (AREA)
- Epidemiology (AREA)
- Pain & Pain Management (AREA)
- Physical Education & Sports Medicine (AREA)
- Rehabilitation Therapy (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Controls And Circuits For Display Device (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (30)
- A wearable-device-based intelligent guidance method, the wearable device comprising a first visible light imaging component, a second visible light imaging component, and a depth imaging component, the method comprising: acquiring a first scene image collected by the first visible light imaging component; detecting the brightness of the first scene image; in response to the brightness of the first scene image meeting a preset brightness, acquiring a second scene image collected by the second visible light imaging component, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquiring a second scene image collected by the depth imaging component; determining depth information of an object in the scene based on the second scene image; and generating guidance information for a target object wearing the wearable device based on the depth information.
- The method according to claim 1, wherein the acquiring, in response to the brightness of the first scene image meeting the preset brightness, the second scene image collected by the second visible light imaging component comprises: in response to the brightness of the first scene image meeting the preset brightness, sending a first exposure instruction to the second visible light imaging component, and acquiring the second scene image collected by the second visible light imaging component under exposure control based on the first exposure instruction; and the acquiring, in response to the brightness of the first scene image not meeting the preset brightness, the second scene image collected by the depth imaging component comprises: in response to the brightness of the first scene image not meeting the preset brightness, sending a second exposure instruction to the depth imaging component, and acquiring the second scene image collected by the depth imaging component under exposure control based on the second exposure instruction.
- The method according to claim 1 or 2, wherein the determining depth information of an object in the scene based on the second scene image comprises: in response to the brightness of the first scene image meeting the preset brightness, matching pixel points representing the object in the scene in the first scene image with pixel points representing the object in the scene in the second scene image to obtain pixel point pairs; and determining the depth information of the object in the scene based on disparity information corresponding to the pixel point pairs, the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component.
- The method according to claim 1 or 2, wherein the determining depth information of an object in the scene based on the second scene image comprises: in response to the brightness of the first scene image not meeting the preset brightness, determining target pixel points of the object in the scene in the second scene image; and determining the depth information of the object in the scene according to the depth information of the target pixel points in the second scene image.
- The method according to any one of claims 1 to 4, wherein the generating guidance information for the target object wearing the wearable device based on the depth information comprises: generating prompt information, the prompt information being used to prompt the depth information to the target object wearing the wearable device.
- The method according to any one of claims 1 to 5, further comprising: determining orientation information of the object in the scene relative to the wearable device based on the first scene image; wherein the generating guidance information for the target object wearing the wearable device based on the depth information comprises: generating the guidance information for the target object wearing the wearable device based on the orientation information of the object in the scene relative to the wearable device and the depth information of the object in the scene.
- The method according to any one of claims 1 to 6, wherein the wearable device further comprises an ultrasonic detection component; the method further comprises: in response to the depth corresponding to the depth information being greater than a preset depth threshold and/or it being detected that the first scene image contains an object of a preset category, sending an ultrasonic detection instruction to the ultrasonic detection component, and receiving an ultrasonic feedback signal detected by the ultrasonic detection component based on the ultrasonic detection instruction; and updating the depth information of the object in the scene based on the received ultrasonic feedback signal.
- The method according to any one of claims 1 to 7, wherein the wearable device further comprises a posture measurement component; the method further comprises: acquiring posture information of the wearable device collected by the posture measurement component; and generating posture correction prompt information in response to determining, according to the posture information, that the posture of the target object wearing the wearable device is in a first preset posture.
- The method according to claim 8, further comprising: determining orientation information of the object in the scene relative to the wearable device based on the first scene image; and converting, based on the posture information of the wearable device, the orientation information into orientation information of the object in the scene relative to the wearable device in a second preset posture; wherein the generating guidance information for the target object wearing the wearable device based on the depth information comprises: generating the guidance information and/or posture correction prompt information for the target object wearing the wearable device based on the converted orientation information and the depth information of the object in the scene.
- The method according to any one of claims 1 to 9, wherein the wearable device further comprises a sound-emitting component; the method further comprises: generating voice navigation information based on the guidance information and sending it to the sound-emitting component, so that the sound-emitting component plays the voice navigation information to the target object.
- The method according to any one of claims 1 to 10, wherein the wearable device further comprises a frequency division circuit; the acquiring, in response to the brightness of the first scene image meeting the preset brightness, the second scene image collected by the second visible light imaging component comprises: in response to the brightness of the first scene image meeting the preset brightness, sending an exposure instruction to the frequency division circuit, so that the frequency division circuit performs frequency division processing on the received exposure instruction and sends a third exposure instruction obtained by the frequency division processing to the second visible light imaging component, and acquiring the second scene image collected by the second visible light imaging component under exposure control based on the third exposure instruction; and/or the acquiring, in response to the brightness of the first scene image not meeting the preset brightness, the second scene image collected by the depth imaging component comprises: in response to the brightness of the first scene image not meeting the preset brightness, sending an exposure instruction to the frequency division circuit, so that the frequency division circuit performs frequency division processing on the received exposure instruction and sends a fourth exposure instruction obtained by the frequency division processing to the depth imaging component, and acquiring the second scene image collected by the depth imaging component under exposure control based on the fourth exposure instruction.
- A wearable-device-based intelligent guidance apparatus, the wearable device comprising a first visible light imaging component, a second visible light imaging component, and a depth imaging component, the apparatus comprising: a first image acquisition module configured to acquire a first scene image collected by the first visible light imaging component; a brightness detection module configured to detect the brightness of the first scene image; a second image acquisition module configured to acquire, in response to the brightness of the first scene image meeting a preset brightness, a second scene image collected by the second visible light imaging component, and/or acquire, in response to the brightness of the first scene image not meeting the preset brightness, a second scene image collected by the depth imaging component; a detection module configured to determine depth information of an object in the scene based on the second scene image; and a guidance information generation module configured to generate guidance information for a target object wearing the wearable device based on the depth information.
- The intelligent guidance apparatus according to claim 12, wherein, when acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image meeting the preset brightness, the second image acquisition module is configured to: in response to the brightness of the first scene image meeting the preset brightness, send a first exposure instruction to the second visible light imaging component, and acquire the second scene image collected by the second visible light imaging component under exposure control based on the first exposure instruction; and, when acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module is configured to: in response to the brightness of the first scene image not meeting the preset brightness, send a second exposure instruction to the depth imaging component, and acquire the second scene image collected by the depth imaging component under exposure control based on the second exposure instruction.
- The intelligent guidance apparatus according to claim 12 or 13, wherein, when determining the depth information of the object in the scene based on the second scene image, the detection module is configured to: in response to the brightness of the first scene image meeting the preset brightness, match pixel points representing the object in the scene in the first scene image with pixel points representing the object in the scene in the second scene image to obtain pixel point pairs; and determine the depth information of the object in the scene based on disparity information corresponding to the pixel point pairs, the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component.
- The intelligent guidance apparatus according to claim 12 or 13, wherein, when determining the depth information of the object in the scene based on the second scene image, the detection module is configured to: in response to the brightness of the first scene image not meeting the preset brightness, determine target pixel points of the object in the scene in the second scene image; and determine the depth information of the object in the scene according to the depth information of the target pixel points in the second scene image.
- The intelligent guidance apparatus according to any one of claims 12 to 15, wherein, when generating the guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module is configured to: generate prompt information, the prompt information being used to prompt the depth information to the target object wearing the wearable device.
- The intelligent guidance apparatus according to any one of claims 12 to 16, wherein the detection module is further configured to: determine orientation information of the object in the scene relative to the wearable device based on the first scene image; and, when generating the guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module is configured to: generate the guidance information for the target object wearing the wearable device based on the orientation information of the object in the scene relative to the wearable device and the depth information of the object in the scene.
- The intelligent guidance apparatus according to any one of claims 12 to 17, wherein the wearable device further comprises an ultrasonic detection component; the detection module is further configured to: in response to the depth corresponding to the depth information being greater than a preset depth threshold and/or it being detected that the first scene image contains an object of a preset category, send an ultrasonic detection instruction to the ultrasonic detection component, and receive an ultrasonic feedback signal detected by the ultrasonic detection component based on the ultrasonic detection instruction; and update the depth information of the object in the scene based on the received ultrasonic feedback signal.
- The intelligent guidance apparatus according to any one of claims 12 to 18, wherein the wearable device further comprises a posture measurement component; the detection module is further configured to: acquire posture information of the wearable device collected by the posture measurement component; and generate posture correction prompt information in response to determining, according to the posture information, that the posture of the target object wearing the wearable device is in a first preset posture.
- The intelligent guidance apparatus according to claim 19, wherein the detection module is further configured to: determine orientation information of the object in the scene relative to the wearable device based on the first scene image; and convert, based on the posture information of the wearable device, the orientation information into orientation information of the object in the scene relative to the wearable device in a second preset posture; and, when generating the guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module is configured to: generate the guidance information and/or posture correction prompt information for the target object wearing the wearable device based on the converted orientation information and the depth information of the object in the scene.
- The intelligent guidance apparatus according to any one of claims 12 to 20, wherein the wearable device further comprises a sound-emitting component; the guidance information generation module is further configured to: generate voice navigation information based on the guidance information and send it to the sound-emitting component, so that the sound-emitting component plays the voice navigation information to the target object.
- The intelligent guidance apparatus according to any one of claims 12 to 21, wherein the wearable device further comprises a frequency division circuit; when acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image meeting the preset brightness, the second image acquisition module is configured to: in response to the brightness of the first scene image meeting the preset brightness, send an exposure instruction to the frequency division circuit, so that the frequency division circuit performs frequency division processing on the received exposure instruction and sends a third exposure instruction obtained by the frequency division processing to the second visible light imaging component, and acquire the second scene image collected by the second visible light imaging component under exposure control based on the third exposure instruction; and/or, when acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module is configured to: in response to the brightness of the first scene image not meeting the preset brightness, send an exposure instruction to the frequency division circuit, so that the frequency division circuit performs frequency division processing on the received exposure instruction and sends a fourth exposure instruction obtained by the frequency division processing to the depth imaging component, and acquire the second scene image collected by the depth imaging component under exposure control based on the fourth exposure instruction.
- A wearable device, comprising a processing component, a first visible light imaging component, a second visible light imaging component, and a depth imaging component; the first visible light imaging component is configured to collect a first scene image; the second visible light imaging component and the depth imaging component are configured to collect a second scene image; and the processing component is configured to execute the wearable-device-based intelligent guidance method according to any one of claims 1 to 11.
- The wearable device according to claim 23, further comprising a signal serializing component, a signal transmission cable, and a signal deserializing component; the signal serializing component is communicatively connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component; the two ends of the signal transmission cable are connected to the signal serializing component and the signal deserializing component, respectively; the signal deserializing component is communicatively connected to the processing component; the first visible light imaging component is further configured to send the first scene image to the signal serializing component; the second visible light imaging component and the depth imaging component are further configured to send the second scene image to the signal serializing component; the signal serializing component is configured to convert the received first scene image and second scene image into a serial signal and send it to the signal deserializing component through the signal transmission cable; and the signal deserializing component is configured to deserialize the received signal and send the deserialized signal to the processing component.
- The wearable device according to claim 23 or 24, wherein the depth imaging component comprises a TOF camera.
- A guidance system, comprising a wearable device and a host; the wearable device comprises a first visible light imaging component, a second visible light imaging component, and a depth imaging component; the host comprises a processing component, which is connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component through a signal transmission cable and is configured to execute the wearable-device-based intelligent guidance method according to any one of claims 1 to 11.
- The guidance system according to claim 26, wherein the host is provided with at least one of the following connected to the processing component: a positioning module, a network module, a micro-control unit for working-status detection and/or charge management, and an audio module.
- An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the steps of the intelligent guidance method according to any one of claims 1 to 11 are performed.
- A computer-readable storage medium, on which a computer program is stored, wherein when the computer program is run by a processor, the steps of the intelligent guidance method according to any one of claims 1 to 11 are performed.
- A computer program product, comprising one or more instructions adapted to be loaded by a processor to perform the steps of the intelligent guidance method according to any one of claims 1 to 11.
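Claim 3 recovers depth by standard stereo triangulation: for a matched pixel pair with disparity d, imaging-component center distance (baseline) B, and focal length f, the depth is Z = f·B/d. A minimal sketch of that relation follows; the calibration numbers are invented for illustration and are not taken from the patent:

```python
def stereo_depth(disparity_px, baseline_m, focal_px):
    """Depth of a matched pixel pair via stereo triangulation: Z = f * B / d.

    disparity_px: horizontal offset between the matched pixel pair (pixels)
    baseline_m:   center distance of the two visible light imaging components (meters)
    focal_px:     focal length of the first visible light imaging component (pixels)
    """
    if disparity_px <= 0:
        raise ValueError("a valid matched pixel pair must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Assumed calibration: 6 cm baseline, 700 px focal length.
# A disparity of 21 px then places the object 2 m away.
print(stereo_depth(disparity_px=21.0, baseline_m=0.06, focal_px=700.0))  # 2.0
```

Because depth is inversely proportional to disparity, nearby obstacles (large disparity) are measured more precisely than distant ones, which is consistent with claim 7's fallback to ultrasonic detection when the computed depth exceeds a preset threshold.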
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217036427A KR20220044897A (ko) | 2020-09-30 | 2021-04-29 | Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium |
JP2021564133A JP2023502552A (ja) | 2020-09-30 | 2021-04-29 | Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011060870.X | 2020-09-30 | ||
CN202011060870.XA CN112188059B (zh) | 2020-09-30 | 2020-09-30 | Wearable device, intelligent guidance method and apparatus, and guidance system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022068193A1 true WO2022068193A1 (zh) | 2022-04-07 |
Family
ID=73948406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/091150 WO2022068193A1 (zh) | 2020-09-30 | 2021-04-29 | Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2023502552A (zh) |
KR (1) | KR20220044897A (zh) |
CN (1) | CN112188059B (zh) |
WO (1) | WO2022068193A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024114175A1 (zh) * | 2022-11-30 | 2024-06-06 | 微智医疗器械有限公司 | Binocular disparity estimation method, visual prosthesis, and computer-readable storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112188059B (zh) | 2020-09-30 | 2022-07-15 | 深圳市商汤科技有限公司 | Wearable device, intelligent guidance method and apparatus, and guidance system |
CN114690120A (zh) * | 2021-01-06 | 2022-07-01 | 杭州嘉澜创新科技有限公司 | Positioning method, apparatus and system, and computer-readable storage medium |
CN112950699A (zh) * | 2021-03-30 | 2021-06-11 | 深圳市商汤科技有限公司 | Depth measurement method and apparatus, electronic device, and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011250928A (ja) * | 2010-06-01 | 2011-12-15 | Chuo Univ | 視覚障害者用空間認識装置、方法およびプログラム |
WO2016047890A1 (ko) * | 2014-09-26 | 2016-03-31 | 숭실대학교산학협력단 | 보행 보조 방법 및 시스템, 이를 수행하기 위한 기록매체 |
CN106937910A (zh) * | 2017-03-20 | 2017-07-11 | 杭州视氪科技有限公司 | 一种障碍物和坡道检测系统及方法 |
CN107888896A (zh) * | 2017-10-20 | 2018-04-06 | 宁波天坦智慧电子科技股份有限公司 | 一种用于导盲眼镜的障碍判断与提醒方法及一种导盲眼镜 |
CN109120861A (zh) * | 2018-09-29 | 2019-01-01 | 成都臻识科技发展有限公司 | 一种极低照度下的高质量成像方法及系统 |
CN109831660A (zh) * | 2019-02-18 | 2019-05-31 | Oppo广东移动通信有限公司 | 深度图像获取方法、深度图像获取模组及电子设备 |
CN112188059A (zh) * | 2020-09-30 | 2021-01-05 | 深圳市商汤科技有限公司 | 可穿戴设备、智能引导方法及装置、引导系统 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012137434A1 (ja) * | 2011-04-07 | 2012-10-11 | パナソニック株式会社 | 立体撮像装置 |
JP2014067320A (ja) * | 2012-09-27 | 2014-04-17 | Hitachi Automotive Systems Ltd | ステレオカメラ装置 |
CN106038183A (zh) * | 2016-06-29 | 2016-10-26 | 冯伟林 | 一种盲人穿戴设备及导航系统 |
CN106210536A (zh) * | 2016-08-04 | 2016-12-07 | 深圳众思科技有限公司 | 一种屏幕亮度调节方法、装置及终端 |
CN108055452B (zh) * | 2017-11-01 | 2020-09-18 | Oppo广东移动通信有限公司 | 图像处理方法、装置及设备 |
US20190320102A1 (en) * | 2018-04-13 | 2019-10-17 | Qualcomm Incorporated | Power reduction for dual camera synchronization |
WO2020037575A1 (zh) * | 2018-08-22 | 2020-02-27 | 深圳市大疆创新科技有限公司 | 图像深度估计方法、装置、可读存储介质及电子设备 |
CN111698494B (zh) * | 2018-08-22 | 2022-10-28 | Oppo广东移动通信有限公司 | 电子装置 |
CN109712192B (zh) * | 2018-11-30 | 2021-03-23 | Oppo广东移动通信有限公司 | 摄像模组标定方法、装置、电子设备及计算机可读存储介质 |
CN110664593A (zh) * | 2019-08-21 | 2020-01-10 | 重庆邮电大学 | 基于HoloLens的盲人导航系统及方法 |
CN111553862B (zh) * | 2020-04-29 | 2023-10-13 | 大连海事大学 | 一种海天背景图像去雾和双目立体视觉定位方法 |
- 2020-09-30: CN application CN202011060870.XA granted as patent CN112188059B (active)
- 2021-04-29: JP application JP2021564133A published as JP2023502552A (pending)
- 2021-04-29: KR application KR1020217036427A published as KR20220044897A (not active, application discontinued)
- 2021-04-29: WO application PCT/CN2021/091150 published as WO2022068193A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
JP2023502552A (ja) | 2023-01-25 |
CN112188059B (zh) | 2022-07-15 |
CN112188059A (zh) | 2021-01-05 |
KR20220044897A (ko) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022068193A1 (zh) | Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium | |
US10764585B2 (en) | Reprojecting holographic video to enhance streaming bandwidth/quality | |
US11989350B2 (en) | Hand key point recognition model training method, hand key point recognition method and device | |
US20210209859A1 (en) | Cross reality system | |
CN107545302B (zh) | 一种人眼左右眼图像联合的视线方向计算方法 | |
EP3469458B1 (en) | Six dof mixed reality input by fusing inertial handheld controller with hand tracking | |
US9710973B2 (en) | Low-latency fusing of virtual and real content | |
US20130335405A1 (en) | Virtual object generation within a virtual environment | |
US9454006B2 (en) | Head mounted display and image display system | |
US10558260B2 (en) | Detecting the pose of an out-of-range controller | |
US20130326364A1 (en) | Position relative hologram interactions | |
US20120154277A1 (en) | Optimized focal area for augmented reality displays | |
TW201437688A (zh) | 頭部配戴型顯示裝置、頭部配戴型顯示裝置之控制方法及顯示系統 | |
KR20150143612A (ko) | 펄스형 광원을 이용한 근접 평면 분할 | |
WO2015093130A1 (ja) | 情報処理装置、情報処理方法およびプログラム | |
CN108919498A (zh) | 一种基于多模态成像和多层感知的增强现实眼镜 | |
KR20210052570A (ko) | 분리가능한 왜곡 불일치 결정 | |
CN108415875A (zh) | 深度成像移动终端及人脸识别应用的方法 | |
WO2019061466A1 (zh) | 一种飞行控制方法、遥控装置、遥控系统 | |
CN204258990U (zh) | 智能头戴显示装置 | |
US11727769B2 (en) | Systems and methods for characterization of mechanical impedance of biological tissues | |
CN113822174A (zh) | 视线估计的方法、电子设备及存储介质 | |
CN218045797U (zh) | 一种盲人穿戴智慧云眼镜及系统 | |
EP4231635A1 (en) | Efficient dynamic occlusion based on stereo vision within an augmented or virtual reality application | |
KR20230079618A (ko) | 인체를 3차원 모델링하는 방법 및 장치 |
Legal Events
- ENP (Entry into the national phase): Ref document number: 2021564133; Country of ref document: JP; Kind code of ref document: A
- 121 (EP: the EPO has been informed by WIPO that EP was designated in this application): Ref document number: 21873856; Country of ref document: EP; Kind code of ref document: A1
- NENP (Non-entry into the national phase): Ref country code: DE
- 32PN (EP: public notification in the EP bulletin as address of the addressee cannot be established): Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03-07-2023)
- 122 (EP: PCT application non-entry in European phase): Ref document number: 21873856; Country of ref document: EP; Kind code of ref document: A1