WO2022068193A1 - Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium - Google Patents

Wearable device, intelligent guidance method and apparatus, guidance system, and storage medium

Info

Publication number
WO2022068193A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene image
information
scene
component
wearable device
Prior art date
Application number
PCT/CN2021/091150
Other languages
English (en)
French (fr)
Inventor
曹莉
向许波
佘忠华
刘锦金
张英宜
Original Assignee
深圳市商汤科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司
Priority to KR1020217036427A (published as KR20220044897A)
Priority to JP2021564133A (published as JP2023502552A)
Publication of WO2022068193A1

Links

Images

Classifications

    • A61H 3/061: Walking aids for blind persons with electronic detecting or guiding means
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • A61F 9/08: Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
    • G06F 3/16: Sound input; sound output
    • G06T 1/0007: Image acquisition
    • G06T 7/50: Depth or shape recovery
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 25/705: Pixels for depth measurement, e.g. RGBZ
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04W 4/024: Guidance services
    • A61H 2201/165: Wearable interfaces
    • A61H 2201/50: Control means thereof
    • A61H 2201/5048: Audio interfaces, e.g. voice or music controlled
    • A61H 2201/5058: Sensors or detectors

Definitions

  • The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202011060870.X, filed on September 30, 2020 and entitled "Wearable device, intelligent guidance method and apparatus, and guidance system"; the entire content of that Chinese patent application is incorporated into the present disclosure by reference.
  • The present application relates to the technical field of image processing, and in particular to a wearable device, an intelligent guidance method and apparatus, a guidance system, and a storage medium.
  • Cameras used in existing blind-guidance technology adapt poorly to the environment and cannot cope well with scenes in which the light intensity changes greatly: when the light intensity changes greatly, rich environmental information cannot be obtained, and obstacles that lack texture, have a smooth appearance or high transparency, or are far away cannot be effectively detected. Existing blind-guidance technology therefore cannot detect obstacles accurately and cannot provide a highly safe blind-guidance service for people with visual impairments.
  • the embodiments of the present application provide at least a wearable device-based intelligent guidance method and device, a wearable device, a guidance system, and a storage medium, so as to improve the accuracy of depth information detection, and the accuracy and safety of guidance.
  • an embodiment of the present application provides an intelligent guidance method based on a wearable device.
  • the wearable device includes a first visible light camera component, a second visible light camera component, and a depth camera component, and the method includes:
  • Guidance information for the target object wearing the wearable device is generated based on the depth information.
  • In the case where the light intensity is high, that is, when the brightness of the first scene image is high, the second scene image captured by the second visible light imaging component is used to determine the depth information of the object in the scene;
  • in the case where the light intensity is low, that is, when the brightness of the first scene image is low, the second scene image captured by the depth imaging component is used to determine the depth information of the object in the scene.
  • The acquiring, in response to the brightness of the first scene image satisfying a preset brightness, of the second scene image collected by the second visible light imaging component includes:
  • a first exposure instruction is sent to the second visible light imaging component, and the second scene image collected by the second visible light imaging component under exposure control based on the first exposure instruction is acquired;
  • The acquiring, in response to the brightness of the first scene image not meeting the preset brightness, of the second scene image collected by the depth imaging component includes: sending a second exposure instruction to the depth imaging component, and acquiring the second scene image collected by the depth imaging component under exposure control based on the second exposure instruction.
  • In this way, when the light intensity is high the second visible light imaging component is controlled to perform exposure, and when the light intensity is low the depth imaging component is controlled to perform exposure, so that different imaging components are switched according to the ambient light intensity to collect the second scene image used to determine the depth information of the object. This actively adapts to changes in light intensity and obtains richer environmental information, and at the same time makes it possible to detect obstacles lacking texture, which improves the accuracy of obstacle detection and the safety of guidance.
  • the determining the depth information of the object in the scene based on the second scene image includes:
  • based on the center distance between the first visible light imaging component and the second visible light imaging component and the focal length of the first visible light imaging component, the depth information of the object in the scene is determined.
  • the depth information of the objects in the scene can be more accurately determined by combining the first scene image collected by the first visible light imaging component and the second scene image collected by the second visible light imaging component.
  • the determining the depth information of the object in the scene based on the second scene image includes:
  • the depth information of the target pixel in the second scene image is determined.
  • By using the depth image collected by the depth imaging component, that is, the depth information of the pixels in the second scene image collected by the depth imaging component, the depth information of the object in the scene can be determined more accurately, obstacles lacking texture can be detected, and the accuracy of obstacle detection under weak light intensity is improved.
  • the generating guide information for the target object wearing the wearable device based on the depth information includes:
  • prompt information is used to prompt the depth information to the target object wearing the wearable device.
  • Prompting the target object with the depth information of the object in the scene can effectively guide the movement of the target object and improve the guiding efficiency and the guiding safety.
  • it also includes:
  • the generating guidance information for the target object wearing the wearable device based on the depth information includes:
  • the guidance information for the target object wearing the wearable device is generated.
  • guidance information with a larger amount of information and richer content can be generated for the target object, so that the efficiency of guidance and the safety of guidance can be further improved.
  • the wearable device further includes an ultrasonic detection component
  • the method also includes:
  • depth information of objects in the scene is updated.
  • In the case that the depth information of the object in the scene cannot be accurately determined from the second scene image captured by the visible light imaging component or the depth imaging component, or it is detected that the first scene image includes objects of a preset category such as highly transparent objects,
  • the use of ultrasonic detection components for depth information detection improves the applicability of wearable devices and can more accurately detect the depth information of objects in the scene in a more complex environment.
  • the wearable device further includes a posture measurement component
  • the method also includes:
  • posture correction prompt information is generated.
  • Based on the posture information of the wearable device measured by the posture measurement component, posture correction prompt information for reminding the target object to correct its posture can be generated when the target object is in the first preset posture, so that the wearable device can capture objects that may affect the movement of the target object, which further improves the accuracy of the guidance information and the safety of guidance.
  • the method further includes:
  • the generating guidance information for the target object wearing the wearable device based on the depth information includes:
  • guidance information and/or posture correction prompt information for the target object wearing the wearable device is generated.
  • The orientation information of the object in the scene relative to the wearable device is converted into the orientation information of the object in the scene relative to the wearable device in the second preset posture, that is, orientation information that is effective for the movement of the target object is generated, and more accurate and effective guidance information can be generated by using this effective orientation information.
  • the wearable device further includes a sound-emitting component
  • the method also includes:
  • voice navigation information is generated and sent to the sounding component, so that the sounding component plays the voice guidance information to the target object.
  • the voice navigation information and the sounding component can be used to effectively guide the target object to avoid obstacles, thereby improving the guiding efficiency and safety.
  • the wearable device further includes a frequency dividing circuit
  • The acquiring, in response to the brightness of the first scene image satisfying a preset brightness, of the second scene image collected by the second visible light imaging component includes:
  • an exposure instruction is sent to the frequency dividing circuit, the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the third exposure instruction obtained by the frequency division processing to the second visible light imaging component, and the second scene image collected by the second visible light imaging component under exposure control based on the third exposure instruction is acquired;
  • The acquiring, in response to the brightness of the first scene image not meeting the preset brightness, of the second scene image collected by the depth imaging component includes:
  • an exposure instruction is sent to the frequency dividing circuit, the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the fourth exposure instruction obtained by the frequency division processing to the depth imaging component, and the second scene image collected by the depth imaging component under exposure control based on the fourth exposure instruction is acquired.
  • The frequency dividing circuit is used to perform frequency division processing on the exposure instruction, and the exposure instruction obtained by the frequency division processing is used to control the second visible light imaging component or the depth imaging component to capture images, so that imaging components with different frame rates are controlled to be exposed synchronously and energy consumption is saved.
  • an embodiment of the present application provides an intelligent guidance device based on a wearable device, the wearable device includes a first visible light camera component, a second visible light camera component, and a depth camera component, and the device includes:
  • a first image acquisition module configured to acquire the first scene image collected by the first visible light imaging component
  • a brightness detection module configured to detect the brightness of the first scene image
  • a second image acquisition module configured to acquire a second scene image collected by the second visible light imaging component in response to the brightness of the first scene image satisfying a preset brightness, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquire the second scene image collected by the depth imaging component;
  • a detection module configured to determine depth information of objects in the scene based on the second scene image
  • a guidance information generation module is configured to generate guidance information for the target object wearing the wearable device based on the depth information.
  • an embodiment of the present application provides a wearable device, including a processing component, a first visible light imaging component, a second visible light imaging component, and a depth imaging component;
  • the first visible light imaging component is configured to collect a first scene image
  • the second visible light imaging component and the depth imaging component are configured to collect a second scene image
  • the processing component is configured to execute the above-mentioned smart guidance method based on a wearable device.
  • the above-mentioned wearable device further includes a signal serialization component, a signal transmission cable and a signal deserialization component;
  • the signal serial component is communicatively connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component; both ends of the signal transmission cable are respectively connected to the signal serial component and the signal deserialization component ;
  • the signal deserialization unit is communicatively connected to the processing unit;
  • the first visible light imaging component is further configured to send the first scene image to the signal serial component;
  • the second visible light imaging component and the depth imaging component are further configured to send the second scene image to the signal serial component;
  • the signal serial component is configured to convert the received first scene image and the second scene image into serial signals, and send them to the signal deserialization component through the signal transmission cable;
  • the signal deserialization unit is configured to perform deserialization processing on the received signal, and send the signal obtained by the deserialization processing to the processing unit.
  • The signal serialization component is used to convert the images captured by the imaging components into a serial signal, such as a twisted-pair high-speed differential signal, for transmission; the signal can be transmitted using only two wires, the transmission speed is higher, the cost is lower, the transmission distance is longer, and the component size is smaller.
  • the depth camera component includes a TOF camera.
  • the TOF camera can more accurately acquire the depth information of the objects in the scene when the illumination intensity is weak.
  • an embodiment of the present application provides a guidance system, including a wearable device and a host;
  • the wearable device includes a first visible light imaging component, a second visible light imaging component and a depth imaging component;
  • The host includes a processing component; the processing component is connected with the first visible light imaging component, the second visible light imaging component and the depth imaging component through a signal transmission cable, and is configured to execute the above wearable device-based intelligent guidance method.
  • The host is provided with at least one of the following connected to the processing component: a positioning module, a network module, a micro-control unit configured for working-status detection and/or charge management, and an audio module.
  • Embodiments of the present application provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the above-mentioned intelligent guidance method are performed.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the above-mentioned intelligent guidance method are executed.
  • An embodiment of the present application provides a computer program product, where the computer program product includes one or more instructions, and the one or more instructions are suitable for being loaded by a processor to execute the steps of the above-mentioned intelligent guidance method.
  • FIG. 1 shows a flowchart of a wearable device-based smart guidance method provided by an embodiment of the present application
  • FIG. 2 shows a schematic structural diagram of a wearable device provided by an embodiment of the present application
  • FIG. 3 shows a schematic structural diagram of a guidance system provided by an embodiment of the present application
  • FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present application
  • FIG. 5 shows a schematic structural diagram of a wearable device-based intelligent guidance device provided by an embodiment of the present application.
  • Embodiments of the present application provide a wearable device-based intelligent guidance method and device, a wearable device, a guidance system, an electronic device, and a storage medium. When the illumination intensity is high, that is, when the brightness of the first scene image is high, the depth information of the object in the scene is determined by using the second scene image captured by the second visible light imaging component; when the illumination intensity is low, that is, when the brightness of the first scene image is low, the depth information of the object in the scene is determined by using the second scene image captured by the depth imaging component.
  • Embodiments of the wearable device-based intelligent guidance method and device, the wearable device, the guidance system, the electronic device, and the storage medium provided by the present application are described below.
  • the wearable device-based intelligent guidance method provided in the embodiment of the present application may be applied to a processing component, and the processing component may be a component on the wearable device, or may be separately located on a host.
  • the wearable device includes a first visible light imaging part, a second visible light imaging part, and a depth imaging part.
  • the processing unit is connected in communication with the first visible light imaging unit, the second visible light imaging unit, and the depth imaging unit.
  • the wearable device is worn on the head of the target object as a head-mounted device.
  • the first visible light imaging component, the second visible light imaging component, and the depth imaging component may also be combined into a head-mounted device, which is worn on the head of the target object.
  • the processing component is worn or fixed to other parts of the target object, for example, on the target object's arm and the like. This embodiment of the present application does not limit the positions of the above components on the target object.
  • the above-mentioned target object may be a visually impaired object.
  • the target object can be guided to avoid obstacles and walk safely.
  • the intelligent guidance method of the embodiment of the present application is used to detect depth information of an object in a scene, and generate guidance information based on the detected depth information.
  • the above-mentioned smart guidance method based on a wearable device may include the following steps:
  • the first visible light imaging component is configured to capture a first scene image around the wearable device, and send the image to the processing component.
  • the frame rate of the scene image captured by the first visible light imaging component is relatively high, and the position information of the object in the scene relative to the wearable device can be determined according to the position of the object in the first scene image captured by the first visible light imaging component.
  • the first visible light imaging component may be a red green blue (Red Green Blue, RGB) imaging component.
  • A variety of image brightness detection methods can be used, for example, a pre-trained neural network can be used to detect the brightness of the first scene image, or statistics can be computed over the brightness of each region or each pixel in the first scene image, for example by averaging, to obtain the brightness of the first scene image.
  • The brightness of the first scene image can reflect the illumination intensity of the current scene: the stronger the illumination intensity, the higher the brightness of the first scene image, and conversely, the weaker the illumination intensity, the lower the brightness of the first scene image. Therefore, based on the brightness of the first scene image, the illumination intensity of the current scene can be judged, and it is then determined whether the image collected by the second visible light imaging component or the image collected by the depth imaging component is selected to further calculate the depth information of the scene, so as to adapt to changes in the illumination intensity and improve the accuracy of object depth information detection and of the generated guidance information.
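  • As an illustration of the brightness check and camera switching described above, the following Python sketch (hypothetical names and threshold; the embodiments do not prescribe a particular implementation) averages the pixel values of the first scene image and compares the result with a preset brightness to decide which component should supply the second scene image.

```python
import numpy as np

PRESET_BRIGHTNESS = 60.0  # assumed threshold on a 0-255 gray scale

def image_brightness(first_scene_image: np.ndarray) -> float:
    """Estimate scene brightness as the mean gray value of the RGB image."""
    gray = first_scene_image.mean(axis=2)   # average the R, G, B channels per pixel
    return float(gray.mean())

def choose_depth_source(first_scene_image: np.ndarray) -> str:
    """Select which component supplies the second scene image."""
    if image_brightness(first_scene_image) >= PRESET_BRIGHTNESS:
        return "second_visible_light_component"  # strong light: binocular RGB pair
    return "depth_component"                     # weak light: TOF depth camera
```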
  • S130: In response to the brightness of the first scene image meeting a preset brightness, acquire a second scene image collected by the second visible light imaging component, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquire the second scene image collected by the depth imaging component.
  • the illumination intensity of the current scene is relatively large, and at this time, the second scene image can be acquired from the second visible light imaging component.
  • the visible light imaging component has a better imaging effect, and the scene images captured by the two visible light imaging components can be used to determine the depth information of the objects in the scene.
  • the imaging effect of using the two visible light imaging components is poor, and accurate depth information cannot be obtained.
  • the depth information can be acquired by using the depth camera component to collect the depth image.
  • the illumination intensity of the current scene is relatively weak, and at this time, the second scene image can be acquired from the depth imaging component.
  • the scene image captured by the depth camera component can be used to detect the depth information of objects in the scene.
  • the problem that obstacles cannot be detected due to weak light intensity can be overcome.
  • The scene images captured by the two visible light imaging components cannot be used to effectively detect the depth information of obstacles that lack texture, such as the sky, deserts, and white walls; however, the scene image captured by the depth imaging component can effectively detect the depth information of such texture-lacking obstacles.
  • the second visible light imaging component may be an RGB imaging component.
  • the depth camera component may be a camera based on Time of Flight (TOF) imaging, that is, a TOF camera. Using the TOF camera can accurately obtain the depth information of the objects in the scene in the case of weak light intensity.
  • the depth information of the object in the scene corresponding to the object in the image can be calculated based on the first scene image and the second scene image according to the principle of binocular vision.
  • the illumination intensity of the current scene is relatively high, the depth information of the objects in the scene can be more accurately determined by using the two scene images collected by the two visible light imaging components.
  • the pixels representing the objects in the scene in the first scene image are matched with the pixels representing the objects in the scene in the second scene image to obtain pixel pairs;
  • then, based on the positions of the pixels in each pixel pair, the center distance between the first visible light imaging component and the second visible light imaging component, and the focal length of the first visible light imaging component, the depth information of the object in the scene is determined.
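  • For reference, the relation behind this binocular step is the standard stereo formula depth = focal_length x baseline / disparity. A minimal sketch, assuming rectified images and already matched pixel pairs (variable names are illustrative, not taken from the embodiments):

```python
def binocular_depth(x_first: float, x_second: float,
                    baseline_m: float, focal_px: float) -> float:
    """Depth (in metres) of one matched pixel pair from a rectified stereo pair.

    x_first / x_second : horizontal pixel coordinates of the same scene point in the
                         first and second scene images
    baseline_m         : centre distance between the two visible light imaging components
    focal_px           : focal length of the first visible light imaging component, in pixels
    """
    disparity = x_first - x_second
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal_px * baseline_m / disparity
```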
  • the depth information of the object in the scene may be directly determined based on the depth information perceived by the depth imaging component.
  • the TOF camera can more accurately obtain the depth information of the objects in the scene under the condition of weak illumination intensity.
  • the position of the object in the scene can be detected from the second scene image through target detection, and then the depth information of the pixel at the corresponding position in the second scene image is determined as the depth information of the object according to the detection result.
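  • A sketch of this step, assuming the depth imaging component returns a per-pixel depth map and that a separate target-detection step has produced a bounding box (both names are illustrative):

```python
import numpy as np

def object_depth_from_depth_image(depth_image: np.ndarray,
                                  bbox: tuple) -> float:
    """Take the depth of a detected object from the depth (e.g. TOF) scene image.

    depth_image : per-pixel depth values, e.g. in metres
    bbox        : (x_min, y_min, x_max, y_max) of the detected object
    """
    x_min, y_min, x_max, y_max = bbox
    region = depth_image[y_min:y_max, x_min:x_max]
    valid = region[region > 0]            # drop pixels with no depth return
    return float(np.median(valid)) if valid.size else float("nan")
```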
  • Prompt information may be generated based on the depth information, where the prompt information is used to prompt the depth information to the target object wearing the wearable device. Prompting the target object with the depth information of the object in the scene can effectively guide the movement of the target object and improve the guiding efficiency and the guiding safety.
  • the following steps can be used to generate the guidance information:
  • the guidance information generated based on the orientation information and the depth information can further improve the efficiency of guidance and the safety of guidance.
  • the orientation information of the object relative to the wearable device may be determined according to the position of the object in the first scene image.
  • the orientation information may include an orientation angle, or the orientation information may represent one of pre-divided orientations, and the pre-divided orientations may include, for example, front left, front right, and straight ahead.
  • For example, in response to object A being located on the left side of the first scene image, it is determined that the orientation information of the object relative to the wearable device is front left; in response to object A being located on the right side of the first scene image, it is determined that the orientation information of the object relative to the wearable device is front right; and in response to object A being located in the middle of the first scene image, it is determined that the orientation information of the object relative to the wearable device is straight ahead.
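  • A minimal sketch of mapping the horizontal position of an object in the first scene image to one of the pre-divided orientations (the one-third and two-thirds thresholds are assumptions for illustration):

```python
def orientation_from_position(x_center: float, image_width: int) -> str:
    """Map an object's horizontal position in the first scene image to a pre-divided orientation."""
    ratio = x_center / image_width
    if ratio < 1.0 / 3.0:
        return "front left"
    if ratio > 2.0 / 3.0:
        return "front right"
    return "straight ahead"

# e.g. an object centred at pixel 1500 in a 1920-pixel-wide image is reported as front right
print(orientation_from_position(1500, 1920))
```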
  • switching different camera components to capture the second scene image according to the light intensity specifically includes the following steps:
  • The processing component sends a first exposure instruction to the second visible light imaging component; the second visible light imaging component performs exposure based on the first exposure instruction and collects a second scene image; the processing component then acquires the second scene image from the second visible light imaging component.
  • the second visible light imaging component may also collect the second scene image all the time, and only send the second scene image collected at the corresponding moment to the processing component in the case of receiving the first exposure instruction sent by the processing component.
  • The processing component sends a second exposure instruction to the depth imaging component; the depth imaging component performs exposure based on the second exposure instruction and collects a second scene image; the processing component then acquires the second scene image from the depth imaging component.
  • the depth imaging component may also collect the second scene image all the time, and only send the second scene image collected at the corresponding moment to the processing component in the case of receiving the second exposure instruction sent by the processing component.
  • When the light intensity is high, the second visible light imaging component is controlled to perform exposure, and when the light intensity is low, the depth imaging component is controlled to perform exposure, so that different imaging components are actively switched according to the ambient light intensity to collect the second scene image used to determine the depth information of the object. In this way, the intelligent guidance method of the embodiment of the present application can adapt to changes in light intensity, obtain relatively rich environmental information, and at the same time detect obstacles lacking texture, which improves the accuracy of obstacle detection and the safety of guidance.
  • Since the scene image captured by the second visible light imaging component is not used to determine the orientation information of the object in the scene relative to the wearable device and/or the type information of the object, its frame rate can be lower than that of the first visible light imaging component. In this way, the requirement of depth information detection can still be met, while the power consumption of the second visible light imaging component and the heat generated by the wearable device are reduced.
  • A frequency dividing circuit can be used to realize synchronous exposure of the first visible light imaging component and the second visible light imaging component at different frame rates: when the brightness of the first scene image meets the preset brightness, the processing component sends an exposure instruction to the frequency dividing circuit, and the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the third exposure instruction obtained by the frequency division processing to the second visible light imaging component. Then, the processing component acquires the second scene image collected by the second visible light imaging component under exposure control based on the third exposure instruction.
  • The first visible light imaging component can perform exposure directly based on the exposure instruction sent by the processing component; that is, while sending an exposure instruction to the frequency dividing circuit, the processing component also sends an exposure instruction to the first visible light imaging component.
  • The depth imaging component can be controlled to be exposed at a frequency lower than that of the first visible light imaging component, so the frame rate of the images captured by the depth imaging component is lower than that of the first visible light imaging component. This not only satisfies the demand of depth information detection, but also reduces the power consumption of the depth imaging component and the heat generated by the wearable device.
  • A frequency dividing circuit can be used to realize synchronous exposure of the first visible light imaging component and the depth imaging component at different frame rates: in the case that the brightness of the first scene image does not meet the preset brightness, the processing component sends an exposure instruction to the frequency dividing circuit, and the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the fourth exposure instruction obtained by the frequency division processing to the depth imaging component. After that, the processing component acquires the second scene image collected by the depth imaging component under exposure control based on the fourth exposure instruction.
  • The first visible light imaging component can likewise perform exposure directly based on the exposure instruction sent by the processing component; that is, while sending an exposure instruction to the frequency dividing circuit, the processing component also sends an exposure instruction to the first visible light imaging component.
  • The frequency dividing circuit is used to perform frequency division processing on the exposure instruction, and the exposure instruction obtained by the frequency division processing is used to control the second visible light imaging component or the depth imaging component to capture images, which realizes synchronous exposure of imaging components controlled at different frame rates and saves energy consumption.
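  • The effect of the frequency dividing circuit can be pictured, in software terms, as forwarding only every N-th exposure instruction to the lower-frame-rate component while the first visible light imaging component is triggered every time; the divide ratio below is an assumption for illustration, and the actual component is a hardware frequency divider chip.

```python
class ExposureDivider:
    """Software analogue of the frequency dividing circuit."""

    def __init__(self, divide_by: int = 3):   # assumed ratio, fixed by the hardware design
        self.divide_by = divide_by
        self.count = 0

    def on_exposure_instruction(self) -> dict:
        """Return which components are triggered by this exposure instruction."""
        self.count += 1
        return {
            "first_visible_light_component": True,                   # full frame rate
            "divided_component": self.count % self.divide_by == 0,   # second visible light or depth component
        }

divider = ExposureDivider()
for _ in range(6):
    print(divider.on_exposure_instruction())
```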
  • The images captured by the visible light imaging components and the depth imaging component can be used to detect the depth information of objects in the scene, but these approaches are not well suited to detecting the depth information of distant objects, nor can they effectively detect the depth information of objects with high transparency and a smooth appearance, such as glass and water surfaces; in such cases the detected depth information may be wrong. Therefore, after obtaining the detected depth information and the categories of the objects in the scene, the processing component determines whether the depth information is obviously unreasonable, that is, whether the depth corresponding to the depth information is greater than a preset depth threshold, and/or whether objects of preset categories such as glass and water surfaces are detected in the scene.
  • the processing component In response to the processing component determining that the depth corresponding to the depth information is greater than a preset depth threshold and/or detecting that the first scene image contains objects of a preset category, the processing component sends an ultrasonic detection instruction to the ultrasonic detection component, and the ultrasonic detection component The ultrasonic wave is sent based on the ultrasonic detection instruction, and the detected ultrasonic feedback signal is sent to the processing unit.
  • the processing component updates depth information of objects in the scene based on the received ultrasonic feedback signal.
  • The detection accuracy of the ultrasonic detection component for the depth information of objects is relatively high. In circumstances where the depth information of the object in the scene cannot be accurately determined from the second scene image captured by the visible light imaging component or the depth imaging component, because of a complex environment or the characteristics of the object itself, using the ultrasonic detection component to detect the depth information improves the applicability of the wearable device and allows the depth information of objects in the scene to be detected more accurately in more complex environments.
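  • A sketch of the fallback decision described above (the threshold value, category names and the read_ultrasonic callable are assumptions for illustration):

```python
PRESET_DEPTH_THRESHOLD_M = 5.0                     # assumed preset depth threshold
PRESET_CATEGORIES = {"glass", "water_surface"}     # categories the cameras handle poorly

def needs_ultrasonic(depth_m: float, detected_categories: set) -> bool:
    """Trigger ultrasonic detection when camera-derived depth is unreliable."""
    return depth_m > PRESET_DEPTH_THRESHOLD_M or bool(PRESET_CATEGORIES & detected_categories)

def update_depth(depth_m: float, detected_categories: set, read_ultrasonic) -> float:
    """Replace the camera-derived depth with the ultrasonic measurement when required.

    read_ultrasonic: callable returning the distance (in metres) derived from the
    ultrasonic feedback signal; its implementation depends on the actual hardware.
    """
    return read_ultrasonic() if needs_ultrasonic(depth_m, detected_categories) else depth_m
```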
  • The target object wearing the above-mentioned wearable device may perform certain actions while moving that make it impossible to accurately detect, from the captured scene image, objects that may hinder the movement of the target object. For example, if the target object tilts its head at a large angle, the scene captured by the wearable device is concentrated on the ground, and the obtained scene image cannot be used to accurately detect obstacles in front of or to the side of the target object that may affect its movement.
  • the above-mentioned wearable device may further include an attitude measurement component, and the attitude measurement component is communicatively connected with the processing component.
  • the attitude measurement component measures the attitude information of the wearable device, where the attitude information of the wearable device and the attitude information of the target object are considered to be the same.
  • the attitude measurement component sends the measured attitude information to the processing component.
  • the above method further includes: based on the received posture information, judging whether the posture of the target object wearing the wearable device is in a first preset posture, and if so, generating posture correction prompt information.
  • The above-mentioned first preset posture is a posture in which objects that may affect the movement of the target object cannot be captured, for example, the head being tilted at a relatively large angle.
  • the posture correction prompt information is used to prompt the target object to correct the current posture, which further improves the accuracy of the guidance information and the safety of the guidance.
  • Guidance information may also be generated based on the posture information. For example, based on the posture information of the wearable device, the orientation information of the object relative to the wearable device is converted into the orientation information of the object relative to the wearable device in the second preset posture; based on the converted orientation information and the depth information of the object in the scene, guidance information and/or posture correction prompt information for the target object wearing the wearable device is generated.
  • The orientation information can be converted by the following steps: determining the attitude difference information between the second preset attitude and the current attitude information of the wearable device, and converting the orientation information of the object relative to the wearable device based on the attitude difference information to obtain the converted orientation information. For example, in response to the attitude difference information indicating that the wearable device is in a head-up posture of 80 degrees, if the determined orientation information indicates that the object is located directly in front of the wearable device, the converted orientation information indicates that the object is located above the wearable device; in response to the attitude difference information indicating that the wearable device is in a posture of turning the head 60 degrees to the left, if the determined orientation information indicates that the object is located directly in front of the wearable device, the converted orientation information indicates that the object is located to the front left of the wearable device.
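  • A simplified sketch of this conversion, restricted to rotation about the vertical axis (yaw) and using illustrative angle conventions (0 degrees = straight ahead, negative = left, positive = right); the actual attitude measurement component reports full attitude information:

```python
def convert_orientation(object_bearing_deg: float,
                        current_yaw_deg: float,
                        preset_yaw_deg: float = 0.0) -> float:
    """Convert an object's bearing relative to the device in its current attitude into the
    bearing it would have with the device in the second preset posture (travel direction)."""
    attitude_difference = current_yaw_deg - preset_yaw_deg
    converted = object_bearing_deg + attitude_difference
    return (converted + 180.0) % 360.0 - 180.0     # wrap into (-180, 180]

# Example from the description: with the device turned 60 degrees to the left (yaw = -60),
# an object straight ahead of the device (bearing 0) lies to the front left of the travel direction.
print(convert_orientation(0.0, current_yaw_deg=-60.0))   # -> -60.0, i.e. front left
```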
  • The depth information of the object in the scene is used to represent the distance between the object and the wearable device. After the posture of the wearable device changes, the position of the wearable device does not change significantly, so the distance between the object and the wearable device does not change significantly either. Therefore, the depth information of the objects in the scene does not need to be converted, and it still accurately represents the distance between the object and the wearable device.
  • the above-mentioned second preset posture is, for example, a posture of the wearable device toward the traveling direction of the target object.
  • The converted orientation information can be used to determine whether the corresponding object is located on the travel path of the target object, that is, whether the object in the scene is an obstacle to the movement of the target object. In the case that the object is an obstacle for the target object, guidance information is generated based on the converted orientation information to guide the target object around the obstacle.
  • The orientation information of the object in the scene relative to the wearable device is converted into the orientation information of the object in the scene relative to the wearable device in the second preset posture, that is, orientation information that is effective for the movement of the target object is generated, and more accurate and effective guidance information can be generated by using this effective orientation information.
  • the attitude measurement component may be a gyroscope, such as a nine-axis gyroscope.
  • Since the amount of image data collected by each of the above-mentioned imaging components is relatively large, the wearable device may further include a signal serialization component, a signal transmission cable and a signal deserialization component to realize signal transmission.
  • the signal serial component is communicatively connected to the first visible light imaging component, the second visible light imaging component, and the depth imaging component; both ends of the signal transmission cable are respectively connected to the signal serial component and the signal deserialization component ; the signal deserialization part is connected in communication with the processing part.
  • the first visible light imaging part sends the first scene image to the signal serial part; the second visible light imaging part and the depth imaging part send the second scene image to the signal serial part component; the signal serial component converts the received first scene image and the second scene image into serial signals, and sends them to the signal deserialization component through the signal transmission cable; the signal deserialization component sends The received signal is deserialized, and the deserialized signal is sent to the processing unit.
  • the above-mentioned signal transmission using the signal serial component, the signal transmission cable and the signal deserialization component may be, for example, the V-BY-ONE twisted pair technology.
  • V-BY-ONE is a serializer interface technology for video signal transmission.
  • V-BY-ONE twisted pair technology has fewer transmission lines, only two lines are required, and it is lighter; it has lower requirements for transmission lines, no shielding, and saves costs; higher transmission bandwidth, up to 3.75 gigabits per second (Gbps); the transmission distance is longer, and the high-quality transmission distance can reach 15 meters; the chip size is smaller, which is more conducive to the design of portable wearable products, for example, it can be packaged as 5 millimeters (mm) by 5 millimeters (mm).
  • the above signal transmission cable adopts V-BY-ONE twisted pair connection, which is resistant to bending, stretching, light and soft.
  • Using the signal serialization component to convert the images captured by the imaging components into a serial signal, such as a twisted-pair high-speed differential signal, for transmission, the signal can be transmitted using only two wires, the transmission speed is higher, the cost is lower, the transmission distance is longer, and the component size is smaller.
  • the wearable device may further include a sound-emitting part, and the sound-emitting part is connected in communication with the processing part.
  • the playing of the guidance information may be implemented by the following steps: generating voice guidance information based on the guidance information, and sending it to the sounding component, and the sounding component plays the voice guidance information to the target object.
  • voice navigation information and sound components can effectively guide the target object to avoid obstacles, which improves the guidance efficiency and safety.
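  • For illustration, a possible way to compose the voice guidance text from the converted orientation and the depth information (the wording and the text-to-speech hand-off are assumptions, not prescribed by the embodiments):

```python
def make_voice_guidance(orientation: str, depth_m: float) -> str:
    """Compose a spoken prompt from orientation and depth information."""
    return f"Obstacle {orientation}, about {depth_m:.1f} metres away; please move around it."

# The resulting text would then be synthesised and played through the sound-emitting
# component, for example a bone conduction headset.
print(make_voice_guidance("front left", 1.8))
```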
  • the above-mentioned processing component may be a system on chip (System on Chip, SOC), and the sound-generating component may be an audio bone conduction earphone, and the above-mentioned audio bone conduction earphone is also used for man-machine dialogue.
  • The embodiment of the present application integrates the first visible light imaging component, the second visible light imaging component, and the depth imaging component, and can actively switch imaging components after judging the intensity of the light, so as to improve detection accuracy and adapt to a changeable lighting environment.
  • the embodiment of the present application also integrates an ultrasonic detection component and an attitude measurement component, uses ultrasonic waves to obtain relatively accurate depth information to make up for the deficiencies of the camera component, and uses the attitude measurement component to obtain attitude information to optimize detection accuracy.
  • The intelligent guidance method in the above embodiment can perform functions such as path planning, positioning, obstacle detection and voice prompting, with higher accuracy and stronger environmental adaptability, enabling people with visual impairments to travel independently, more conveniently and more safely.
  • An embodiment of the present application also provides a wearable device-based intelligent guidance device; the functions implemented by the modules in the device are the same as those of the corresponding steps in the above intelligent guidance method executed by the processing component.
  • the wearable device includes a first visible light imaging component, a second visible light imaging component, and a depth imaging component.
  • the intelligent guidance device may include:
  • the first image acquisition module 510 is configured to acquire the first scene image acquired by the first visible light imaging component.
  • the brightness detection module 520 is configured to detect the brightness of the first scene image.
  • The second image acquisition module 530 is configured to acquire, in response to the brightness of the first scene image satisfying a preset brightness, a second scene image collected by the second visible light imaging component, and/or, in response to the brightness of the first scene image not meeting the preset brightness, acquire the second scene image collected by the depth imaging component.
  • the detection module 540 is configured to determine depth information of objects in the scene based on the second scene image.
  • the guidance information generation module 550 is configured to generate guidance information for the target object wearing the wearable device based on the depth information.
  • When acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image satisfying a preset brightness, the second image acquisition module 530 is configured to:
  • a first exposure instruction is sent to the second visible light imaging component, and the second scene image collected by the second visible light imaging component under exposure control based on the first exposure instruction is acquired;
  • When acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module 530 is configured to:
  • When determining the depth information of the object in the scene based on the second scene image, the detection module 540 is configured to:
  • based on the center distance between the first visible light imaging component and the second visible light imaging component and the focal length of the first visible light imaging component, the depth information of the object in the scene is determined.
  • the detection module 540 determines depth information of objects in the scene based on the second scene image, including:
  • the depth information of the target pixel in the second scene image is determined.
  • the guidance information generation module 550 generates guidance information for the target object wearing the wearable device based on the depth information, including:
  • prompt information is used to prompt the depth information to the target object wearing the wearable device.
  • the detection module 540 is further configured to:
  • When generating guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module 550 is configured to:
  • the guidance information for the target object wearing the wearable device is generated.
  • the wearable device further includes an ultrasonic detection component
  • the detection module 540 is further configured to:
  • depth information of objects in the scene is updated.
  • the wearable device further includes an attitude measurement component
  • the detection module 540 is further configured to:
  • posture correction prompt information is generated.
  • the detection module 540 is further configured to:
  • When generating guidance information for the target object wearing the wearable device based on the depth information, the guidance information generation module 550 is configured to:
  • guidance information and/or posture correction prompt information for the target object wearing the wearable device is generated.
  • the wearable device further includes a sound-emitting component
  • the guidance information generation module 550 is further configured to:
  • voice navigation information is generated and sent to the sounding component, so that the sounding component plays the voice guidance information to the target object.
  • the wearable device further includes a frequency dividing circuit
  • When acquiring the second scene image collected by the second visible light imaging component in response to the brightness of the first scene image meeting the preset brightness, the second image acquisition module 530 is configured to:
  • an exposure instruction is sent to the frequency dividing circuit, the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the third exposure instruction obtained by the frequency division processing to the second visible light imaging component, and the second scene image collected by the second visible light imaging component under exposure control based on the third exposure instruction is acquired;
  • When acquiring the second scene image collected by the depth imaging component in response to the brightness of the first scene image not meeting the preset brightness, the second image acquisition module 530 is configured to:
  • an exposure instruction is sent to the frequency dividing circuit, the frequency dividing circuit performs frequency division processing on the received exposure instruction and sends the fourth exposure instruction obtained by the frequency division processing to the depth imaging component, and the second scene image collected by the depth imaging component under exposure control based on the fourth exposure instruction is acquired.
  • an embodiment of the present application further provides a wearable device, and the functions implemented by each component in the wearable device are the same as the corresponding components in the above-mentioned embodiments.
  • In FIG. 2, reference numeral 21 denotes the head-mounted device part and reference numeral 22 denotes the host side; the head-mounted device part and the host side are connected by a signal transmission cable 208.
  • the wearable device may include: a processing part 201 , a first visible light imaging part 202 , a second visible light imaging part 203 and a depth imaging part 204 .
  • The first visible light imaging component 202 is configured to collect the first scene image; the second visible light imaging component 203 and the depth imaging component 204 are configured to collect the second scene image; the processing component 201 is configured to perform the above-mentioned wearable device-based intelligent guidance method.
  • the above-mentioned wearable device may further include an ultrasonic detection part 205 , an attitude measurement part 206 , a signal serial part 207 , a signal transmission cable 208 , a signal deserialization part 209 , and a sounding part 210 .
  • the functions implemented by the ultrasonic detection part 205 , the attitude measurement part 206 , the signal serial part 207 , the signal transmission cable 208 , the signal deserialization part 209 , and the sounding part 210 are the same as the corresponding parts in the intelligent guidance method of the above-mentioned embodiment.
  • the wearable device may further include a micro-control component MCU 211, a WIFI component 212 and a GPS component 213.
  • the micro-control component MCU 211 is configured to manage the charging of the wearable device and detect the state of the whole machine;
  • the GPS component 213 is configured to locate the wearable device;
  • the WIFI component 212 is configured to send the first scene image, the second scene image, the depth information and other data stored in the wearable device to a remote server.
  • the block diagram 21 of the head mounted device includes two visible light camera components 202 and 203, and a depth camera component 204.
  • one of the RGB camera components serves as the main camera component and outputs a synchronization trigger signal to the other RGB camera component, and at the same time outputs a synchronization trigger signal to the depth camera component through a frequency divider chip; in this way, the two RGB camera components 202 and 203 and the depth camera component 204 can expose simultaneously as three camera channels even at different frame rates (a timing sketch follows below).
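The effect of the divider chip can be pictured on a timeline: both RGB cameras follow every synchronization pulse, while the depth camera only sees every Nth pulse, so whenever it does fire it fires on exactly the same instant as the RGB pair. The snippet below is only a timing illustration; the 30 Hz main rate and the divide-by-3 ratio are assumed figures, not specifications from the application.

```python
MAIN_FPS = 30     # assumed frame rate of the main RGB camera
DIVIDE_BY = 3     # assumed ratio of the frequency divider chip

def trigger_timeline(duration_s: float = 0.2):
    """Return (time, cameras) pairs describing which cameras expose on each sync pulse."""
    events = []
    period = 1.0 / MAIN_FPS
    for n in range(int(duration_s * MAIN_FPS)):
        cameras = ["rgb_main", "rgb_second"]   # both RGB cameras follow every pulse
        if n % DIVIDE_BY == 0:                 # the divider forwards every Nth pulse
            cameras.append("tof_depth")        # the depth camera exposes on the same instant
        events.append((round(n * period, 4), cameras))
    return events

for t, cams in trigger_timeline():
    print(f"{t:6.4f} s  {' + '.join(cams)}")
```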
  • the image signals of the three camera channels are converted into twisted-pair high-speed differential signals by the signal serializing component 207 (a V-BY-ONE serializer chip) and connected to the host side block diagram 22 through the signal transmission cable 208; the attitude measurement component 206 (a gyroscope) is used to monitor the user's posture information at all times, and the ultrasonic detection component 205 is used to detect distance information for objects such as glass walls and water surfaces.
  • the host side block diagram 22 includes a processing component 201 (an SOC component), a Global Positioning System (GPS) component 213, a WIFI component 212, an integrated V-BY-ONE deserializer chip (signal deserializing component) 209, an MCU (micro-control component) 211 and a sound-emitting component 210, wherein the processing component 201 adopts an artificial intelligence processing module with strong computing power.
  • the integrated V-BY-ONE deserializer chip 209 sends the received Mobile Industry Processor Interface Camera Serial Interface (MIPI-CSI) data to the Mobile Industry Processor Interface Physical layer (MIPI PHY) input interface of the processing component 201, where it is further processed by algorithms; reminders are then broadcast through the sound-emitting component 210, an audio bone-conduction headset that can also be used for human-machine voice dialogue.
  • the MCU 211 is used to detect the working state of the whole machine and to carry out charging management;
  • the GPS component 213 uses real-time kinematic (RTK) technology for precise positioning;
  • the WIFI component 212 is used for data uploading and network synchronization.
  • the signal transmission cable 208 is connected by a V-BY-ONE twisted pair.
  • an embodiment of the present application provides a guidance system 300, including a wearable device 301 and a host 302.
  • the wearable device 301 includes a first visible light imaging component, a second visible light imaging component, and a depth imaging component;
  • the host 302 includes a processing component; the processing component is connected to the first visible light imaging component, the second visible light imaging component and the depth imaging component through a signal transmission cable, and is configured to execute the above-mentioned wearable-device-based smart guidance method.
  • the host 302 may be provided with at least one of the following connected to the processing component: a positioning module, a network module, a micro-control unit configured to detect working status and/or charge management, and an audio module.
  • the embodiment of the present application also provides an electronic device 400.
  • as shown in FIG. 4, the schematic structural diagram of the electronic device 400 provided by the embodiment of the present application includes a processor 41, a memory 42 and a bus 43; when the electronic device 400 runs, the processor 41 and the memory 42 communicate through the bus 43, so that the processor 41 executes the following instructions:
  • acquire a first scene image collected by a first visible light imaging component; detect the brightness of the first scene image; acquire, in response to the brightness of the first scene image meeting a preset brightness, the second scene image collected by the second visible light imaging component, and/or, acquire, in response to the brightness of the first scene image not meeting the preset brightness, the second scene image collected by the depth imaging component; determine depth information of the objects in the scene based on the second scene image; and generate guidance information for the target object wearing the wearable device based on the depth information (an end-to-end sketch of these steps follows below).
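Taken together, those instructions form a small acquire-measure-switch-estimate-guide loop. The Python sketch below is a minimal illustration under stated assumptions: the brightness measure (mean grey level against a fixed threshold), the camera interfaces and the guidance wording are invented for the example and are not the application's concrete implementation.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 80.0  # assumed preset brightness on a 0-255 grey scale

def scene_brightness(image: np.ndarray) -> float:
    """Average grey level of the first scene image."""
    return float(image.mean())

def guide_once(first_rgb_cam, second_rgb_cam, depth_cam, stereo_depth, tof_depth) -> str:
    # 1. acquire the first scene image from the first visible light camera
    first_image = first_rgb_cam.read_frame()
    # 2. detect its brightness
    bright = scene_brightness(first_image) >= BRIGHTNESS_THRESHOLD
    # 3. choose the source of the second scene image according to the brightness
    if bright:
        second_image = second_rgb_cam.read_frame()
        depth_m = stereo_depth(first_image, second_image)   # binocular depth in bright scenes
    else:
        second_image = depth_cam.read_frame()
        depth_m = tof_depth(second_image)                    # TOF depth in dark scenes
    # 4./5. turn the depth information into a guidance prompt for the wearer
    return f"obstacle about {depth_m:.1f} m ahead"
```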
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the intelligent guidance method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the computer program product of the intelligent guidance method provided by the embodiments of the present application includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the intelligent guidance method described in the above method embodiments; for details, reference may be made to the above method embodiments.
  • the embodiments of the present application further provide a computer program, which implements any one of the methods in the foregoing embodiments when the computer program is executed by a processor.
  • the computer program product may be implemented, for example, in hardware, software, or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • in essence, the technical solution of the present application, or the part thereof that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program code.
  • when the illumination intensity is high, that is, when the brightness of the first scene image is high, the second scene image captured by the second visible light imaging component is used to determine the depth information of the objects in the scene;
  • when the illumination intensity is low, that is, when the brightness of the first scene image is low, the second scene image captured by the depth imaging component is used to determine the depth information of the objects in the scene (a sketch of the stereo depth relation used in the bright case follows below).
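In the bright case, the depth of each matched pixel pair follows the standard binocular relation depth = focal length × baseline / disparity, where the baseline is the centre distance between the two visible light cameras. The snippet below only illustrates that relation; the focal length, baseline and disparity figures are made-up numbers, not calibration values from the application.

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Binocular depth Z = f * B / d, with f in pixels, B in metres and d in pixels."""
    if disparity_px <= 0:
        raise ValueError("a matched pixel pair must have positive disparity")
    return focal_length_px * baseline_m / disparity_px

# illustrative numbers: 700 px focal length, 6 cm camera centre distance, 21 px disparity
print(depth_from_disparity(21.0, focal_length_px=700.0, baseline_m=0.06))  # -> 2.0 m
```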

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Vascular Medicine (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a wearable device, a smart guidance method and apparatus, and a guidance system. In the embodiments of the present application, when the illumination intensity is high, that is, when the brightness of the first scene image is high, the second scene image captured by the second visible light imaging component is used to determine the depth information of objects in the scene; when the illumination intensity is low, that is, when the brightness of the first scene image is low, the second scene image captured by the depth imaging component is used to determine the depth information of objects in the scene. In this way, images collected by different components are selected according to the ambient illumination intensity to provide a guidance service for the target object, which can effectively adapt to changes in illumination intensity and acquire richer environment information; at the same time, obstacles lacking texture can be detected, which improves the accuracy of obstacle detection and the safety of guidance.

Description

可穿戴设备、智能引导方法及装置、引导系统、存储介质
相关申请的交叉引用
本公开基于申请号为202011060870.X、申请日为2020年9月30日、申请名称为“可穿戴设备、智能引导方法及装置、引导系统”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此以引入方式并入本公开作为参考。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种可穿戴设备、智能引导方法及装置、可穿戴设备、引导系统、存储介质。
背景技术
存在视力障碍人的数量巨大,这部分人的出行范围受限制,活动范围很小,并且出行中存在很大的安全隐患,造成该部分人的生活质量差。
现有的导盲技术能够为存在摄像头对环境的适应能力较差,不能很好的适应光照强度变化较大的场景,在光照强度变化较大的情况下,无法获取到丰富的环境信息;并且对于缺乏纹理、外表光滑、透明度高或距离较远的障碍物无法有效检测到。因此,现有的导盲技术无法精确的检测障碍物,无法为存在视力障碍的人提供高安全度的导盲服务。
发明内容
本申请实施例至少提供一种基于可穿戴设备的智能引导方法及装置、可穿戴设备、引导系统、存储介质,以提高深度信息检测的准确性,以及引导的准确性和安全性。
第一方面,本申请实施例提供了一种基于可穿戴设备的智能引导方法,所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件,所述方法包括:
获取第一可见光摄像部件采集的第一场景图像;
检测所述第一场景图像的亮度;
响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像;
基于所述第二场景图像确定场景中的对象的深度信息;
基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
该方案,在光照强度较大的情况下,即第一场景图像的亮度较大的情况下,利用第二可见光摄 像部件拍摄的第二场景图像确定场景中对象的深度信息;在光照强度较小的情况下,即第一场景图像的亮度较小的情况下,利用深度摄像部件拍摄的第二场景图像确定场景中对象的深度信息。由此,实现了根据环境光照强度的不同,选用不同的部件采集的图像来为目标对象提供引导服务,能够有效适应将光照强度的变化,获取到较为丰富的环境信息,同时,能够检测到缺乏纹理的障碍物,提高了障碍物检测的精确度和引导的安全性。
在一种可能的实施方式中,所述响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,包括:
响应于所述第一场景图像的亮度满足预设的亮度,向所述第二可见光摄像部件发送第一曝光指令,并获取所述第二可见光摄像部件基于所述第一曝光指令进行曝光控制而采集到的第二场景图像;
所述响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像,包括:
响应于所述第一场景图像的亮度不满足预设的亮度,向所述深度摄像部件发送第二曝光指令,获取所述深度摄像部件基于所述第二曝光指令进行曝光控制而采集到的第二场景图像。
该实施方式,在光照强度较大的情况下,控制第二可见光摄像部件进行曝光,在光照强度较小的情况下,控制深度摄像部件进行曝光,实现了根据环境光照强度的不同,切换不同的摄像部件来采集用于确定对象的深度信息的第二场景图像,能够主动地适应将光照强度的变化,获取到较为丰富的环境信息,同时,能够检测到缺乏纹理的障碍物,提高了障碍物检测的精确度和引导的安全性。
在一种可能的实施方式中,所述基于所述第二场景图像确定场景中的对象的深度信息,包括:
响应于所述第一场景图像的亮度满足预设的亮度,将所述第一场景图像中表征场景中的对象的像素点与所述第二场景图像中表征场景中的对象的像素点进行匹配,得到像素点对;
基于与所述像素点对相对应的视差信息、所述第一可见光摄像部件与所述第二可见光摄像部件的中心距以及所述第一可见光摄像部件的焦距,确定所述场景中的对象的深度信息。
该实施方式,在光照强度较强的情况下,结合第一可见光摄像部件采集的第一场景图像和第二可见光摄像部件采集的第二场景图像能够较为准确的确定场景中对象的深度信息。
在一种可能的实施方式中,所述基于所述第二场景图像确定场景中的对象的深度信息,包括:
响应于所述第一场景图像的亮度不满足预设的亮度,确定场景中的对象在所述第二场景图像中的目标像素点;
根据所述第二场景图像中所述目标像素点的深度信息,确定所述场景中的对象的深度信息。
该实施方式,在光照强度较弱的情况下,利用深度摄像部件采集的深度图像,即深度摄像部件采集的第二场景图像中的像素点的深度信息,能够较为准确地确定场景中对象的深度信息,能够检测出缺乏纹理的障碍物,提高了在光照强度较弱的情况下障碍物检测的精确度。
在一种可能的实施方式中,所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
生成提示信息,所述提示信息用于向所述佩戴所述可穿戴设备的目标对象提示所述深度信息。
该实施方式,向目标对象提示场景中对象的深度信息,能够对目标独享的运行进行有效的引导, 提高引导效率和引导的安全性。
在一种可能的实施方式中,还包括:
基于所述第一场景图像确定场景中的对象相对于可穿戴设备的方位信息;
所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
基于场景中的对象相对于可穿戴设备的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息。
该实施方式,基于方位信息和深度信息,能够为目标对象生成信息量更大、内容更丰富的引导信息,从而能够进一步提高引导的效率和引导的安全性。
在一种可能的实施方式中,所述可穿戴设备还包括超声探测部件;
所述方法还包括:
响应于所述深度信息对应的深度大于预设深度阈值和/或检测到所述第一场景图像中包含预设类别的对象,向所述超声探测部件发送超声探测指令,并接收所述超声探测部件基于所述超声探测指令探测到的超声波反馈信号;
基于接收的所述超声波反馈信号,更新所述场景中的对象的深度信息。
该实施方式,由于环境复杂或物体自身特性的影响,造成通过可见光摄像部件或深度摄像部件拍摄的第二场景图像不能准确确定场景中对象的深度信息,或是检测到第一场景图像中包括透明度较高或外表比较光滑等特性的对象的情况下,利用超声探测部件进行深度信息检测,提高了可穿戴设备的适用性,能够在更加复杂的环境中更加精确的检测场景中对象的深度信息。
在一种可能的实施方式中,所述可穿戴设备还包括姿态测量部件;
所述方法还包括:
获取姿态测量部件采集的可穿戴设备的姿态信息;
响应于根据所述姿态信息确定佩戴所述可穿戴设备的目标对象的姿态处于第一预设姿态,生成姿态纠正提示信息。
该实施方式,通过姿态测量部件测量的可穿戴设备的姿态信息,能够在目标对象的姿态处于第一预设姿态的情况下,生成提醒目标对象纠正姿态的姿态纠正提示信息,从而能够使可穿戴设设备拍摄到影响目标对象运行的对象,进一步提高了引导信息的准确性和引导的安全性。
在一种可能的实施方式中,所述方法还包括:
基于所述第一场景图像确定场景中的对象相对于所述可穿戴设备的方位信息;
基于所述可穿戴设备的姿态信息,将所述方位信息转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息;
所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
基于转换后的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息和/或姿态纠正提示信息。
该实施方式,将场景中的对象相对于所述可穿戴设备的方位信息,转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息,即生成了对目标对象的运行有效的方位信息,利用该 有效的方位信息能够生成更加准确和有效的引导信息。
在一种可能的实施方式中,所述可穿戴设备还包括发声部件;
所述方法还包括:
基于所述引导信息生成语音导航信息,并发送至所述发声部件,得到所述发声部件向所述目标对象播放所述语音导航信息。
该实施方式,利用语音导航信息和发声部件能够有效引导目标对象避开障碍物,提高了引导效率和安全性。
在一种可能的实施方式中,所述可穿戴设备还包括分频电路;
所述响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,包括:
响应于所述第一场景图像的亮度满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第三曝光指令发送给第二可见光摄像部件,以及,获取所述第二可见光摄像部件基于所述第三曝光指令进行曝光控制而采集到的第二场景图像;和/或
所述响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像,包括:
响应于所述第一场景图像的亮度不满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第四曝光指令发送给深度摄像部件,以及,获取所述深度摄像部件基于所述第四曝光指令进行曝光控制而采集到的第二场景图像。
该实施方式,利用分频电路对曝光指令进行分频处理,并利用分频处理得到的曝光指令控制第二可见光摄像部件或深度摄像部件采集图像,实现了控制不同帧频的摄像部件同时曝光,节省了能源消耗。
第二方面,本申请实施例提供了一种基于可穿戴设备的智能引导装置,所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件,所述装置包括:
第一图像获取模块,配置为获取第一可见光摄像部件采集的第一场景图像;
亮度检测模块,配置为检测所述第一场景图像的亮度;
第二图像获取模块,配置为响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像;
检测模块,配置为基于所述第二场景图像确定场景中的对象的深度信息;
引导信息生成模块,配置为基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
第三方面,本申请实施例提供了一种可穿戴设备,包括处理部件、第一可见光摄像部件、第二可见光摄像部件和深度摄像部件;
所述第一可见光摄像部件,配置为采集第一场景图像;
所述第二可见光摄像部件和所述深度摄像部件配置为采集第二场景图像;
所述处理部件,配置为执行上述基于可穿戴设备的智能引导方法。
在一种可能的实施方式中,上述可穿戴设备还包括信号串行部件、信号传输线缆和信号解串部件;
所述信号串行部件与所述第一可见光摄像部件、第二可见光摄像部件、深度摄像部件通信连接;所述信号传输线缆的两端分别与所述信号串行部件和信号解串部件连接;所述信号解串部件与所述处理部件通信连接;
所述第一可见光摄像部件,还配置为将所述第一场景图像发送给所述信号串行部件;
所述第二可见光摄像部件和所述深度摄像部件,还配置为将所述第二场景图像发送给所述信号串行部件;
所述信号串行部件,配置为将接收的第一场景图像和第二场景图像转换为串行信号,并通过所述信号传输线缆发送给所述信号解串部件;
所述信号解串部件,配置为将接收的信号进行解串行处理,并将解串行处理得到的信号发送给所述处理部件。
该实施方式,利用信号串行部件将摄像部件拍摄的图像转换为串行信号,例如双绞线高速差分信号来进行传输,能够只利用两根线就能传输信号,并且传输速度更快、成本更低,传输距离更远,部件的体积更小。
在一种可能的实施方式中,所述深度摄像部件包括TOF摄像头。
该实施方式,利用TOF摄像头能够在光照强度较弱的情况下,较为准确的获取到场景中对象的深度信息。
第四方面,本申请实施例提供了一种引导系统,包括可穿戴设备和主机;
所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件;
所述主机包括处理部件,所述处理部件与所述第一可见光摄像部件、第二可见光摄像部件和深度摄像部件通过信号传输线缆连接,所述处理部件配置为执行上述基于可穿戴设备的智能引导方法。
在一种可能的实施方式中,所述主机设有与所述处理部件连接的以下至少一项:定位模组、网络模组、配置为检测工作状态和/或充电管理的微控单元、音频模组。
第五方面,本申请实施例提供了一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行的情况下,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行的情况下执行上述智能引导方法的步骤。
第六方面,本申请实施例提供了一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行的情况下执行上述智能引导方法的步骤。
第七方面,本申请实施例提供了一种计算机程序产品,该计算机程序产品包括一条或多条指令,该一条或多条指令适于由处理器加载并执行上述智能引导方法的步骤。
为使本申请的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本申请的实施例,并与说明书一起用于说明本申请的技术方案。应当理解,以下附图仅示出了本申请的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1示出了本申请实施例所提供的一种基于可穿戴设备的智能引导方法的流程图;
图2示出了本申请实施例所提供的一种可穿戴设备的结构示意图;
图3示出了本申请实施例所提供的一种引导系统的结构示意图;
图4示出了本申请实施例所提供的一种电子设备的示意图;
图5示出了本申请实施例所提供的一种基于可穿戴设备的智能引导装置的结构示意图。
实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本申请实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本申请的实施例的详细描述并非旨在限制要求保护的本申请的范围,而是仅仅表示本申请的选定实施例。基于本申请的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
本文中术语“和/或”,仅仅是描述一种关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
本申请实施例提供了一种基于可穿戴设备的智能引导方法及装置、可穿戴设备、引导系统、电子设备及存储介质,其中,在光照强度较大的情况下,即第一场景图像的亮度较大的情况下,利用第二可见光摄像部件拍摄的第二场景图像确定场景中对象的深度信息;在光照强度较小的情况下,即第一场景图像的亮度较小的情况下,利用深度摄像部件拍摄的第二场景图像确定场景中对象的深度信息。由此实现了根据环境光照强度的不同,选用不同的部件采集的图像来为目标对象提供引导服务,能够有效适应将光照强度的变化,获取到较为丰富的环境信息,同时,能够检测到缺乏纹理的障碍物,提高了障碍物检测的精确度和引导的安全性。
下面对本申请提供的基于可穿戴设备的智能引导方法及装置、可穿戴设备、引导系统、电子设备及存储介质的实施例进行说明。
本申请实施例提供的基于可穿戴设备的智能引导方法可以应用于处理部件上,该处理部件既可以是可穿戴设备上的一个部件,也可以单独位于一个主机上。可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件。
所述处理部件与所述第一可见光摄像部件、第二可见光摄像部件以及深度摄像部件通信连接。可穿戴设备被佩戴于目标对象的头部,作为一个头戴设备。也可以将第一可见光摄像部件、第二可见光摄像部件和深度摄像部件组合成一个头戴设备,佩戴于目标对象的头部。处理部件佩戴或固定于目标对象的其他部位,例如佩戴在目标对象的手臂上等。本申请实施例对上述各个部件在目标对象上所处的位置并不进行限定。
上述目标对象可以是视力存在障碍的对象。利用本申请实施例提供的引导信息能够引导目标对象避开障碍物,安全行走。
本申请实施例的智能引导方法用于检测场景中对象的深度信息,并基于检测得到的深度信息生成引导信息。如图1所示,上述基于可穿戴设备的智能引导方法可以包括如下步骤:
S110、获取第一可见光摄像部件采集的第一场景图像。
所述第一可见光摄像部件配置为拍摄所述可穿戴设备周围的第一场景图像,并发送给所述处理部件。第一可见光摄像部件拍摄场景图像的帧率较高,根据其拍摄的第一场景图像中对象的位置,能够确定场景中对象相对于可穿戴设备的方位信息。
第一可见光摄像部件可以是红绿蓝(Red Green Blue,RGB)摄像部件。
S120、检测所述第一场景图像的亮度。
在这里,可以采用多种图像亮度检测方法进行检测,例如可以利用预先训练好的神经网络检测第一场景图像的亮度,或者针对第一场景图像中各区域或各像素点的亮度进行分布统计或计算平均值,得到第一场景图像的亮度。
第一场景图像的亮度能够反映当前场景的光照强度,光照强度越强,第一场景图像的亮度越高,反之,光照强度越弱,第一场景图像的亮度越低。因此,可以基于第一场景图像的亮度判断当前场景光照强度的强弱,进而确定选择第二可见光摄像部件采集的图像或深度摄像部件采集的图像以进一步计算场景的深度信息,以适应光照强度的变化,提高对象深度信息检测以及生成的引导信息的精确度。
S130、响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像。
在第一场景图像的亮度满足预设的亮度的情况下,当前场景的光照强度较大,此时可以从第二可见光摄像部件处获取第二场景图像。
在光照强度较强的情况下,可见光摄像部件成像效果较好,可以利用两个可见光摄像部件拍摄的场景图像确定场景中对象的深度信息。在光照强度较弱的情况下,利用两个可见光摄像部件的成像效果较差,无法获取准确的深度信息。这时可以利用深度摄像部件采集深度图像来获取深度信息。
在第一场景图像的亮度不满足预设的亮度的情况下,当前场景的光照强度较弱,此时可以从深 度摄像部件获取所述深度摄像部件处获取第二场景图像。
在场景的光照强度较弱的情况下,利用两个可见光摄像部件的成像效果较差,无法获取准确的深度信息,此时可以利用深度摄像部件拍摄的场景图像来检测场景中物体的深度信息,由此可以克服由于光照强度弱造成的障碍物检测不到的问题。另外,两个可见光摄像部件拍摄的场景图像无法有效检测缺乏纹理的障碍物的深度信息,例如,天空、沙漠、白墙等;但是利用深度摄像部件采集的场景图像能够有效检测出上述缺乏纹理的障碍物的深度信息。
第二可见光摄像部件可以是RGB摄像部件。所述深度摄像部件可以是基于飞行时间(Time of Flight,TOF)成像的摄像头,即TOF摄像头。利用TOF摄像头能够在光照强度较弱的情况下,较为准确的获取到场景中对象的深度信息。
S140、基于所述第二场景图像确定场景中的对象的深度信息。
当第二场景图像为第二可见光摄像部件采集的图像的情况下,可以根据双目视觉的原理,基于第一场景图像和第二场景图像计算图像中物体对应的场景中的对象的深度信息。当前场景的光照强度较大的情况下,利用两个可见光摄像部件采集的两幅场景图像能够较为准确的确定场景中对象的深度信息。
例如,首先将所述第一场景图像中表征场景中的对象的像素点与所述第二场景图像中表征场景中的对象的像素点进行匹配,得到像素点对;之后基于与所述像素点对相对应的视差信息、所述第一可见光摄像部件与所述第二可见光摄像部件的中心距以及所述第一可见光摄像部件的焦距,确定所述场景中的对象的深度信息。
当第二场景图像为深度摄像部件采集的深度图像的情况下,可以直接基于深度摄像部件感知的深度信息确定场景中的对象的深度信息。
当前场景的光照强度较弱的情况下,利用TOF摄像头能够在光照强度较弱的情况下,较为准确的获取到场景中对象的深度信息。
例如,首先确定场景中的对象在所述第二场景图像中的目标像素点;之后根据所述第二场景图像中所述目标像素点的深度信息,确定所述场景中的对象的深度信息。例如,可以通过目标检测从第二场景图像中检测出场景中的对象的位置,进而根据检测结果将第二场景图像中对应位置像素点的深度信息确定为该对象的深度信息。
S150、基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
在这里,可以基于深度信息生成提示信息,所述提示信息用于向所述佩戴所述可穿戴设备的目标对象提示所述深度信息。向目标对象提示场景中对象的深度信息,能够对目标独享的运行进行有效的引导,提高引导效率和引导的安全性。
另外,为了为目标对象提供更加丰富的引导信息,例如为目标对象提供障碍物的方向信息和深度信息的引导信息,这里,可以利用如下步骤生成引导信息:
基于所述第一场景图像确定场景中的对象相对于可穿戴设备的方位信息;基于场景中的对象相对于可穿戴设备的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息。
基于方位信息和深度信息生成的引导信息能够进一步提高引导的效率和引导的安全性。
在具体用于实施时,可以根据对象在第一场景图像中的位置,确定对象相对于可穿戴设备的方位信息。在这里,方位信息可以包括方位角,或者,方位信息可以表征预先划分的方位中的一个,预先划分的方位可以例如包括:左前方、右前方、正前方。例如,响应于对象A位于第一场景图像中的左侧,确定对象相对于可穿戴设备的方位信息为左前方,响应于对象A位于第一场景图像中的右侧,确定对象相对于可穿戴设备的方位信息为右前方,响应于对象A位于第一场景图像中的中部,确定对象相对于可穿戴设备的方位信息为正前方。
在一些实施例中,根据光照强度切换不同的摄像部件来采集第二场景图像具体包括可以利用如下步骤实现:
在所述第一场景图像的亮度满足预设的亮度,即当前场景的光照强度较强的情况下,处理部件向所述第二可见光摄像部件发送第一曝光指令,第二可见光摄像部件基于第一曝光指令进行曝光,采集第二场景图像。之后,处理部件从第二可见光摄像部件处获取第二场景图像。
当然,第二可见光摄像部件也可以是一直采集第二场景图像,只是在接收到处理部件发送的第一曝光指令的情况下,将对应时刻采集的第二场景图像发送给处理部件。
在所述第一场景图像的亮度不满足预设的亮度,即当前场景的光照强度较弱的情况下,处理部件向所述深度摄像部件发送第二曝光指令,深度摄像部件基于第二曝光指令进行曝光,采集第二场景图像。之后,处理部件从深度摄像部件处获取第二场景图像。
当然,深度摄像部件也可以是一直采集第二场景图像,只是在接收到处理部件发送的第二曝光指令的情况下,将对应时刻采集的第二场景图像发送给处理部件。
在光照强度较大的情况下,控制第二可见光摄像部件进行曝光,在光照强度较小的情况下,控制深度摄像部件进行曝光,实现了根据环境光照强度的不同,主动切换不同的摄像部件来采集用于确定对象的深度信息的第二场景图像,从而使本申请实施例的智能引导方法能够适应将光照强度的变化,获取到较为丰富的环境信息,同时,能够检测到缺乏纹理的障碍物,提高了障碍物检测的精确度和引导的安全性。
另外,由于第二可见光摄像部件拍摄的场景图像并不用于确定场景中对象相对于可穿戴设备的方位信息和/或对象的种类信息等,因此其帧率相对于第一可见光摄像部件较低,这样既能够满足深度信息检测的需求,又能降低第二可见光摄像部件的功耗,减少可穿戴设备的散热量。
例如,可以通过分频电路实现第一可见光摄像部件与第二可见光摄像部件在不同帧率下同步曝光;在所述第一场景图像的亮度满足预设的亮度的情况下,处理部件向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理,并将分频处理得到的第三曝光指令发送给第二可见光摄像部件。之后处理部件获取所述第二可见光摄像部件基于所述第三曝光指令进行曝光控制而采集到的第二场景图像。由于第一可见光摄像部件采集图像的帧率较高,第一可见光采集部件可以直接基于处理部件向所述分频电路发送曝光指令进行曝光,处理部件在向所述分频电路发送曝光指令的同时,向第一可见光摄像部件发送曝光指令。
为了节约深度摄像部件的功耗,可以控制深度摄像部件以低于第一可见光设想不见的频率曝光, 因而深度摄像部件拍摄的图像的帧率相对于第一可见光摄像部件较低,这样既能够满足深度信息检测的需求,又能降低深度摄像部件的功耗,减少可穿戴设备的散热量。
例如,可以通过分频电路实现第一可见光摄像部件与深度摄像部件在不同帧率下同步曝光;在所述第一场景图像的亮度不满足预设的亮度的情况下,处理部件向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理,并将分频处理得到的第四曝光指令发送给深度摄像部件,之后,处理部件获取所述深度摄像部件基于所述第四曝光指令进行曝光控制而采集到的第二场景图像。由于第一可见光摄像部件采集图像的帧率较高,第一可见光采集部件可以直接基于处理部件向所述分频电路发送曝光指令进行曝光,处理部件在向所述分频电路发送曝光指令的同时,向第一可见光摄像部件发送曝光指令。
利用分频电路对曝光指令进行分频处理,并利用分频处理得到的曝光指令控制第二可见光摄像部件或深度摄像部件采集图像,实现了控制不同帧频的摄像部件的同步曝光,节省了能源消耗。
在环境光照强度较弱的情况下,可以利用第一可见光摄像部件和深度摄像部件拍摄的图像来检测场景中对象的深度信息,但是,该方式不适用于较远距离的对象的深度信息检测,同时,该方式也无法有效检测出透明度较高以及外表比较光滑的对象的深度信息,例如玻璃、水面等。此时,检测得到的深度信息可能是错误的,因此处理部件在探测得到的深度信息和场景中对象的类别之后,判断深度信息是否明显不合理,即深度信息对应的深度是否大于预设深度阈值,和/或判断是否检测到场景中出现玻璃、水面等预设类别的对象。
响应于处理部件确定深度信息对应的深度大于预设深度阈值和/或检测到所述第一场景图像中包含预设类别的对象,处理部件向超声探测部件发送超声探测指令,所述超声探测部件基于所述超声探测指令发送超声波,并将探测到的超声波反馈信号发送给处理部件。处理部件基于接收的所述超声波反馈信号,更新所述场景中的对象的深度信息。
超声探测部件对物体的深度信息的探测精度较高。在由于环境复杂或物体自身特性的影响,造成通过可见光摄像部件或深度摄像部件拍摄的第二场景图像不能准确确定场景中对象的深度信息的情况下,利用超声探测部件进行深度信息检测,提高了可穿戴设备的适用性,能够在更加复杂的环境中更加精确的检测场景中对象的深度信息。
佩戴上述可穿戴设备的目标对象在运动过程中,可能会做一些动作,造成根据拍摄的场景图像执行对象检测的情况下无法准确检测出可能阻碍目标对象运动的对象,例如目标对象较大角度的低头的情况下,可穿戴设备拍摄的场景集中于对面,而获得的场景图像是地面的图像,无法准确检测到目标对象前方或侧面的可能影响其运动的障碍物。此时,为了提高生成的引导信息准确度,避免无效的障碍物被检测出来以及无法准确检测到可能影响目标对象行进的障碍物,需要结合目标对象的姿态信息生成引导信息,或是在目标对象做出较大角度的仰头等预设姿态的情况下,提醒目标对象纠正姿态。
例如,上述可穿戴设备还可以包括姿态测量部件,所述姿态测量部件与所述处理部件通信连接。姿态测量部件测量所述可穿戴设备的姿态信息,这里可穿戴设备的姿态信息与目标对象的姿态信息认为是相同的。姿态测量部件将测量得到的姿态信息发送给处理部件。
上述方法还包括:基于接收的姿态信息,判断佩戴所述可穿戴设备的目标对象的姿态是否处于第一预设姿态,若是,生成姿态纠正提示信息。上述第一预设姿态为能够拍摄到不影响目标对象运行的对象的姿态,例如,较大角度的仰头。在确定目标对象处于第一预设姿态的情况下,利用姿态纠正提示信息提示目标对象纠正当前的姿态,进一步提高了引导信息的准确性和引导的安全性。
在接收到姿态信息之后,还可以基于姿态信息生成引导信息,例如,基于所述可穿戴设备的姿态信息,将对象相对于所述可穿戴设备的方位信息转换为对象相对于处于第二预设姿态的可穿戴设备的方位信息;基于转换后的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息和/或姿态纠正提示信息。
在具体用于实施的情况下,可以利用如下步骤对方位信息进行转换:确定第二预设姿态和当前所述可穿戴设备的姿态信息之间的姿态差信息;利用姿态差信息对对象相对于可穿戴设备的方位信息进行转换,得到转化后的方位信息。例如,响应于姿态差信息指示可穿戴设备处于仰头80度的姿态,确定的方位信息指示对象位于可穿戴设备的正前方,转换后的方向信息则是对象位于可穿戴设备的上方;响应于姿态差信息指示可穿戴设备处于左侧转头60度的姿态,确定的方位信息指示对象位于可穿戴设备的正前方,转换后的方向信息则是对象位于可穿戴设备的左前方。
另外,场景中的对象的深度信息具体用于表征对象与可穿戴设备的距离,在可穿戴设备的姿态发生变化后,由于可穿戴设备的位置并未发生明显的变化,因此对象与可穿戴设备的距离也不会出现明显的变化,因此,场景中的对象的深度信息不需要进行转换,场景中的对象的深度信息能够较为准确的表征对象与可穿戴设备的距离。
上述第二预设姿态例如可穿戴设备朝向目标对象的行进方向的姿态。
上述转换后的方位信息能够用于判断对应的对象是否位于目标对象的运行路径上,即判断场景中的对象是否为目标对象运行的障碍物,在对应的对象是目标对象的障碍物的情况下,需要基于转换后的方位信息生成引导信息,以引导目标对象绕开障碍物。
将场景中的对象相对于所述可穿戴设备的方位信息,转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息,即生成了对目标对象的运行有效的方位信息,利用该有效的方位信息能够生成更加准确和有效的引导信息。
在应用中,姿态测量部件可以是陀螺仪,例如九轴陀螺仪。
上述各个摄像部件采集的图像信息量较大,为了提高引导的实时性,需要在短时间内完成各个摄像部件与处理部件的信号传输,本申请的实施例利用信号串行部件、信号传输线缆和信号解串部件来实现信号的传输。
所述信号串行部件与所述第一可见光摄像部件、第二可见光摄像部件、深度摄像部件通信连接;所述信号传输线缆的两端分别与所述信号串行部件和信号解串部件连接;所述信号解串部件与所述处理部件通信连接。
所述第一可见光摄像部件将所述第一场景图像发送给所述信号串行部件;所述第二可见光摄像部件和所述深度摄像部件将所述第二场景图像发送给所述信号串行部件;所述信号串行部件将接收的第一场景图像和第二场景图像转换为串行信号,并通过所述信号传输线缆发送给所述信号解串部 件;所述信号解串部件将接收的信号进行解串行处理,并将解串行处理得到的信号发送给所述处理部件。
上述利用信号串行部件、信号传输线缆和信号解串部件传输信号例如可以是V-BY-ONE双绞线技术。V-BY-ONE是用于视频信号传输的21倍速串行化接口技术。V-BY-ONE双绞线技术传输线数量更少,只需要两根线,更轻便;对传输线材要求更低,不需要屏蔽,节省成本;传输带宽更高,可达每秒3.75千兆位(Gbps);传输距离更远,可以高质量传输距离达15米;芯片体积更小,更利于轻便型穿戴式产品的设计,例如可以封装为5毫米(mm)乘以5毫米(mm)。
上述信号传输线缆采用V-BY-ONE双绞线连接,抗弯折,抗拉伸,轻便,柔软。
利用信号串行部件将摄像部件拍摄的图像转换为串行信号,例如双绞线高速差分信号来进行传输,能够只利用两根线就能传输信号,并且传输速度更快、成本更低,传输距离更远,部件的体积更小。
为了通过有声的方式向目标对象输出引导信息,可穿戴设备还可以包括发声部件,所述发声部件与所述处理部件通信连接。例如可以利用如下步骤实现引导信息的播放:基于所述引导信息生成语音导航信息,并发送至所述发声部件,所述发声部件向所述目标对象播放所述语音导航信息。
利用语音导航信息和发声部件能够有效引导目标对象避开障碍物,提高了引导效率和安全性。
在具体用于实施的情况下,上述处理部件可以是片上系统(System on Chip,SOC),发声部件可以是音频骨传导耳机,上述音频骨传导耳机还用于进行人机对话。
为了兼顾不同光照环境的使用、检测远距离处的对象的深度信息、弥补摄像部件的不足以及获得更加丰富的环境信息,本申请实施例综合了第一可见光摄像部件、第二可见光摄像部件,深度摄像部件,在判断出光照强弱后能主动切换摄像部件,提高检测精度,适应多变的光照环境。另外,本申请实施例还融合了超声探测部件和姿态测量部件,利用超声波来获取较为准确的深度信息,来弥补摄像部件的不足,利用姿态测量部件获取姿态信息,优化检测精度。上述实施例中的智能引导方法能够进行路径规划、定位、障碍物检测、语音提示等功能,精度更高,环境适应性更强,能够让存在视力障碍的人能够独立出行,并且更方便和安全。
对应于是上述基于可穿戴设备的智能引导方法,本申请实施例还提供了一种基于可穿戴设备的智能引导装置,该装置中的各个模块所实现的功能与上面的智能引导方法中对应的步骤相同,应用于处理部件上。所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件。如图5所示,智能引导装置可以包括:
第一图像获取模块510,配置为获取第一可见光摄像部件采集的第一场景图像。
亮度检测模块520,配置为检测所述第一场景图像的亮度。
第二图像获取模块530,配置为响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像。
检测模块540,配置为基于所述第二场景图像确定场景中的对象的深度信息。
引导信息生成模块550,配置为基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
在一些实施例中,所述第二图像获取模块530在响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像的情况下,配置为:
响应于所述第一场景图像的亮度满足预设的亮度,向所述第二可见光摄像部件发送第一曝光指令,并获取所述第二可见光摄像部件基于所述第一曝光指令进行曝光控制而采集到的第二场景图像;
所述第二图像获取模块530在响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像的情况下,配置为:
响应于所述第一场景图像的亮度不满足预设的亮度,向所述深度摄像部件发送第二曝光指令,获取所述深度摄像部件基于所述第二曝光指令进行曝光控制而采集到的第二场景图像。
在一些实施例中,所述检测模块540在基于所述第二场景图像确定场景中的对象的深度信息的情况下,配置为:
响应于所述第一场景图像的亮度满足预设的亮度,将所述第一场景图像中表征场景中的对象的像素点与所述第二场景图像中表征场景中的对象的像素点进行匹配,得到像素点对;
基于与所述像素点对相对应的视差信息、所述第一可见光摄像部件与所述第二可见光摄像部件的中心距以及所述第一可见光摄像部件的焦距,确定所述场景中的对象的深度信息。
在一些实施例中,所述检测模块540在基于所述第二场景图像确定场景中的对象的深度信息,包括:
响应于所述第一场景图像的亮度不满足预设的亮度,确定场景中的对象在所述第二场景图像中的目标像素点;
根据所述第二场景图像中所述目标像素点的深度信息,确定所述场景中的对象的深度信息。
在一些实施例中,所述引导信息生成模块550在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
生成提示信息,所述提示信息用于向所述佩戴所述可穿戴设备的目标对象提示所述深度信息。
在一些实施例中,所述检测模块540还配置为:
基于所述第一场景图像确定场景中的对象相对于可穿戴设备的方位信息;
所述引导信息生成模块550在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息的情况下,配置为:
基于场景中的对象相对于可穿戴设备的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息。
在一些实施例中,所述可穿戴设备还包括超声探测部件;
所述检测模块540还配置为:
响应于所述深度信息对应的深度大于预设深度阈值和/或检测到所述第一场景图像中包含预设类别的对象,向所述超声探测部件发送超声探测指令,并接收所述超声探测部件基于所述超声探测 指令探测到的超声波反馈信号;
基于接收的所述超声波反馈信号,更新所述场景中的对象的深度信息。
在一些实施例中,所述可穿戴设备还包括姿态测量部件;
所述检测模块540还配置为:
获取所述姿态测量部件采集的可穿戴设备的姿态信息;
响应于根据所述姿态信息确定所佩戴所述可穿戴设备的目标对象的姿态处于第一预设姿态,生成姿态纠正提示信息。
在一些实施例中,所述检测模块540还配置为:
基于所述第一场景图像确定场景中的对象相对于所述可穿戴设备的方位信息;
基于所述可穿戴设备的姿态信息,将所述方位信息转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息;
所述引导信息生成模块550在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息的情况下,配置为:
基于转换后的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息和/或姿态纠正提示信息。
在一些实施例中,所述可穿戴设备还包括发声部件;
所述引导信息生成模块550还配置为:
基于所述引导信息生成语音导航信息,并发送至所述发声部件,得到所述发声部件向所述目标对象播放所述语音导航信息。
在一些实施例中,所述可穿戴设备还包括分频电路;
所述第二图像获取模块530在响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像的情况下,配置为:
响应于所述第一场景图像的亮度满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第三曝光指令发送给第二可见光摄像部件,以及,获取所述第二可见光摄像部件基于所述第三曝光指令进行曝光控制而采集到的第二场景图像;和/或
所述第二图像获取模块530在响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像的情况下,配置为:
响应于所述第一场景图像的亮度不满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第四曝光指令发送给深度摄像部件,以及,获取所述深度摄像部件基于所述第四曝光指令进行曝光控制而采集到的第二场景图像。
对应于上述基于可穿戴设备的智能引导方法,本申请实施例还提供了一种可穿戴设备,该可穿戴设备中的各个部件所实现的功能与上述实施例中的对应的部件相同。例如,如图2所示,编号21表示头戴设备端框图,编号22表示主机端框图,头戴设备端框图和主机端框图通过信号传输线缆208连接。可穿戴设备可以包括:处理部件201、第一可见光摄像部件202、第二可见光摄像部件203 和深度摄像部件204。所述第一可见光摄像部件202配置为采集第一场景图像;所述第二可见光摄像部件203和所述深度摄像部件204配置为采集第二场景图像;所述处理部件201配置为执行上述基于可穿戴设备的智能引导方法。
在一些实施例中,上述可穿戴设备还可以包括超声探测部件205、姿态测量部件206、信号串行部件207、信号传输线缆208、信号解串部件209、发声部件210。上述超声探测部件205、姿态测量部件206、信号串行部件207、信号传输线缆208、信号解串部件209、发声部件210实现的功能与上述实施例的智能引导方法中的对应的部件相同。
另外,可穿戴设备还可以包括微控制部件MCU211,WIFI部件212、GPS部件213。微控制部件MCU211配置为进行可穿戴设备的充电管理和检测整机状态,GPS部件213配置为进行可穿戴设备的定位,WIFI部件212配置为将可穿戴设备中的第一场景图像、第二场景图像、深度信息等发送给远端服务器。
在一些实施例中,如图2所示,头戴设备端框图21包括两个可见光摄像部件202和203,深度摄像部件204,以其中一个RGB摄像部件为主摄像部件,输出同步触发信号给另一个RGB摄像部件,同时经过一个分频芯片输出一个同步触发信号给深度摄像部件,这样,两个RGB摄像部件202和203、深度摄像部件204即使在不同帧率下也能够实现三路摄像部件同时曝光。三路摄像部件的图像信号经过信号串行部件207(V-BY-ONE串行芯片),转换成双绞线高速差分信号,通过信号传输线缆208与主机端框图22连接;姿态测量部件206(陀螺仪)用来时刻监测用户姿态信息,超声波探测部件205用来检测玻璃墙、水面等距离信息。
主机端框图22包括处理部件201(SOC部件)、全球定位系统(Global Positioning System,GPS)部件213、WIFI部件212、集成V-BY-ONE解串芯片(信号解串部件)209、MCU(微控制部件)211和发声部件210,其中,处理部件201采用人工智能处理模块,算力强劲。集成V-BY-ONE解串芯片209将接触移动产业处理器接口相机接口标准(Mobile Industry Processor Interface-Camera Serial Interface,MIPI-CSI)数据送到处理部件201的移动产业处理器接口物理层(Mobile Industry Processor Interface Physical,MIPI PHY)输入接口,使用算法进一步处理,然后通过发声部件210音频骨传导耳机进行播报提醒,发声部件210还可以进行人机交互语音对话;MCU211用来检测整机工作状态和进行充电管理等;GPS部件213采用实时动态(Real time kinematic,RTK)技术进行精确定位;WIFI部件212用来数据上传同步网络。
信号传输线缆208采用V-BY-ONE双绞线连接。
如图3所示,本申请实施例提供了一种引导系统300,包括可穿戴设备301和主机302。所述可穿戴设备301包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件;所述主机302包括处理部件,所述处理部件与所述第一可见光摄像部件、第二可见光摄像部件和深度摄像部件通过信号传输线缆连接,所述处理部件配置为执行上述基于可穿戴设备的智能引导方法。
所述主机302可以设有与所述处理部件连接的以下至少一项:定位模组、网络模组、配置为检测工作状态和/或充电管理的微控单元、音频模组。
对应于图1中的智能引导方法,本申请实施例还提供了一种电子设备400,如图4所示,为本 申请实施例提供的电子设备400结构示意图,包括:
处理器41、存储器42、和总线43;存储器42配置为存储执行指令,包括内存421和外部存储器422;这里的内存421也称内存储器,配置为暂时存放处理器41中的运算数据,以及与硬盘等外部存储器422交换的数据,处理器41通过内存421与外部存储器422进行数据交换,当电子设备400运行的情况下,处理器41与存储器42之间通过总线43通信,使得处理器41执行以下指令:获取第一可见光摄像部件采集的第一场景图像;检测所述第一场景图像的亮度;响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像;基于所述第二场景图像确定场景中的对象的深度信息;基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行的情况下执行上述方法实施例中所述的智能引导方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本申请实施例所提供的智能引导方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的智能引导方法的步骤,例如可参见上述方法实施例。
本申请实施例还提供一种计算机程序,该计算机程序被处理器执行的情况下实现前述实施例的任意一种方法。该计算机程序产品可以例如通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品例如体现为计算机存储介质,在另一个可选实施例中,计算机程序产品例如体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体包括工作过程,可以参考前述方法实施例中的对应过程。在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用的情况下,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计 算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本申请的具体包括实施方式,用以说明本申请的技术方案,而非对其限制,本申请的保护范围并不局限于此,尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本申请实施例技术方案的精神和范围,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应所述以权利要求的保护范围为准。
工业实用性
在本公开实施例中,在光照强度较大的情况下,即第一场景图像的亮度较大的情况下,利用第二可见光摄像部件拍摄的第二场景图像确定场景中对象的深度信息;在光照强度较小的情况下,即第一场景图像的亮度较小的情况下,利用深度摄像部件拍摄的第二场景图像确定场景中对象的深度信息。由此,实现了根据环境光照强度的不同,选用不同的部件采集的图像来为目标对象提供引导服务,能够有效适应将光照强度的变化,获取到较为丰富的环境信息,同时,能够检测到缺乏纹理的障碍物,提高了障碍物检测的精确度和引导的安全性。

Claims (30)

  1. 一种基于可穿戴设备的智能引导方法,所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件,所述方法包括:
    获取第一可见光摄像部件采集的第一场景图像;
    检测所述第一场景图像的亮度;
    响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像;
    基于所述第二场景图像确定场景中的对象的深度信息;
    基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
  2. 根据权利要求1所述的方法,其中,所述响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,包括:
    响应于所述第一场景图像的亮度满足预设的亮度,向所述第二可见光摄像部件发送第一曝光指令,并获取所述第二可见光摄像部件基于所述第一曝光指令进行曝光控制而采集到的第二场景图像;
    所述响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像,包括:
    响应于所述第一场景图像的亮度不满足预设的亮度,向所述深度摄像部件发送第二曝光指令,获取所述深度摄像部件基于所述第二曝光指令进行曝光控制而采集到的第二场景图像。
  3. 根据权利要求1或2所述的方法,其中,所述基于所述第二场景图像确定场景中的对象的深度信息,包括:
    响应于所述第一场景图像的亮度满足预设的亮度,将所述第一场景图像中表征场景中的对象的像素点与所述第二场景图像中表征场景中的对象的像素点进行匹配,得到像素点对;
    基于与所述像素点对相对应的视差信息、所述第一可见光摄像部件与所述第二可见光摄像部件的中心距以及所述第一可见光摄像部件的焦距,确定所述场景中的对象的深度信息。
  4. 根据权利要求1或2所述的方法,其中,所述基于所述第二场景图像确定场景中的对象的深度信息,包括:
    响应于所述第一场景图像的亮度不满足预设的亮度,确定场景中的对象在所述第二场景图像中的目标像素点;
    根据所述第二场景图像中所述目标像素点的深度信息,确定所述场景中的对象的深度信息。
  5. 根据权利要求1至4任一项所述的方法,其中,所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
    生成提示信息,所述提示信息用于向所述佩戴所述可穿戴设备的目标对象提示所述深度信息。
  6. 根据权利要求1至5任一项所述的方法,其中,还包括:
    基于所述第一场景图像确定场景中的对象相对于可穿戴设备的方位信息;
    所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
    基于场景中的对象相对于可穿戴设备的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息。
  7. 根据权利要求1至6任一项所述的方法,其中,所述可穿戴设备还包括超声探测部件;
    所述方法还包括:
    响应于所述深度信息对应的深度大于预设深度阈值和/或检测到所述第一场景图像中包含预设类别的对象,向所述超声探测部件发送超声探测指令,并接收所述超声探测部件基于所述超声探测指令探测到的超声波反馈信号;
    基于接收的所述超声波反馈信号,更新所述场景中的对象的深度信息。
  8. 根据权利要求1至7任一项所述的方法,其中,所述可穿戴设备还包括姿态测量部件;
    所述方法还包括:
    获取所述姿态测量部件采集的可穿戴设备的姿态信息;
    响应于根据所述姿态信息确定所佩戴所述可穿戴设备的目标对象的姿态处于第一预设姿态,生成姿态纠正提示信息。
  9. 根据权利要求8所述的方法,其中,所述方法还包括:
    基于所述第一场景图像确定场景中的对象相对于所述可穿戴设备的方位信息;
    基于所述可穿戴设备的姿态信息,将所述方位信息转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息;
    所述基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:
    基于转换后的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息和/或姿态纠正提示信息。
  10. 根据权利要求1至9任一项所述的方法,其中,所述可穿戴设备还包括发声部件;
    所述方法还包括:
    基于所述引导信息生成语音导航信息,并发送至所述发声部件,得到所述发声部件向所述目标对象播放所述语音导航信息。
  11. 根据权利要求1至10任一项所述的方法,其中,所述可穿戴设备还包括分频电路;
    所述响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,包括:
    响应于所述第一场景图像的亮度满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第三曝光指令发送给第二可见光摄像部件,以及,获取所述第二可见光摄像部件基于所述第三曝光指令进行曝光控制而采集到的第二场景图像;和/或
    所述响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像,包括:
    响应于所述第一场景图像的亮度不满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第四曝光指令发送给深度摄像部件,以及,获取所述深度摄像部件基于所述第四曝光指令进行曝光控制而采集到的第二场景图像。
  12. 一种基于可穿戴设备的智能引导装置,所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件,所述装置包括:
    第一图像获取模块,配置为获取第一可见光摄像部件采集的第一场景图像;
    亮度检测模块,配置为检测所述第一场景图像的亮度;
    第二图像获取模块,配置为响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像,和/或,响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像;
    检测模块,配置为基于所述第二场景图像确定场景中的对象的深度信息;
    引导信息生成模块,配置为基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息。
  13. 根据权利要求12所述的智能引导装置,所述第二图像获取模块在响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像的情况下,配置为:响应于所述第一场景图像的亮度满足预设的亮度,向所述第二可见光摄像部件发送第一曝光指令,并获取所述第二可见光摄像部件基于所述第一曝光指令进行曝光控制而采集到的第二场景图像;
    所述第二图像获取模块在响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像的情况下,配置为:响应于所述第一场景图像的亮度不满足预设的亮度,向所述深度摄像部件发送第二曝光指令,获取所述深度摄像部件基于所述第二曝光指令进行曝光控制而采集到的第二场景图像。
  14. 根据权利要求12或13所述的智能引导装置,所述检测模块在基于所述第二场景图像确定场景中的对象的深度信息的情况下,配置为:响应于所述第一场景图像的亮度满足预设的亮度,将所述第一场景图像中表征场景中的对象的像素点与所述第二场景图像中表征场景中的对象的像素点进行匹配,得到像素点对;基于与所述像素点对相对应的视差信息、所述第一可见光摄像部件与所述第二可见光摄像部件的中心距以及所述第一可见光摄像部件的焦距,确定所述场景中的对象的深度信息。
  15. 根据权利要求12或13所述的智能引导装置,所述检测模块在基于所述第二场景图像确定场景中的对象的深度信息,包括:响应于所述第一场景图像的亮度不满足预设的亮度,确定场景中的对象在所述第二场景图像中的目标像素点;根据所述第二场景图像中所述目标像素点的深度信息,确定所述场景中的对象的深度信息。
  16. 根据权利要求12至15任一项所述的智能引导装置,所述引导信息生成模块在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息,包括:生成提示信息,所述提示信息用于向所述佩戴所述可穿戴设备的目标对象提示所述深度信息。
  17. 根据权利要求12至16任一项所述的智能引导装置,所述检测模块还配置为:基于所述第 一场景图像确定场景中的对象相对于可穿戴设备的方位信息;
    所述引导信息生成模块在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息的情况下,配置为:基于场景中的对象相对于可穿戴设备的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息。
  18. 根据权利要求12至17任一项所述的智能引导装置,所述可穿戴设备还包括超声探测部件;
    所述检测模块还配置为:响应于所述深度信息对应的深度大于预设深度阈值和/或检测到所述第一场景图像中包含预设类别的对象,向所述超声探测部件发送超声探测指令,并接收所述超声探测部件基于所述超声探测指令探测到的超声波反馈信号;基于接收的所述超声波反馈信号,更新所述场景中的对象的深度信息。
  19. 根据权利要求12至18任一项所述的智能引导装置,所述可穿戴设备还包括姿态测量部件;
    所述检测模块还配置为:获取所述姿态测量部件采集的可穿戴设备的姿态信息;响应于根据所述姿态信息确定所佩戴所述可穿戴设备的目标对象的姿态处于第一预设姿态,生成姿态纠正提示信息。
  20. 根据权利要求19所述的智能引导装置,所述检测模块还配置为:基于所述第一场景图像确定场景中的对象相对于所述可穿戴设备的方位信息;基于所述可穿戴设备的姿态信息,将所述方位信息转换为场景中的对象相对于处于第二预设姿态的可穿戴设备的方位信息;
    所述引导信息生成模块在基于所述深度信息生成对佩戴所述可穿戴设备的目标对象的引导信息的情况下,配置为:基于转换后的方位信息和场景中的对象的深度信息,生成对佩戴所述可穿戴设备的目标对象的引导信息和/或姿态纠正提示信息。
  21. 根据权利要求12至20任一项所述的智能引导装置,所述可穿戴设备还包括发声部件;
    所述引导信息生成模块还配置为:基于所述引导信息生成语音导航信息,并发送至所述发声部件,得到所述发声部件向所述目标对象播放所述语音导航信息。
  22. 根据权利要求12至21任一项所述的智能引导装置,所述可穿戴设备还包括分频电路;
    所述第二图像获取模块在响应于所述第一场景图像的亮度满足预设的亮度,获取所述第二可见光摄像部件采集的第二场景图像的情况下,配置为:响应于所述第一场景图像的亮度满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第三曝光指令发送给第二可见光摄像部件,以及,获取所述第二可见光摄像部件基于所述第三曝光指令进行曝光控制而采集到的第二场景图像;和/或
    所述第二图像获取模块530在响应于所述第一场景图像的亮度不满足预设的亮度,获取所述深度摄像部件采集的第二场景图像的情况下,配置为:响应于所述第一场景图像的亮度不满足预设的亮度,向所述分频电路发送曝光指令,得到所述分频电路对接收的曝光指令进行分频处理并将分频处理得到的第四曝光指令发送给深度摄像部件,以及,获取所述深度摄像部件基于所述第四曝光指令进行曝光控制而采集到的第二场景图像。
  23. 一种可穿戴设备,包括处理部件、第一可见光摄像部件、第二可见光摄像部件和深度摄像部件;
    所述第一可见光摄像部件,配置为采集第一场景图像;
    所述第二可见光摄像部件和所述深度摄像部件配置为采集第二场景图像;
    所述处理部件,配置为执行权利要求1至11任一项所述的基于可穿戴设备的智能引导方法。
  24. 根据权利要求23所述的可穿戴设备,其中,还包括信号串行部件、信号传输线缆和信号解串部件;
    所述信号串行部件与所述第一可见光摄像部件、第二可见光摄像部件、深度摄像部件通信连接;所述信号传输线缆的两端分别与所述信号串行部件和信号解串部件连接;所述信号解串部件与所述处理部件通信连接;
    所述第一可见光摄像部件,还配置为将所述第一场景图像发送给所述信号串行部件;
    所述第二可见光摄像部件和所述深度摄像部件,还配置为将所述第二场景图像发送给所述信号串行部件;
    所述信号串行部件,配置为将接收的第一场景图像和第二场景图像转换为串行信号,并通过所述信号传输线缆发送给所述信号解串部件;
    所述信号解串部件,配置为将接收的信号进行解串行处理,并将解串行处理得到的信号发送给所述处理部件。
  25. 根据权利要求23或24所述的可穿戴设备,其中,所述深度摄像部件包括TOF摄像头。
  26. 一种引导系统,包括可穿戴设备和主机;
    所述可穿戴设备包括第一可见光摄像部件、第二可见光摄像部件和深度摄像部件;
    所述主机包括处理部件,所述处理部件与所述第一可见光摄像部件、第二可见光摄像部件和深度摄像部件通过信号传输线缆连接,所述处理部件用于执行权利要求1至11任一项所述的基于可穿戴设备的智能引导方法。
  27. 根据权利要求26所述的引导系统,其中,所述主机设有与所述处理部件连接的以下至少一项:定位模组、网络模组、用于检测工作状态和/或充电管理的微控单元、音频模组。
  28. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行的情况下,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行的情况下执行如权利要求1至11任一项所述的智能引导方法的步骤。
  29. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行的情况下执行如权利要求1至11任一项所述的智能引导方法的步骤。
  30. 一种计算机程序产品,所述计算机程序产品包括一条或多条指令,所述一条或多条指令适于由处理器加载并执行如权利要求1至11任一项所述的智能引导方法的步骤。
PCT/CN2021/091150 2020-09-30 2021-04-29 可穿戴设备、智能引导方法及装置、引导系统、存储介质 WO2022068193A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020217036427A KR20220044897A (ko) 2020-09-30 2021-04-29 웨어러블 기기, 스마트 가이드 방법 및 장치, 가이드 시스템, 저장 매체
JP2021564133A JP2023502552A (ja) 2020-09-30 2021-04-29 ウェアラブルデバイス、インテリジェントガイド方法及び装置、ガイドシステム、記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011060870.X 2020-09-30
CN202011060870.XA CN112188059B (zh) 2020-09-30 2020-09-30 可穿戴设备、智能引导方法及装置、引导系统

Publications (1)

Publication Number Publication Date
WO2022068193A1 true WO2022068193A1 (zh) 2022-04-07

Family

ID=73948406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/091150 WO2022068193A1 (zh) 2020-09-30 2021-04-29 可穿戴设备、智能引导方法及装置、引导系统、存储介质

Country Status (4)

Country Link
JP (1) JP2023502552A (zh)
KR (1) KR20220044897A (zh)
CN (1) CN112188059B (zh)
WO (1) WO2022068193A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114175A1 (zh) * 2022-11-30 2024-06-06 微智医疗器械有限公司 双目视差估计方法、视觉假体以及计算机可读存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188059B (zh) * 2020-09-30 2022-07-15 深圳市商汤科技有限公司 可穿戴设备、智能引导方法及装置、引导系统
CN114690120A (zh) * 2021-01-06 2022-07-01 杭州嘉澜创新科技有限公司 一种定位方法、装置和系统、计算机可读存储介质
CN112950699A (zh) * 2021-03-30 2021-06-11 深圳市商汤科技有限公司 深度测量方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011250928A (ja) * 2010-06-01 2011-12-15 Chuo Univ 視覚障害者用空間認識装置、方法およびプログラム
WO2016047890A1 (ko) * 2014-09-26 2016-03-31 숭실대학교산학협력단 보행 보조 방법 및 시스템, 이를 수행하기 위한 기록매체
CN106937910A (zh) * 2017-03-20 2017-07-11 杭州视氪科技有限公司 一种障碍物和坡道检测系统及方法
CN107888896A (zh) * 2017-10-20 2018-04-06 宁波天坦智慧电子科技股份有限公司 一种用于导盲眼镜的障碍判断与提醒方法及一种导盲眼镜
CN109120861A (zh) * 2018-09-29 2019-01-01 成都臻识科技发展有限公司 一种极低照度下的高质量成像方法及系统
CN109831660A (zh) * 2019-02-18 2019-05-31 Oppo广东移动通信有限公司 深度图像获取方法、深度图像获取模组及电子设备
CN112188059A (zh) * 2020-09-30 2021-01-05 深圳市商汤科技有限公司 可穿戴设备、智能引导方法及装置、引导系统

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012137434A1 (ja) * 2011-04-07 2012-10-11 パナソニック株式会社 立体撮像装置
JP2014067320A (ja) * 2012-09-27 2014-04-17 Hitachi Automotive Systems Ltd ステレオカメラ装置
CN106038183A (zh) * 2016-06-29 2016-10-26 冯伟林 一种盲人穿戴设备及导航系统
CN106210536A (zh) * 2016-08-04 2016-12-07 深圳众思科技有限公司 一种屏幕亮度调节方法、装置及终端
CN108055452B (zh) * 2017-11-01 2020-09-18 Oppo广东移动通信有限公司 图像处理方法、装置及设备
US20190320102A1 (en) * 2018-04-13 2019-10-17 Qualcomm Incorporated Power reduction for dual camera synchronization
WO2020037575A1 (zh) * 2018-08-22 2020-02-27 深圳市大疆创新科技有限公司 图像深度估计方法、装置、可读存储介质及电子设备
CN111698494B (zh) * 2018-08-22 2022-10-28 Oppo广东移动通信有限公司 电子装置
CN109712192B (zh) * 2018-11-30 2021-03-23 Oppo广东移动通信有限公司 摄像模组标定方法、装置、电子设备及计算机可读存储介质
CN110664593A (zh) * 2019-08-21 2020-01-10 重庆邮电大学 基于HoloLens的盲人导航系统及方法
CN111553862B (zh) * 2020-04-29 2023-10-13 大连海事大学 一种海天背景图像去雾和双目立体视觉定位方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011250928A (ja) * 2010-06-01 2011-12-15 Chuo Univ 視覚障害者用空間認識装置、方法およびプログラム
WO2016047890A1 (ko) * 2014-09-26 2016-03-31 숭실대학교산학협력단 보행 보조 방법 및 시스템, 이를 수행하기 위한 기록매체
CN106937910A (zh) * 2017-03-20 2017-07-11 杭州视氪科技有限公司 一种障碍物和坡道检测系统及方法
CN107888896A (zh) * 2017-10-20 2018-04-06 宁波天坦智慧电子科技股份有限公司 一种用于导盲眼镜的障碍判断与提醒方法及一种导盲眼镜
CN109120861A (zh) * 2018-09-29 2019-01-01 成都臻识科技发展有限公司 一种极低照度下的高质量成像方法及系统
CN109831660A (zh) * 2019-02-18 2019-05-31 Oppo广东移动通信有限公司 深度图像获取方法、深度图像获取模组及电子设备
CN112188059A (zh) * 2020-09-30 2021-01-05 深圳市商汤科技有限公司 可穿戴设备、智能引导方法及装置、引导系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114175A1 (zh) * 2022-11-30 2024-06-06 微智医疗器械有限公司 双目视差估计方法、视觉假体以及计算机可读存储介质

Also Published As

Publication number Publication date
JP2023502552A (ja) 2023-01-25
CN112188059B (zh) 2022-07-15
CN112188059A (zh) 2021-01-05
KR20220044897A (ko) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2022068193A1 (zh) 可穿戴设备、智能引导方法及装置、引导系统、存储介质
US10764585B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
US11989350B2 (en) Hand key point recognition model training method, hand key point recognition method and device
US20210209859A1 (en) Cross reality system
CN107545302B (zh) 一种人眼左右眼图像联合的视线方向计算方法
EP3469458B1 (en) Six dof mixed reality input by fusing inertial handheld controller with hand tracking
US9710973B2 (en) Low-latency fusing of virtual and real content
US20130335405A1 (en) Virtual object generation within a virtual environment
US9454006B2 (en) Head mounted display and image display system
US10558260B2 (en) Detecting the pose of an out-of-range controller
US20130326364A1 (en) Position relative hologram interactions
US20120154277A1 (en) Optimized focal area for augmented reality displays
TW201437688A (zh) 頭部配戴型顯示裝置、頭部配戴型顯示裝置之控制方法及顯示系統
KR20150143612A (ko) 펄스형 광원을 이용한 근접 평면 분할
WO2015093130A1 (ja) 情報処理装置、情報処理方法およびプログラム
CN108919498A (zh) 一种基于多模态成像和多层感知的增强现实眼镜
KR20210052570A (ko) 분리가능한 왜곡 불일치 결정
CN108415875A (zh) 深度成像移动终端及人脸识别应用的方法
WO2019061466A1 (zh) 一种飞行控制方法、遥控装置、遥控系统
CN204258990U (zh) 智能头戴显示装置
US11727769B2 (en) Systems and methods for characterization of mechanical impedance of biological tissues
CN113822174A (zh) 视线估计的方法、电子设备及存储介质
CN218045797U (zh) 一种盲人穿戴智慧云眼镜及系统
EP4231635A1 (en) Efficient dynamic occlusion based on stereo vision within an augmented or virtual reality application
KR20230079618A (ko) 인체를 3차원 모델링하는 방법 및 장치

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021564133

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21873856

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03-07-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21873856

Country of ref document: EP

Kind code of ref document: A1