CN112950699A - Depth measurement method, depth measurement device, electronic device and storage medium

Info

Publication number
CN112950699A
Authority
CN
China
Prior art keywords
target
scene image
image
scene
depth information
Prior art date
Legal status
Pending
Application number
CN202110341313.3A
Other languages
Chinese (zh)
Inventor
刘国平
向许波
佘中华
刘锦金
张英宜
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110341313.3A priority Critical patent/CN112950699A/en
Publication of CN112950699A publication Critical patent/CN112950699A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/514 Depth or shape recovery from specularities

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present disclosure provides a depth measurement method, apparatus, electronic device, and storage medium, the method comprising: acquiring a scene image of a target scene acquired by target equipment; selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene; and determining depth information corresponding to the scene image based on the target ranging method.

Description

Depth measurement method, depth measurement device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a depth measurement method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of Artificial Intelligence (AI), AI is being applied ever more widely, for example in fields such as autonomous driving and robotics. Many products that apply AI technology need to determine visual depth, and how to accurately determine the visual depth of an object in a video is a problem worth studying.
Image depth measurement is generally realized with a single, fixed ranging method; however, because such a method is affected by environmental conditions, the accuracy of its measurement results is low.
Disclosure of Invention
In view of the above, the present disclosure provides at least a depth measuring method, a depth measuring apparatus, an electronic device and a storage medium.
In a first aspect, the present disclosure provides a depth measurement method, including:
acquiring a scene image of a target scene acquired by target equipment;
selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene;
and determining depth information corresponding to the scene image based on the target ranging method.
In the method, after the scene image of the target scene is acquired, a target ranging method matched with the image acquisition parameter information is selected for the scene image from the multiple ranging methods, based on at least one piece of image acquisition parameter information corresponding to the scene image. For example, when the light intensity is strong, a ranging method suited to an environment with strong light can be selected; when the light intensity is weak, a ranging method suited to an environment with weak light is selected. Because different image acquisition parameter information suits different ranging methods, using the selected target ranging method allows the depth information corresponding to the scene image to be determined accurately, improving the accuracy of the determined depth information of the scene image.
In a possible implementation, the image acquisition parameter information includes one or more of the following:
sensitivity, analog gain, digital gain, image processing gain, average brightness.
One or more kinds of parameter information can be selected as the image acquisition parameter information as needed, so the selection is flexible.
In one possible implementation, in a case where the plurality of ranging methods include a binocular ranging method and a time of flight TOF ranging method, selecting a target ranging method matching the image acquisition parameter information for the scene image from among the plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image, includes:
selecting a binocular distance measuring method as a target distance measuring method under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition;
and under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the TOF ranging method as a target ranging method.
With this method, when the image acquisition parameter information meets the light intensity condition, the binocular ranging method can be selected as the target ranging method; when it does not, the TOF ranging method is selected. Matching different target ranging methods to image acquisition parameter information representing different light intensities makes the depth information determined for the scene image by the target ranging method more accurate.
In a possible embodiment, the method further comprises:
identifying the scene image, and determining a target object included in the scene image;
determining depth information corresponding to the scene image based on the target ranging method, including:
determining depth information of the target object in the scene image based on the target ranging method.
In a possible implementation, after determining the depth information of the target object in the scene image, the method further includes:
and generating and playing voice broadcast data aiming at the target object based on the depth information of the target object.
With this method, after the depth information of the target object is obtained, voice broadcast data for the target object can be generated and played; when the accuracy of the generated depth information is high, the generated voice broadcast data can accurately announce the position of the target object.
In a possible implementation, after determining the depth information of the target object in the scene image, the method further includes:
and controlling the carrying equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
With this method, after the depth information of the target object is obtained, the carrying equipment is controlled to move to the position corresponding to the depth information and grab the target object. When the accuracy of the generated depth information is high, the carrying equipment can grab the target object accurately, which reduces useless movement of the carrying equipment and improves grabbing efficiency.
In a possible implementation, after determining the depth information of the target object in the scene image, the method further includes:
and controlling the moving direction of the sweeping robot based on the depth information of the target object.
With this method, the moving direction of the sweeping robot can be controlled after the depth information of the target object is obtained. When the accuracy of the generated depth information is high, obstacle avoidance for the sweeping robot can be realized accurately, safeguarding the safety of the sweeping robot's operation.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides a depth measurement device comprising:
the acquisition module is used for acquiring a scene image of a target scene acquired by target equipment;
the selection module is used for selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene;
and the determining module is used for determining the depth information corresponding to the scene image based on the target ranging method.
In one possible embodiment, the image acquisition parameter information includes one or more of the following:
sensitivity, analog gain, digital gain, image processing gain, average brightness.
In a possible implementation manner, in a case that the plurality of ranging methods include a binocular ranging method and a time of flight TOF ranging method, the selecting module, when selecting a target ranging method matching with the image acquisition parameter information for the scene image from among the plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image, is configured to:
selecting a binocular distance measuring method as a target distance measuring method under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition;
and under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the TOF ranging method as a target ranging method.
In a possible embodiment, the apparatus further comprises:
the recognition module is used for recognizing the scene image and determining a target object included in the scene image;
the determining module, when determining the depth information corresponding to the scene image based on the target ranging method, is configured to:
determining depth information of the target object in the scene image based on the target ranging method.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the broadcasting module is used for generating and broadcasting voice broadcasting data aiming at the target object based on the depth information of the target object.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the first control module is used for controlling the carrying equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the second control module is used for controlling the moving direction of the sweeping robot based on the depth information of the target object.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the depth measurement method as described in the first aspect or any one of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the depth measurement method according to the first aspect or any one of the embodiments described above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a schematic flow chart of a depth measurement method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an architecture of a depth measuring apparatus provided in an embodiment of the present disclosure;
fig. 3 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
With the continuous development of Artificial Intelligence (AI), AI is being applied ever more widely, for example in fields such as autonomous driving and robotics. Many products that apply AI technology need to determine visual depth, and how to accurately determine the visual depth of an object in a video is a problem worth studying. Image depth measurement is generally realized with a single, fixed ranging method; however, because such a method is affected by environmental conditions, the accuracy of its measurement results is low. To improve the accuracy of visual depth determination, the disclosed embodiments provide a depth measurement method.
The above-mentioned drawbacks are the result of the inventor's practical and careful study; therefore, the process of discovering the above problems, and the solutions the present disclosure proposes for them, should be regarded as the inventor's contribution made in the course of the present disclosure.
The technical solutions in the present disclosure will be described clearly and completely with reference to the accompanying drawings in the present disclosure, and it is to be understood that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. The components of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
For the purpose of facilitating an understanding of the embodiments of the present disclosure, the depth measurement method disclosed in the embodiments of the present disclosure is described in detail first. An execution subject of the depth measurement method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the depth measurement method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a schematic flow chart of a depth measurement method provided in the embodiment of the present disclosure includes S101 to S103, where:
s101, obtaining a scene image of a target scene collected by target equipment.
S102, selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene.
S103, based on the target ranging method, determining depth information corresponding to the scene image.
In the method, after the scene image of the target scene is acquired, a target ranging method matched with the image acquisition parameter information is selected for the scene image from the multiple ranging methods, based on at least one piece of image acquisition parameter information corresponding to the scene image. For example, when the light intensity is strong, a ranging method suited to an environment with strong light can be selected; when the light intensity is weak, a ranging method suited to an environment with weak light is selected. Because different image acquisition parameter information suits different ranging methods, using the selected target ranging method allows the depth information corresponding to the scene image to be determined accurately, improving the accuracy of the determined depth information of the scene image.
S101-S103 will be described in detail below.
For S101:
here, the target device may be any device with an imaging device, and a scene image corresponding to the target scene may be acquired in real time by the imaging device, where the scene image may be a color image or a grayscale image. The scene image of the target scene may be a scene image of any scene acquired in real time during the movement of the target device. For example, the target device may be a mobile phone, smart glasses, a robot, and the like.
For S102 and S103:
here, when the target device captures a scene Image of a target scene, an Image Signal Processor (ISP) parameter may be associated with each frame of the scene Image, and at least one type of Image capture parameter information corresponding to the scene Image may be determined from the ISP parameter.
For example, the image acquisition parameter information may be parameter information in the ISP parameters that characterizes the light intensity of the target scene, where the image acquisition parameter information includes one or more of the following: sensitivity, analog gain, digital gain, image processing gain, average brightness. The sensitivity ISO is a value normalized from the analog gain, the digital gain, and the image processing gain. The analog gain Again adjusts the signal intensity of the linearly amplified input; the digital gain Dgain adjusts the pulse amplitude of the digital-to-analog conversion input; the image processing gain ISPgain is the gain applied by the ISP algorithm when processing image data. One or more kinds of parameter information can be selected as the image acquisition parameter information as needed, so the selection is flexible; a sketch of how such per-frame parameters might be gathered is shown below.
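The following sketch shows one way such per-frame image acquisition parameter information could be represented. The class and field names, and the idea of reading them from a per-frame metadata dictionary, are illustrative assumptions rather than part of the disclosure; real ISP drivers expose these values in vendor-specific ways. Python is used here and in the sketches that follow.

```python
from dataclasses import dataclass

@dataclass
class IspParams:
    """Per-frame image acquisition parameter information (names are assumptions)."""
    iso: float             # sensitivity, normalized from the three gains below
    again: float           # analog gain: scales the linearly amplified input signal
    dgain: float           # digital gain: scales the digital-to-analog conversion input
    isp_gain: float        # gain applied by the ISP algorithm when processing image data
    avg_brightness: float  # average brightness of the frame

def from_frame_metadata(meta: dict) -> IspParams:
    """Pull the light-intensity-related parameters out of per-frame ISP metadata."""
    return IspParams(
        iso=meta["iso"],
        again=meta["again"],
        dgain=meta["dgain"],
        isp_gain=meta["isp_gain"],
        avg_brightness=meta["avg_brightness"],
    )
```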
Furthermore, a target ranging method corresponding to the scene image can be selected from multiple ranging methods based on at least one image acquisition parameter information corresponding to the scene image, and the target ranging method is a ranging method matched with the image acquisition parameter information. The multiple distance measurement methods can be set according to actual needs, for example, the multiple distance measurement methods can include a binocular distance measurement method, a structured light distance measurement method, a TOF distance measurement method and the like.
In an optional implementation manner, in a case where the plurality of ranging methods include a binocular ranging method and a time of flight TOF ranging method, selecting a target ranging method matching with the image acquisition parameter information for the scene image from among the plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image, includes:
in the first situation, under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition, a binocular distance measurement method is selected as a target distance measurement method.
And under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the Time of flight (TOF) ranging method as a target ranging method.
In specific implementation, the binocular ranging method is suitable for indoor and outdoor environments with sufficient illumination; at night, when illumination is insufficient and imaging quality is poor, it is difficult for the binocular ranging method to calculate depth information accurately. In the TOF ranging method, a transmitting module on the TOF camera emits light pulses and a receiving module on the TOF camera receives the returned light pulses; the time of flight of the light pulses in space is measured, and the distance between the object and the camera is calculated from the time of flight and the speed of light. However, the TOF ranging method is easily interfered with by strong sunlight in outdoor environments, which lowers its ranging accuracy.
In summary, the binocular ranging method is suitable for scenes with strong light, and the TOF ranging method is suitable for scenes with weak light. Therefore, when the multiple ranging methods include a binocular ranging method and a TOF ranging method, it can be judged whether the image acquisition parameter information corresponding to the scene image meets a preset light intensity condition; if so, the binocular ranging method is selected as the target ranging method; if not, the TOF ranging method is selected as the target ranging method.
For example, if the image capture parameter is sensitivity, a sensitivity threshold may be set, and when the image capture parameter (sensitivity) corresponding to the scene image is greater than the set sensitivity threshold, a binocular ranging method is selected as the target ranging method; and when the image acquisition parameter (sensitivity) corresponding to the scene image is less than or equal to the set sensitivity threshold, selecting the TOF ranging method as the target ranging method. The sensitivity threshold may be set according to actual conditions.
If the image acquisition parameter is the average brightness, a brightness threshold value can be set, and when the image acquisition parameter (average brightness) corresponding to the scene image is greater than the set brightness threshold value, a binocular ranging method is selected as a target ranging method; and when the image acquisition parameter (average brightness) corresponding to the scene image is less than or equal to the set brightness threshold, selecting the TOF ranging method as the target ranging method.
If the image acquisition parameters are Again, Dgain and ISPgain, a gain threshold can be set and the sum of the Again, Dgain and ISPgain corresponding to the scene image calculated. When this sum is smaller than the set gain threshold, the binocular ranging method is selected as the target ranging method; when the sum is greater than or equal to the set gain threshold, the TOF ranging method is selected as the target ranging method. The three rules are sketched together below.
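A minimal sketch of the three threshold rules just described, assuming the IspParams type from the earlier sketch. The threshold values are placeholders, not given by the disclosure, and would have to be set according to the actual device.

```python
# Placeholder thresholds; the disclosure leaves the actual values to be set
# according to actual conditions.
SENSITIVITY_THRESHOLD = 400.0
BRIGHTNESS_THRESHOLD = 80.0
GAIN_SUM_THRESHOLD = 6.0

def select_ranging_method(p: IspParams, by: str = "brightness") -> str:
    """Return 'binocular' or 'tof' per the rules above."""
    if by == "sensitivity":
        # Sensitivity above the threshold -> binocular ranging method.
        strong_light = p.iso > SENSITIVITY_THRESHOLD
    elif by == "brightness":
        # Average brightness above the threshold -> binocular ranging method.
        strong_light = p.avg_brightness > BRIGHTNESS_THRESHOLD
    elif by == "gains":
        # Sum of the three gains below the threshold -> binocular ranging method.
        strong_light = (p.again + p.dgain + p.isp_gain) < GAIN_SUM_THRESHOLD
    else:
        raise ValueError(f"unknown criterion: {by}")
    return "binocular" if strong_light else "tof"
```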
With this method, when the image acquisition parameter information meets the light intensity condition, the binocular ranging method can be selected as the target ranging method; when it does not, the TOF ranging method is selected. Matching different target ranging methods to image acquisition parameter information representing different light intensities makes the depth information determined for the scene image by the target ranging method more accurate.
In an alternative embodiment, the method further comprises: identifying the scene image, and determining a target object included in the scene image;
determining depth information corresponding to the scene image based on the target ranging method, including: determining depth information of the target object in the scene image based on the target ranging method.
In specific implementation, after the scene image is acquired, a trained neural network may be used to identify the scene image and determine the target object included in it, where the target object can be set according to actual needs; for example, it may be any object such as a pedestrian, a vehicle, an animal, a road sign, or stored goods. After the target object is identified, its depth information in the scene image may be determined based on the target ranging method.
Specifically, the binocular ranging method determines the depth information of the target object in the scene image by using a binocular camera arranged on the target device, while the TOF ranging method determines it by using a TOF camera arranged on the target device.
Exemplarily, both the binocular camera and the TOF camera can determine depth information corresponding to the scene image in real time. That is, the binocular camera can acquire a left-eye image and a right-eye image in real time and determine the depth information of the scene image from the pair; meanwhile, the TOF camera can emit light pulses in real time and determine the depth information of the scene image from the emitted and received pulses. The depth information determined by the binocular ranging method and the depth information determined by the TOF ranging method may both be sent to a controller of the target device, for example a System on Chip (SoC). After receiving both, the controller may take the depth information determined by the target ranging method as the depth information corresponding to the scene image. For example, if the target ranging method is the TOF ranging method, the depth information determined by the TOF ranging method is taken as the depth information of the scene image.
Alternatively, the target ranging method may first be determined based on the at least one piece of image acquisition parameter information corresponding to the scene image. For example, if the target ranging method is the TOF ranging method, the controller may control the TOF camera to determine the depth information of the scene image and control the binocular camera not to; if the target ranging method is the binocular ranging method, the controller may control the binocular camera to determine the depth information of the scene image and control the TOF camera not to.
As another example, only the TOF camera may be configured to determine depth information in real time: the TOF camera emits light pulses in real time, determines the depth information of the scene image from the emitted and received pulses, and sends it to the controller of the target device. When the controller determines that the target ranging method is the TOF ranging method, it takes the depth information determined by the TOF ranging method as the depth information of the scene image; if the controller determines that the target ranging method is the binocular ranging method, it may control the binocular camera to determine the depth information corresponding to the scene image and take that as the depth information corresponding to the scene image. A sketch of the controller-side arbitration appears below.
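A minimal sketch of that controller-side arbitration, assuming the select_ranging_method helper from the earlier sketch. The camera objects and their compute_depth method are illustrative assumptions standing in for the real SoC and camera driver interfaces.

```python
class DepthController:
    """Keeps the depth map from whichever ranging method matches the frame's ISP parameters."""

    def __init__(self, binocular_cam, tof_cam):
        self.binocular_cam = binocular_cam
        self.tof_cam = tof_cam

    def depth_for_frame(self, isp_params: IspParams):
        method = select_ranging_method(isp_params)
        if method == "binocular":
            # Strong light: use the depth computed from the left/right image pair.
            return self.binocular_cam.compute_depth()
        # Weak light: use the TOF camera's time-of-flight depth instead.
        return self.tof_cam.compute_depth()
```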
In an optional implementation, after determining the depth information of the target object in the scene image, the method further includes: and generating and playing voice broadcast data aiming at the target object based on the depth information of the target object.
In specific implementation, when the target device is an obstacle-avoidance navigation device for the blind, after the depth information of the target object in the scene image is determined, voice broadcast data for the target object can be generated and played based on the depth information of the target object and a set broadcast strategy. For example, if the detected target object is a zebra crossing, the determined depth information of the zebra crossing is 1 meter, and the set broadcast strategy is to broadcast when the distance to the zebra crossing is less than or equal to 2 meters, voice broadcast data for the zebra crossing may be generated and played, for example, "there is a zebra crossing 1 meter ahead".
In specific implementation, one frame of the scene image may include one or more target objects. For example, the target objects in the scene image may be a zebra crossing and a traffic light showing red, with depth information of 1 meter for each; the generated voice broadcast data may then be "there are a zebra crossing and a traffic light 1 meter ahead; the light is currently red, do not cross the street". A sketch of such a broadcast strategy follows.
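A minimal sketch of the broadcast strategy in these examples: announce a detected object once its depth falls within a configured distance. The 2-meter default mirrors the zebra-crossing example above; the play_tts stub is an assumed stand-in for the device's actual text-to-speech playback.

```python
def play_tts(message: str) -> None:
    print(message)  # stand-in for the device's text-to-speech playback

def maybe_broadcast(label: str, depth_m: float, announce_within_m: float = 2.0) -> None:
    """Generate and play voice broadcast data once the object is close enough."""
    if depth_m <= announce_within_m:
        play_tts(f"there is a {label} {depth_m:g} meters ahead")
```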
With this method, after the depth information of the target object is obtained, voice broadcast data for the target object can be generated and played; when the accuracy of the generated depth information is high, the generated voice broadcast data can accurately announce the position of the target object.
In an optional implementation, after determining the depth information of the target object in the scene image, the method further includes: and controlling the carrying equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
In specific implementation, when the target device is a piece of carrying equipment, after the depth information of the target object in the scene image is determined, the carrying equipment may be controlled to move to the position corresponding to the depth information and grab the target object. For example, if the target object is goods stored in a warehouse, a camera arranged on the carrying equipment can acquire the scene image and the depth information of the target object in it can be determined; once the depth information is determined, the carrying equipment can be controlled to grab the target object, as sketched below.
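A minimal sketch of that handling flow: turn the target object's image position and depth into a goal pose, move there, and grab. The robot interface (pose_from_depth, move_to, grab) is an illustrative assumption, not an API given by the disclosure.

```python
def fetch_object(robot, detection_center, depth_m: float) -> None:
    """Move the carrying equipment to the position implied by the depth and grab."""
    goal = robot.pose_from_depth(detection_center, depth_m)  # assumed kinematics helper
    robot.move_to(goal)
    robot.grab()
```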
With this method, after the depth information of the target object is obtained, the carrying equipment is controlled to move to the position corresponding to the depth information and grab the target object. When the accuracy of the generated depth information is high, the carrying equipment can grab the target object accurately, which reduces useless movement of the carrying equipment and improves grabbing efficiency.
In an optional implementation, after determining the depth information of the target object in the scene image, the method further includes: and controlling the moving direction of the sweeping robot based on the depth information of the target object.
In specific implementation, when the target device is a sweeping robot, after the depth information of the target object in the scene image is determined, the moving direction of the sweeping robot can be controlled based on that depth information, realizing obstacle avoidance. For example, when the determined depth information of the target object is smaller than a set depth value, the moving direction of the sweeping robot may be changed so that it does not collide with the target object; the rule is sketched below.
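A minimal sketch of that avoidance rule; the 0.3-meter threshold and the robot's turn/forward interface are illustrative assumptions, not values or APIs from the disclosure.

```python
SAFE_DEPTH_M = 0.3  # assumed "set depth value"

def steer(robot, obstacle_depth_m: float) -> None:
    """Change direction when the nearest target object is closer than the set depth."""
    if obstacle_depth_m < SAFE_DEPTH_M:
        robot.turn(degrees=90)  # pick a new heading away from the obstacle
    else:
        robot.forward()
```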
With this method, the moving direction of the sweeping robot can be controlled after the depth information of the target object is obtained. When the accuracy of the generated depth information is high, obstacle avoidance for the sweeping robot can be realized accurately, safeguarding the safety of the sweeping robot's operation.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides a depth measurement apparatus, as shown in fig. 2, an architecture schematic diagram of the depth measurement apparatus provided in the embodiment of the present disclosure includes an obtaining module 201, a selecting module 202, and a determining module 203, specifically:
an obtaining module 201, configured to obtain a scene image of a target scene acquired by a target device;
a selecting module 202, configured to select, based on at least one image acquisition parameter information corresponding to the scene image, a target ranging method that matches the image acquisition parameter information for the scene image from multiple ranging methods; the image acquisition parameter information is used for representing the light intensity in the target scene;
a determining module 203, configured to determine depth information corresponding to the scene image based on the target ranging method.
In one possible embodiment, the image acquisition parameter information includes one or more of the following:
sensitivity, analog gain, digital gain, image processing gain, average brightness.
In a possible implementation, in a case that the plurality of ranging methods include a binocular ranging method and a time of flight TOF ranging method, the selecting module 202, when selecting a target ranging method matching with the image acquisition parameter information for the scene image from among a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image, is configured to:
selecting a binocular distance measuring method as a target distance measuring method under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition;
and under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the TOF ranging method as a target ranging method.
In a possible embodiment, the apparatus further comprises:
the identification module 204 is configured to identify the scene image, and determine a target object included in the scene image;
the determining module 203, when determining the depth information corresponding to the scene image based on the target ranging method, is configured to:
determining depth information of the target object in the scene image based on the target ranging method.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the broadcasting module 205 is configured to generate and play voice broadcasting data for the target object based on the depth information of the target object.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the first control module 206 is configured to control the handling equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
In a possible implementation, after determining the depth information of the target object in the scene image, the apparatus further includes:
and the second control module 207 is used for controlling the moving direction of the sweeping robot based on the depth information of the target object.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules it includes, may be used to execute the method described in the above method embodiments; for specific implementation, refer to the description of the above method embodiments, which for brevity is not repeated here.
Based on the same technical concept, an embodiment of the present disclosure also provides an electronic device. Referring to fig. 3, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 301, a memory 302, and a bus 303. The memory 302 is used for storing execution instructions and includes an internal memory 3021 and an external memory 3022. The internal memory 3021 temporarily stores operation data in the processor 301 and data exchanged with the external memory 3022 (such as a hard disk); the processor 301 exchanges data with the external memory 3022 through the internal memory 3021. When the electronic device 300 runs, the processor 301 communicates with the memory 302 through the bus 303, so that the processor 301 executes the following instructions:
acquiring a scene image of a target scene acquired by target equipment;
selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene;
and determining depth information corresponding to the scene image based on the target ranging method.
In one possible design, in the instructions executed by processor 301, the image acquisition parameter information includes one or more of the following:
sensitivity, analog gain, digital gain, image processing gain, average brightness.
In one possible design, in the case where the plurality of ranging methods includes a binocular ranging method and a time of flight TOF ranging method, the processor 301 executes instructions to select a target ranging method matching the image acquisition parameter information for the scene image from among the plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image, including:
selecting a binocular distance measuring method as a target distance measuring method under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition;
and under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the TOF ranging method as a target ranging method.
In one possible design, the instructions executed by the processor 301 further include:
identifying the scene image, and determining a target object included in the scene image;
determining depth information corresponding to the scene image based on the target ranging method, including:
determining depth information of the target object in the scene image based on the target ranging method.
In one possible design, after determining depth information of the target object in the scene image, the processor 301 executes instructions that further include:
and generating and playing voice broadcast data aiming at the target object based on the depth information of the target object.
In one possible design, after determining depth information of the target object in the scene image, the processor 301 executes instructions that further include:
and controlling the carrying equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
In one possible design, after determining depth information of the target object in the scene image, the processor 301 executes instructions that further include:
and controlling the moving direction of the sweeping robot based on the depth information of the target object.
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the depth measurement method described in the above method embodiments.
The computer program product of the depth measurement method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the depth measurement method described in the above method embodiments. Refer to the above method embodiments for details, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A depth measurement method, comprising:
acquiring a scene image of a target scene acquired by target equipment;
selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene;
and determining depth information corresponding to the scene image based on the target ranging method.
2. The method of claim 1, wherein the image acquisition parameter information comprises one or more of:
sensitivity, analog gain, digital gain, image processing gain, average brightness.
3. The method according to claim 2, wherein in a case where the plurality of ranging methods include a binocular ranging method and a time of flight (TOF) ranging method, selecting a target ranging method matching the image acquisition parameter information for the scene image from among the plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image includes:
selecting a binocular distance measuring method as a target distance measuring method under the condition that image acquisition parameter information corresponding to the scene image meets a preset light intensity condition;
and under the condition that the image acquisition parameter information corresponding to the scene image does not meet the preset light intensity condition, selecting the TOF ranging method as a target ranging method.
4. The method according to any one of claims 1 to 3, further comprising:
identifying the scene image, and determining a target object included in the scene image;
determining depth information corresponding to the scene image based on the target ranging method, including:
determining depth information of the target object in the scene image based on the target ranging method.
5. The method of claim 4, wherein after determining depth information for the target object in the scene image, the method further comprises:
and generating and playing voice broadcast data aiming at the target object based on the depth information of the target object.
6. The method of claim 4, wherein after determining depth information for the target object in the scene image, the method further comprises:
and controlling the carrying equipment to move to a position corresponding to the depth information to grab the target object based on the depth information of the target object.
7. The method of claim 4, wherein after determining depth information for the target object in the scene image, the method further comprises:
and controlling the moving direction of the sweeping robot based on the depth information of the target object.
8. A depth measurement device, comprising:
the acquisition module is used for acquiring a scene image of a target scene acquired by target equipment;
the selection module is used for selecting a target ranging method matched with the image acquisition parameter information for the scene image from a plurality of ranging methods based on at least one image acquisition parameter information corresponding to the scene image; the image acquisition parameter information is used for representing the light intensity in the target scene;
and the determining module is used for determining the depth information corresponding to the scene image based on the target ranging method.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the depth measurement method of any of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the depth measurement method according to any one of claims 1 to 7.
CN202110341313.3A 2021-03-30 2021-03-30 Depth measurement method, depth measurement device, electronic device and storage medium Pending CN112950699A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110341313.3A CN112950699A (en) 2021-03-30 2021-03-30 Depth measurement method, depth measurement device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110341313.3A CN112950699A (en) 2021-03-30 2021-03-30 Depth measurement method, depth measurement device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112950699A (en) 2021-06-11

Family

ID=76230659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110341313.3A Pending CN112950699A (en) 2021-03-30 2021-03-30 Depth measurement method, depth measurement device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112950699A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120105602A1 (en) * 2010-11-03 2012-05-03 3Dmedia Corporation Methods, systems, and computer program products for creating three-dimensional video sequences
CN110488311A (en) * 2019-08-05 2019-11-22 Oppo广东移动通信有限公司 Depth distance measurement method, device, storage medium and electronic equipment
CN111800790A (en) * 2020-06-19 2020-10-20 张仕红 Information analysis method based on cloud computing and 5G interconnection and man-machine cooperation cloud platform
WO2020258286A1 (en) * 2019-06-28 2020-12-30 深圳市大疆创新科技有限公司 Image processing method and device, photographing device and movable platform
CN112188059A (en) * 2020-09-30 2021-01-05 深圳市商汤科技有限公司 Wearable device, intelligent guiding method and device and guiding system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination