CN117607900A - Method for enhancing resolution of TOF sensor and related equipment - Google Patents


Info

Publication number
CN117607900A
Authority
CN
China
Prior art keywords
depth resolution
resolution map
acquisition
pixel
target
Prior art date
Legal status
Pending
Application number
CN202311565712.3A
Other languages
Chinese (zh)
Inventor
梁邦世
李昌国
Current Assignee
Shenzhen Huayifeng Technology Co ltd
Original Assignee
Shenzhen Huayifeng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huayifeng Technology Co ltd
Priority to CN202311565712.3A
Publication of CN117607900A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G01S 17/10 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/481 Constructional features, e.g. arrangements of optical elements


Abstract

The application relates to the technical field of sensors and provides a method for enhancing the resolution of a TOF sensor, together with related equipment. In the method, the TOF sensor performs depth measurement on an acquisition object in two different states: a first depth resolution map is obtained by projecting floodlight, a second depth resolution map is obtained by projecting floodlight and laser together, and a target depth resolution map is then obtained from the two maps. Measuring the acquisition object in different states allows the TOF sensor to capture depth information about the same object under different conditions. By combining different projection modes (floodlight and laser), depth measurement errors caused by illumination changes, sensor error, or other environmental factors are reduced, so the target depth resolution map generated from the first and second depth resolution maps carries more comprehensive and accurate depth information about the object, improving the depth resolution of the TOF sensor.

Description

Method for enhancing resolution of TOF sensor and related equipment
Technical Field
The present disclosure relates to the field of sensor technology, and in particular, to a method for enhancing resolution of a TOF sensor and related devices.
Background
Traditional Time-of-Flight (TOF) techniques rely on flood illumination to resolve depth: the target is illuminated with a flood light source, and the TOF sensor then collects the flood light reflected off the target's surface to measure the round-trip time of the reflected light.
However, the light generated by the flood light source may mix with ambient light, so the signal received by the TOF sensor can contain environmental noise; consequently, TOF measurements that rely on flood illumination of the surface may not yield highly accurate depth resolution.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method and related apparatus for enhancing the resolution of a TOF sensor, intended to solve the technical problem that TOF-based measurement using flood illumination of a surface cannot produce highly accurate depth resolution.
In a first aspect, the present application provides a method for enhancing resolution of a TOF sensor, which adopts the following technical scheme:
determining a target acquisition scene when an acquisition object is measured;
in a first state, floodlight is projected to the acquisition object, and a first depth resolution map of the acquisition object during floodlight projection is acquired through a TOF sensor;
in a second state, floodlight and laser are projected to the acquisition object according to the target acquisition scene, and a second depth resolution map of the acquisition object during the floodlight and laser projection is acquired through the TOF sensor;
and obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map.
In one possible implementation, the determining the target acquisition scene when measuring the acquisition object includes:
acquiring the ambient brightness of the environment where the acquisition object is located;
determining the target acquisition scene according to the environmental brightness and a preset environmental brightness threshold segmentation interval;
the preset environment brightness threshold segmentation interval comprises a plurality of environment brightness threshold intervals, and each environment brightness threshold interval corresponds to one acquisition scene.
In one possible implementation manner, the projecting floodlight and laser light to the acquisition object according to the target acquisition scene includes:
determining the proportion of floodlight and laser projected to the acquisition object according to the acquisition scene;
determining a first current intensity of the projected floodlight and a second current intensity of the projected laser according to the proportion;
and projecting floodlight to the acquisition object according to the first current intensity and projecting laser to the acquisition object according to the second current intensity.
In one possible implementation, the projecting floodlight toward the acquisition object, and acquiring, by the TOF sensor, a first depth resolution map of the acquisition object at the time of the floodlight projection includes:
converting the floodlight into condensed light by a condenser in the TOF sensor;
and projecting the condensed light to the acquisition object, and acquiring, through the TOF sensor, a first depth resolution map of the acquisition object under the condensed-light projection.
In one possible implementation manner, the obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map includes:
acquiring a first pixel value in the first depth resolution map, and acquiring a second pixel value corresponding to the first pixel value in the second depth resolution map;
calculating a pixel difference value and a pixel mean value between the first pixel value and the corresponding second pixel value;
and obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value.
In one possible implementation manner, the obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value includes:
judging whether the pixel difference value is larger than a preset difference threshold;
when the pixel difference value is larger than the preset difference threshold, replacing the first pixel value with the corresponding pixel mean value to obtain a target pixel value;
when the pixel difference value is not larger than the preset difference threshold, keeping the first pixel value as the target pixel value;
and obtaining the target depth resolution map according to the target pixel value.
In one possible implementation, the method further includes:
and generating a three-dimensional model of the acquisition object according to the target depth resolution map.
In a second aspect, the present application provides an apparatus for enhancing the resolution of a TOF sensor, the apparatus comprising:
the scene acquisition module is used for determining a target acquisition scene when the acquisition object is measured;
the first acquisition module is used for projecting floodlight to the acquisition object in a first state and acquiring a first depth resolution map of the acquisition object in the floodlight projection through the TOF sensor;
the second acquisition module is used for projecting floodlight and laser to the acquisition object according to the target acquisition scene in a second state, and acquiring a second depth resolution map of the acquisition object during the floodlight and laser projection through the TOF sensor;
and the image processing module is used for obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map.
In a third aspect, the present application provides a TOF sensor comprising a memory, a processor and a computer program stored on said memory and executable on said processor, said processor implementing the steps of said method of enhancing the resolution of a TOF sensor when said computer program is executed.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of enhancing the resolution of a TOF sensor.
According to the method, the TOF sensor performs depth measurement on the acquisition object in two different states: a first depth resolution map is obtained by projecting floodlight, a second depth resolution map is obtained by projecting floodlight and laser together, and a target depth resolution map is then obtained from the two maps. Measuring the acquisition object in different states (a first state and a second state) allows the TOF sensor to capture depth information about the same object under different conditions: in the first state, only floodlight projection is used for the depth measurement; in the second state, floodlight and laser projection are used together, so richer depth information can be acquired and higher-precision depth measurement provided. By combining different projection modes (floodlight and laser), depth measurement errors caused by illumination changes, sensor error, or other environmental factors are reduced. The target depth resolution map generated from the first and second depth resolution maps therefore carries more comprehensive and accurate depth information about the object, improving the accuracy and stability of depth measurement and enhancing the depth resolution of the TOF sensor.
Drawings
FIG. 1 is a flow chart illustrating a method of enhancing resolution of a TOF sensor according to an embodiment of the present application;
FIG. 2 is a functional block diagram of an apparatus for enhancing resolution of a TOF sensor shown in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a TOF sensor according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application is intended to encompass any and all possible combinations of one or more of the listed items.
The terms "first," "second," and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
Referring to fig. 1, a flowchart of a method for enhancing the resolution of a TOF sensor according to an embodiment of the present application is shown, where the method for enhancing the resolution of the TOF sensor specifically includes the following steps.
S11, determining a target acquisition scene when the acquisition object is measured.
The acquisition object is the target object measured or sensed by a Time-of-Flight (TOF) sensor. The TOF sensor obtains the distance between the target object and the sensor by emitting pulses of light and measuring the light's time of flight. The target object can be made of different media: a solid such as a wall or a statue, or a liquid such as a water column or a waterfall. The target acquisition scene refers to the external environment in which the acquisition object is located when it is measured; it may include the external ambient brightness, the external air humidity, the irradiation distance, and so on.
When the acquisition object is measured, the target acquisition scene in which it is located during the measurement is determined. Depending on the acquisition object and the acquisition scene, a suitable light projection angle can be selected, which reduces the interference of external factors to some extent and yields a better acquisition result. For example, when photographing a tree in sunlight, shooting toward the sunlit side produces a bright scene, but direct sunlight overexposes the camera and produces shadow artifacts; shooting away from the sunlight reduces the exposure, but captures the overall profile of the tree less effectively. In this embodiment, the irradiation angle defaults to frontal irradiation.
In an alternative embodiment, the determining the target acquisition scene when measuring the acquisition object includes:
acquiring the ambient brightness of the environment where the acquisition object is located;
and determining the target acquisition scene according to the ambient brightness and a preset ambient-brightness threshold segmentation interval.
The ambient brightness comprises natural and non-natural components: natural ambient brightness refers to the brightness produced by sunlight or by luminous animals and plants, while non-natural ambient brightness refers to the brightness produced by artificial light sources; together they constitute the ambient brightness. The TOF sensor can be equipped with a light sensor that measures illumination intensity: the light sensor measures the illumination intensity of the environment where the acquisition object is located and converts it into an electrical signal, which a microcontroller in the TOF sensor then reads to obtain the ambient brightness. The light sensor may include, but is not limited to, photoresistors, photodiodes, and phototransistors.
In order to facilitate the processing of the ambient brightness, the ambient brightness threshold segmentation section may be preset, so that the brightness range is divided into a plurality of ambient brightness threshold sections, and an upper limit value and a lower limit value are defined for each of the ambient brightness threshold sections, which represent different levels of the ambient brightness. Different ambient brightness threshold intervals represent different ambient brightness levels, and each segment of ambient brightness threshold interval corresponds to one acquisition scene. The ambient brightness threshold segmentation interval may be adjusted according to the requirements of a particular application and the environmental characteristics.
The ambient brightness of the environment where the acquisition object is located is compared with each ambient brightness threshold interval; when it falls within a given interval, that interval determines the target acquisition scene. For example, the ambient brightness can be divided, according to different brightness thresholds, into five intervals: extremely weak light, weak light, medium light, strong light, and extremely strong light. Extremely weak light corresponds to a first ambient brightness threshold interval, weak light to a second, medium light to a third, strong light to a fourth, and extremely strong light to a fifth, with each interval corresponding to a different acquisition scene.
For example, assume the first interval (extremely weak light) is [0 cd, 50 cd), the second (weak light) is [50 cd, 100 cd), the third (medium light) is [100 cd, 200 cd), the fourth (strong light) is [200 cd, 400 cd), and the fifth (extremely strong light) is [400 cd, 100000 cd], where cd (candela) is the unit used here to express brightness. When the ambient brightness of the environment where the acquisition object is located is 40 cd, the target acquisition scene is determined to be an extremely weak light environment, such as night or a very dim room; at 70 cd, a weak light environment, such as dusk or weak indoor lighting; at 300 cd, a strong light environment, such as direct sunlight or a brightly lit room; at 400 cd, an extremely strong light environment, such as the midday summer sun or the glare of welding.
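As a minimal sketch, the brightness-to-scene mapping described above could be implemented as follows. The interval bounds and scene names follow the example values in the text; the table and function names (`SCENE_INTERVALS`, `classify_scene`) are assumptions for illustration only:

```python
# Half-open intervals [lo, hi) in cd, following the example thresholds in the text.
# The last interval is treated as closed at its upper bound.
SCENE_INTERVALS = [
    (0, 50, "extremely weak light"),
    (50, 100, "weak light"),
    (100, 200, "medium light"),
    (200, 400, "strong light"),
    (400, 100000, "extremely strong light"),
]

def classify_scene(ambient_brightness: float) -> str:
    """Return the acquisition scene whose interval contains the brightness reading."""
    for lo, hi, scene in SCENE_INTERVALS:
        if lo <= ambient_brightness < hi:
            return scene
    if ambient_brightness == SCENE_INTERVALS[-1][1]:
        return SCENE_INTERVALS[-1][2]
    raise ValueError(f"brightness {ambient_brightness} cd outside supported range")
```

With the example readings from the text, 40 cd maps to extremely weak light, 70 cd to weak light, 300 cd to strong light, and 400 cd to extremely strong light.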
And S12, in a first state, floodlight is projected to the acquisition object, and a first depth resolution map of the acquisition object in the floodlight projection is acquired through a TOF sensor.
A TOF sensor typically comprises a light emitter, receiving optics, a photosensitive array, and control circuitry; in this embodiment, the TOF sensor is additionally provided with a vertical-cavity surface-emitting laser (VCSEL).
The TOF sensor is in the first state when the VCSEL is off and in the second state when the VCSEL is on; that is, the first state refers to the state in which the TOF sensor's VCSEL is turned off, and the second state to the state in which it is turned on. In the first state, because the VCSEL is off, the TOF sensor can only project floodlight toward the acquisition object. The sensor measures the time of flight between emission and return of the floodlight, computes from it the distance the light traveled, obtains the distance between each part of the acquisition object and the TOF sensor, and generates a depth resolution map from these distances. The depth resolution map is an image representing the depth information of the scene: it shows the distance (depth value) of each point, forming a depth map corresponding to the geometry of the scene. For convenience in the following description, the depth resolution map acquired when the TOF sensor projects floodlight toward the acquisition object is called the first depth resolution map.
For example, suppose a TOF sensor is used to measure an automobile. When the floodlight projected onto the car's surface returns from the middle of a window, the measured round-trip time is 33.33 nanoseconds; taking the speed of light as 300,000 km/s, the distance between the TOF sensor and the middle of the window is calculated to be 5 m. Similarly, the distance between the car's front end and the TOF sensor is calculated to be 6 m, and a first depth resolution map for the whole car can be obtained from these distances.
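The distance in this example follows directly from the round-trip time. A small sketch, assuming the idealized speed of light of 300,000 km/s used in the text (the function name is illustrative):

```python
SPEED_OF_LIGHT_M_PER_S = 3.0e8  # idealized value, 300,000 km/s

def distance_from_round_trip(t_seconds: float) -> float:
    # The light travels out and back, so the one-way distance is
    # half the distance covered during the round-trip time.
    return SPEED_OF_LIGHT_M_PER_S * t_seconds / 2.0
```

A round-trip time of 33.33 ns yields a distance of about 5 m, matching the window example above.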
In an alternative embodiment, the projecting floodlight toward the acquisition object, and acquiring, by the TOF sensor, a first depth resolution map of the acquisition object at the time of the floodlight projection includes:
converting the floodlight into condensed light by a condenser in the TOF sensor;
and projecting the condensed light to the acquisition object, and acquiring, through the TOF sensor, a first depth resolution map of the acquisition object under the condensed-light projection.
A condenser is fitted in front of the lens or lens system of the TOF sensor. The flood emitter is a light source that emits scattered light over a wide angle with a relatively uniform distribution; the condenser converts this scattered floodlight into uniform condensed light, which effectively reduces the spread of the light and yields a more accurate result over the whole acquisition process.
For example, when a TOF sensor measures a statue of a person, uneven light and unstable angles of incidence cause irregular reflections at some grooved contours, so some details of the statue's measured depth resolution are distorted, such as the distances to the nose and ear positions, which are in fact equidistant. Uniform condensed light obtains more accurate reflections at the statue's fine contours, keeps the emission angle and light distribution even, and yields more accurate distances, making it easier to form a three-dimensional image of the statue afterwards.
S13, in a second state, floodlight and laser are projected to the acquisition object according to the target acquisition scene, and a second depth resolution map of the acquisition object during the floodlight and laser projection is acquired through the TOF sensor.
since the vertical cavity surface emitting laser is on in the second state, the TOF sensor can project flood light and laser light simultaneously to the acquisition object. Laser refers to a highly directional laser beam emitted by a VCSEL. The laser beam is difficult to diffuse when being disturbed by the outside, and has stability. The floodlight is easily interfered by external factors in a strong light environment, but the acquisition surface is larger than the laser, so that the range of the acquisition surface and the acquisition effect can be enhanced through the combined emission of the laser and the floodlight, and a more accurate second depth resolution map is obtained.
In an alternative embodiment, the projecting floodlight and laser light to the acquisition object according to the target acquisition scene includes:
determining the proportion of floodlight and laser projected to the acquisition object according to the acquisition scene;
determining a first current intensity of the projected floodlight and a second current intensity of the projected laser according to the proportion;
and projecting floodlight to the acquisition object according to the first current intensity and projecting laser to the acquisition object according to the second current intensity.
In different acquisition scenes, floodlight suffers different degrees of interference from external light sources, so floodlight and laser need to be blended and projected in a certain proportion to effectively reduce external interference and obtain a more accurate depth resolution map.
The first current intensity is the current used in the condenser path to control the floodlight intensity; the second current intensity is the current used in the VCSEL to control the laser intensity. Both current intensities are proportional to the resulting illumination intensity: the higher the current, the higher the illumination intensity, and the lower the current, the lower the illumination intensity.
For example, for the condenser, assume a current range of 5 mA (milliamperes) to 50 mA: at 10 mA the flood intensity is 10 mW (milliwatts), and increasing the current, for example to 20 mA, raises the flood intensity to 20 mW. Likewise, for the vertical-cavity surface-emitting laser, assume a current range of 1 mA to 10 mA: at 5 mA the emitted laser intensity may be 1 mW, and adjusting the current, for example to 8 mA, may raise the laser intensity to 2 mW.
For example, suppose the acquisition scene is a dark night: external illumination has almost no influence and the floodlight is hardly disturbed by other light sources, while the floodlight's acquisition coverage is larger than the laser's, so projection can use 90% floodlight and 10% laser. If the acquisition scene is weak indoor lighting, the floodlight is only slightly affected, although some lighting spots may still interfere, so 70% floodlight and 30% laser can be used. At medium brightness, 50% floodlight and 50% laser can be used. In a strong-light environment, 30% floodlight and 70% laser can be used. When the acquisition scene is an extremely strong light environment, such as the midday summer sun, the floodlight is heavily disturbed and the laser must carry the main share of the illumination, so 10% floodlight and 90% laser can be used.
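One plausible sketch of the scene-to-ratio lookup and the ratio-to-current conversion follows. The linear mapping from ratio to drive current and all table/function names are assumptions for illustration; the ratios and current ranges follow the examples in the text (with a monotonically decreasing flood share as brightness increases):

```python
# Assumed (flood share, laser share) per acquisition scene, from the examples above.
SCENE_RATIOS = {
    "extremely weak light": (0.9, 0.1),
    "weak light": (0.7, 0.3),
    "medium light": (0.5, 0.5),
    "strong light": (0.3, 0.7),
    "extremely strong light": (0.1, 0.9),
}

FLOOD_CURRENT_RANGE_MA = (5.0, 50.0)  # condenser drive current, per the example
LASER_CURRENT_RANGE_MA = (1.0, 10.0)  # VCSEL drive current, per the example

def currents_for_scene(scene: str) -> tuple[float, float]:
    """Map a scene to (first current intensity, second current intensity) in mA,
    scaling each share linearly across its emitter's current range."""
    flood_share, laser_share = SCENE_RATIOS[scene]
    f_lo, f_hi = FLOOD_CURRENT_RANGE_MA
    l_lo, l_hi = LASER_CURRENT_RANGE_MA
    first_current = f_lo + flood_share * (f_hi - f_lo)
    second_current = l_lo + laser_share * (l_hi - l_lo)
    return first_current, second_current
```

For instance, the medium-brightness scene (50%/50%) lands both emitters at the midpoint of their current ranges under this assumed linear scaling.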
The above alternative embodiments adapt to different scene conditions. Such a system can be used in computer vision, depth perception, three-dimensional imaging, and other applications that require fine control of light projection.
As a technical effect of this scheme, the proportion of floodlight and laser projected toward the acquisition object can be adjusted adaptively and dynamically according to the acquisition scene, and the light projection is precisely controlled by deriving the corresponding current intensities from that proportion, so the light-source output is flexibly tuned to actual needs and environmental conditions. A suitable projection ratio helps improve imaging quality and the accuracy of depth perception.
S14, obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map.
A first depth resolution map is acquired using floodlight, a second depth resolution map is acquired using floodlight and laser in a scene-dependent proportion, and the first depth resolution map is then optimized using the second to obtain the target depth resolution map.
In an optional embodiment, the obtaining the target depth resolution map according to the first depth resolution map and the second depth resolution map includes:
acquiring a first pixel value in the first depth resolution map, and acquiring a second pixel value corresponding to the first pixel value in the second depth resolution map;
calculating a pixel difference value and a pixel mean value between the first pixel value and the corresponding second pixel value;
and obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value.
Each pixel value in the first depth resolution map is a first pixel value, and the first pixel value refers to a depth or distance value of each part of the corresponding acquisition object in the first depth resolution map. Each pixel value in the second depth resolution map is a second pixel value, and the second pixel value refers to a depth or distance value of each part of the corresponding acquisition object in the second depth resolution map.
Since the first depth resolution map and the second depth resolution map have the same size, the number of first pixel values in the first depth resolution map is the same as the number of second pixel values in the second depth resolution map, and the first pixel values in the first depth resolution map and the second pixel values in the second depth resolution map are in one-to-one correspondence.
After the first pixel values in the first depth resolution map and the corresponding second pixel values in the second depth resolution map are obtained, a pixel difference value and a pixel mean value are calculated between each first pixel value and its corresponding second pixel value; that is, each first pixel value has one corresponding pixel difference value and one corresponding pixel mean value.
For each first pixel value in the first depth resolution map, a target pixel value can be obtained according to a pixel difference value and a pixel average value corresponding to the first pixel value, so that a target depth resolution map is obtained according to the target pixel value of each first pixel value.
Illustratively, assume that a desk is measured by a TOF sensor, wherein a first point on the desk is 5.001m from the TOF sensor, a second point on the desk is 5.002m from the TOF sensor, and a third point on the desk is 5.003m from the TOF sensor; the first pixel value corresponding to the first point in the first depth resolution map is 5.002, the first pixel value corresponding to the second point is 5.000, and the first pixel value corresponding to the third point is 5.005; the second pixel value corresponding to the first point in the second depth resolution map is 5.002, the second pixel value corresponding to the second point is 5.002, and the second pixel value corresponding to the third point is 5.001; then for the first point, the pixel difference between the first pixel value 5.002 and the second pixel value 5.002 is calculated to be 0, the pixel average value is calculated to be 5.002, for the second point, the pixel difference between the first pixel value 5.000 and the second pixel value 5.002 is calculated to be 0.002, the pixel average value is calculated to be 5.001, for the third point, the pixel difference between the first pixel value 5.005 and the second pixel value 5.001 is calculated to be 0.004, and the pixel average value is calculated to be 5.003. Finally, according to the pixel difference value 0 and the pixel mean value 5.002 corresponding to the first point, the pixel difference value 0.002 and the pixel mean value 5.001 corresponding to the second point, and the pixel difference value 0.004 and the pixel mean value 5.003 corresponding to the third point, a target depth resolution map can be obtained.
In an optional embodiment, the obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value includes:
judging whether the pixel difference value is larger than a preset difference value threshold value or not;
when the pixel difference value is larger than the preset difference value threshold value, replacing the first pixel value with the corresponding pixel mean value to obtain a target pixel value;
when the pixel difference value is not larger than the preset difference threshold, taking the first pixel value directly as the target pixel value;
and obtaining the target depth resolution map according to the target pixel value.
The pixel difference between the first depth resolution map and the second depth resolution map refers to the difference between the pixel values at the corresponding locations, representing the depth variation between the depth maps acquired at two different points in time. The preset difference threshold is a predefined threshold for determining whether the pixel difference value at the corresponding position in the two depth resolution maps is large enough.
If the pixel difference value is larger than the preset difference threshold, the first pixel value is replaced with the corresponding pixel mean value; replacing it with the mean smooths the depth change and reduces the influence of outliers. If the pixel difference value is not larger than the preset difference threshold, the original value of the corresponding pixel in the first depth resolution map is kept. The updated first depth resolution map is then the target depth resolution map.
In this alternative embodiment, whether the pixel difference value is larger than the preset difference threshold is judged. When it is not, the first pixel value is kept unchanged; when it is, a mean-replacement strategy replaces the first pixel value with the corresponding pixel mean value. The first depth resolution map is thereby updated so that noise and outliers in the depth map are filtered out, improving the quality and stability of the depth resolution map.
For example, assuming that the preset difference threshold is 0.0001: since the pixel difference value 0 corresponding to the first point is smaller than the preset difference threshold 0.0001, the first pixel value 5.002 corresponding to the first point in the first depth resolution map is kept unchanged, that is, the target pixel value corresponding to the first point is 5.002; since the pixel difference value 0.002 corresponding to the second point is greater than the preset difference threshold 0.0001, the first pixel value 5.000 corresponding to the second point is replaced with the corresponding pixel mean value, that is, the target pixel value corresponding to the second point is 5.001; since the pixel difference value 0.004 corresponding to the third point is greater than the preset difference threshold 0.0001, the first pixel value 5.005 corresponding to the third point is replaced with the corresponding pixel mean value, that is, the target pixel value corresponding to the third point is 5.003.
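The fusion rule of this alternative embodiment can be sketched in Python as follows; the function name and the use of plain lists of pixel values are illustrative assumptions, not part of the patent.

```python
def fuse_depth_maps(first, second, diff_threshold=0.0001):
    """Fuse two equally sized depth resolution maps pixel by pixel.

    Where the pixel difference exceeds the preset difference threshold,
    the first map's value is replaced by the pixel mean; otherwise the
    first map's value is kept as the target pixel value.
    """
    target = []
    for a, b in zip(first, second):
        diff = abs(a - b)      # pixel difference value
        mean = (a + b) / 2.0   # pixel mean value
        # Mean-replacement strategy: smooth outliers, keep stable pixels.
        target.append(mean if diff > diff_threshold else a)
    return target

# The three desk points from the worked example above.
target_map = fuse_depth_maps([5.002, 5.000, 5.005], [5.002, 5.002, 5.001])
```

With the desk values, the resulting target pixel values are 5.002, 5.001, and 5.003, matching the worked example.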
In an alternative embodiment, the method further comprises:
and generating a three-dimensional model of the acquisition object according to the target depth resolution map.
A target pixel value is obtained for each pixel point in the target depth resolution map, yielding a set of target pixel values. Taking the pixel point nearest the TOF sensor as the reference, that is, the minimum target pixel value in the set, the minimum target pixel value is subtracted from each target pixel value in the set to obtain a corresponding target pixel difference value. Each target pixel difference value is then converted into three-dimensional coordinates in space, so as to generate the corresponding three-dimensional model.
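A minimal sketch of this conversion, assuming the pixel grid indices supply the x and y coordinates and the depth difference relative to the nearest point supplies z; the function name and grid layout are assumptions for illustration.

```python
def depth_map_to_points(target_depth_map):
    """Convert a 2-D grid of target pixel values (depths in meters) into
    3-D points, referenced to the pixel nearest the TOF sensor."""
    min_depth = min(v for row in target_depth_map for v in row)
    points = []
    for y, row in enumerate(target_depth_map):
        for x, depth in enumerate(row):
            z = depth - min_depth  # target pixel difference value
            points.append((x, y, z))
    return points

# A tiny 2x2 target depth resolution map (illustrative values).
grid = [[5.002, 5.001],
        [5.003, 5.004]]
cloud = depth_map_to_points(grid)
```

The resulting point list can then be fed to any mesh-reconstruction or visualization step to form the three-dimensional model.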
In the method of the present application, the TOF sensor performs depth measurement on the acquisition object in two different states: the first depth resolution map is obtained by projecting floodlight, the second depth resolution map is obtained by projecting floodlight and laser, and the target depth resolution map is finally obtained from the two depth resolution maps. By measuring depth in different states (a first state and a second state), the TOF sensor can capture depth information of the same object under different conditions. In the first state, depth measurement uses only floodlight projection; in the second state, floodlight and laser projection are used together, so richer depth information can be acquired and higher-precision depth measurement provided. Combining the different projection modes (floodlight and laser) reduces depth measurement errors caused by illumination changes, sensor errors, or other environmental factors. The target depth resolution map generated from the first and second depth resolution maps therefore carries more comprehensive and accurate object depth information, improving the accuracy and stability of depth measurement and enhancing the depth resolution of the TOF sensor.
Referring to fig. 2, a functional block diagram of an apparatus for enhancing the resolution of a TOF sensor according to an embodiment of the present application is shown.
In some embodiments, the apparatus 20 for enhancing the resolution of a TOF sensor may comprise a plurality of functional modules consisting of computer program segments. The computer program of each program segment in the apparatus 20 can be stored in a memory and executed by at least one processor to perform the function of enhancing the TOF sensor resolution (see fig. 1 for details).
In this embodiment, the device 20 for enhancing the resolution of the TOF sensor can be divided into a plurality of functional modules according to the functions performed by the device. The functional module may include: a scene acquisition module 201, a first acquisition module 202, a second acquisition module 203, an image processing module 204 and a model generation module 205. A module as referred to in this application refers to a series of computer program segments, stored in a memory, capable of being executed by at least one processor and of performing a fixed function. In the present embodiment, the functions of the respective modules will be described in detail in the following embodiments.
The scene acquisition module 201 is configured to determine a target acquisition scene when an acquisition object is measured.
The acquisition object is the target object measured or sensed by a Time of Flight (TOF) sensor. The TOF sensor obtains the distance between the target object and the sensor by emitting light pulses and measuring the time of flight of the light. The target object can be made of different media: a solid such as a wall or a statue, or a liquid such as a water column or a waterfall. The target acquisition scene refers to the external environment in which the acquisition object is located when it is measured, and may include external ambient brightness, external air humidity, irradiation distance, and the like.
When the acquisition object is measured, the target acquisition scene in which it is located is determined. According to the acquisition object and acquisition scene, a suitable light projection angle can be selected, which reduces interference from external factors to a certain extent and yields a better acquisition result. For example, when a tree is photographed in sunlight, shooting toward the direction of the sunlight gives a bright scene, but direct sunlight may overexpose the camera and produce shadow artifacts; shooting away from the sunlight reduces exposure, but the overall profile of the tree is captured less effectively. In this embodiment, the irradiation angle defaults to frontal irradiation.
In an alternative embodiment, the determining the target acquisition scene when measuring the acquisition object includes:
acquiring the ambient brightness of the environment where the acquisition object is located;
and determining the target acquisition scene according to the segmentation interval of the ambient brightness and a preset ambient brightness threshold value.
The environment brightness comprises natural environment brightness and non-natural environment brightness, wherein the natural environment brightness refers to the brightness intensity emitted by sunlight irradiation or luminous animals and plants, the non-natural environment brightness refers to the brightness intensity emitted by a light source manufactured artificially, and the natural environment brightness and the non-natural environment brightness form the environment brightness together. The TOF sensor can be provided with a light sensor for measuring illumination intensity, the illumination intensity of the environment where the acquisition object is located is obtained through the light sensor, the illumination intensity is converted into an electric signal, and then the electric signal is read by a microcontroller in the TOF sensor, so that the environment brightness of the environment where the acquisition object is located is obtained. The light sensor may include, but is not limited to: photoresistors, photodiodes, phototransistors, etc.
In order to facilitate the processing of the ambient brightness, the ambient brightness threshold segmentation section may be preset, so that the brightness range is divided into a plurality of ambient brightness threshold sections, and an upper limit value and a lower limit value are defined for each of the ambient brightness threshold sections, which represent different levels of the ambient brightness. Different ambient brightness threshold intervals represent different ambient brightness levels, and each segment of ambient brightness threshold interval corresponds to one acquisition scene. The ambient brightness threshold segmentation interval may be adjusted according to the requirements of a particular application and the environmental characteristics.
And comparing the environment brightness of the environment where the acquisition object is positioned with each section of environment brightness threshold interval, and determining the section of environment brightness threshold interval as a target acquisition scene when the environment brightness of the environment where the acquisition object is positioned is in a certain section of environment brightness threshold interval. The environment brightness can be divided into five brightness intervals of extremely weak light, medium light, strong light and extremely strong light according to different environment brightness thresholds, wherein the extremely weak light corresponds to a first environment brightness threshold interval, the weak light corresponds to a second environment brightness threshold interval, the medium light corresponds to a third environment brightness threshold interval, the strong light corresponds to a fourth environment brightness threshold interval, the extremely strong light corresponds to a fifth environment brightness threshold interval, and each environment brightness threshold interval corresponds to different acquisition scenes. 
For example, assume that the first ambient brightness threshold interval corresponding to extremely weak light is [0cd, 50cd), the second interval corresponding to weak light is [50cd, 100cd), the third interval corresponding to medium light is [100cd, 200cd), the fourth interval corresponding to strong light is [200cd, 400cd), and the fifth interval corresponding to extremely strong light is [400cd, 100000cd], where cd (candela) is the unit used here to represent brightness. When the ambient brightness of the environment where the acquisition object is located is 40cd, the target acquisition scene is determined to be an extremely weak light environment, such as night or an extremely dim environment; when the ambient brightness is 70cd, the target acquisition scene is a weak light environment, such as dusk or weak indoor illumination; when the ambient brightness is 300cd, the target acquisition scene is a strong light environment, such as direct sunlight or a bright indoor environment; when the ambient brightness is 400cd, the target acquisition scene is an extremely strong light environment, such as the midday summer sun or the dazzling light of welding.
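The interval lookup described above can be sketched as follows; the bounds are the example values from this paragraph, and the function name is an assumption.

```python
def classify_acquisition_scene(brightness_cd):
    """Return the target acquisition scene for an ambient brightness (cd),
    using the five example threshold intervals: each interval includes its
    lower bound and excludes its upper bound."""
    intervals = [
        (0, 50, "extremely weak light"),
        (50, 100, "weak light"),
        (100, 200, "medium light"),
        (200, 400, "strong light"),
        (400, 100000, "extremely strong light"),
    ]
    for lower, upper, scene in intervals:
        if lower <= brightness_cd < upper:
            return scene
    raise ValueError("ambient brightness outside the configured intervals")
```

For instance, 40cd maps to extremely weak light and 400cd to extremely strong light, consistent with the examples above.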
The first acquisition module 202 is configured to project floodlight to the acquisition object in a first state, and acquire a first depth resolution map of the acquisition object during the floodlight projection by using a TOF sensor.
In addition to the basic structure of a TOF sensor, the TOF sensor in this embodiment can be provided with a vertical cavity surface emitting laser (VCSEL).
The TOF sensor is in the first state when the VCSEL is off and in the second state when the VCSEL is on; that is, the first state refers to the state in which the VCSEL of the TOF sensor is turned off, and the second state to the state in which it is turned on. In the first state, with the VCSEL off, the TOF sensor can only project floodlight onto the acquisition object. The TOF sensor acquires the flight time of the floodlight's emission and return, calculates the distance flown by the floodlight from the flight time, obtains the distance between each part of the acquisition object and the TOF sensor, and generates a depth resolution map from these distances. The depth resolution map is an image representing the depth information of the scene, showing the distance or depth value of each point and forming a depth map corresponding to the geometry of the scene. For convenience of the description below, the depth resolution map acquired when the TOF sensor projects floodlight onto the acquisition object is the first depth resolution map.
For example, assume that a TOF sensor is used to measure an automobile. When the floodlight projected onto the surface of the automobile passes the middle of a window, the measured round-trip flight time is 33.33 nanoseconds; from the speed of light of 300,000 km/s, the distance between the TOF sensor and the middle of the window is calculated to be 5m. Similarly, the distance between the head of the automobile and the TOF sensor is calculated to be 6m, and a first depth resolution map corresponding to the whole automobile can be obtained from these distances.
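The distance calculation in this example follows the standard time-of-flight relation, distance = speed of light × round-trip time / 2; a minimal sketch (function name assumed):

```python
SPEED_OF_LIGHT_M_PER_S = 3.0e8  # about 300,000 km/s

def tof_distance_m(round_trip_seconds):
    """One-way distance to the target from the round-trip flight time:
    the light travels out and back, so the product is halved."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# 33.33 ns round trip, as for the middle of the car window -> about 5 m.
window_distance = tof_distance_m(33.33e-9)
```

A 40 ns round trip would likewise give the 6 m distance quoted for the head of the automobile.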
In an alternative embodiment, the projecting floodlight toward the acquisition object, and acquiring, by the TOF sensor, a first depth resolution map of the acquisition object at the time of the floodlight projection includes:
converting the floodlight into condensed light by a condenser in the TOF sensor;
and projecting the condensed light onto the acquisition object, and acquiring, by the TOF sensor, a first depth resolution map of the acquisition object under the condensed-light projection.
A condenser is arranged in front of the lens or lens system of the TOF sensor. Floodlight is scattered light emitted at a wide angle with a relatively uniform distribution; the condenser converts this scattered floodlight into uniform condensed light, which effectively reduces the diffusion of the light and yields a more accurate result over the whole acquisition process.
For example, when a statue of a person is measured by a TOF sensor, uneven light and unstable angles of incidence cause irregular reflection at some groove contours, so that some details of the measured depth resolution of the statue are distorted, such as the distances of equidistant nose and ear positions. Uniform condensed light obtains more accurate reflection at the fine contour positions of the statue, ensures uniformity of the emission angle and the light, and yields more accurate distances, facilitating the subsequent formation of a three-dimensional image of the statue.
The second acquisition module 203 is configured to, in a second state, project floodlight and laser light to the acquisition object according to the target acquisition scene, and acquire, by using the TOF sensor, a second depth resolution map of the acquisition object during the floodlight and the laser light projection;
since the vertical cavity surface emitting laser is on in the second state, the TOF sensor can project flood light and laser light simultaneously to the acquisition object. Laser refers to a highly directional laser beam emitted by a VCSEL. The laser beam is difficult to diffuse when being disturbed by the outside, and has stability. The floodlight is easily interfered by external factors in a strong light environment, but the acquisition surface is larger than the laser, so that the range of the acquisition surface and the acquisition effect can be enhanced through the combined emission of the laser and the floodlight, and a more accurate second depth resolution map is obtained.
In an alternative embodiment, the projecting floodlight and laser light to the acquisition object according to the target acquisition scene includes:
determining the proportion of floodlight and laser projected to the acquisition object according to the acquisition scene;
determining a first current intensity of the projected floodlight and a second current intensity of the projected laser according to the proportion;
and projecting floodlight to the acquisition object according to the first current intensity and projecting laser to the acquisition object according to the second current intensity.
In different acquisition scenes, because the interference degrees of external light sources subjected to floodlight are different, floodlight and laser are required to be fused and projected according to a certain proportion, so that the external interference is effectively reduced, and a more accurate depth resolution map is obtained.
The first current intensity refers to the current used in the condenser to control the floodlight intensity. The second current intensity refers to the current used in the VCSEL to control the laser intensity. Both current intensities are proportional to the illumination intensity: the higher the current, the higher the illumination intensity, and the lower the current, the lower the illumination intensity.
For example, for a concentrator, assuming a current range of 5mA (milliamp) to 50mA, at 10mA current, the flood intensity is 10mW (milliwatts) and by increasing the current, for example to 20mA, the flood intensity is increased to 20mW. As another example, for a vertical cavity surface emitting laser, assuming a current range of 1mA to 10mA, at a current of 5mA the emitted laser intensity may be 1mW, by adjusting the current, for example, to 8mA, the laser intensity may be increased to 2mW.
For example, assume the acquisition scene is a dark night: the influence of external illumination is extremely low, the floodlight is hardly interfered with by other external light sources, and the acquisition range of the floodlight is larger than that of the laser, so projection can be performed at a ratio of 90% floodlight to 10% laser. If the acquisition scene has weak indoor illumination, the floodlight is only slightly affected, though some illuminated spots may still be influenced, so projection can be performed at 70% floodlight to 30% laser. At medium brightness, projection can be performed at 50% floodlight to 50% laser. In a strong light environment, projection can be performed at 30% floodlight to 70% laser. When the acquisition scene is an extremely strong light environment, such as the midday summer sun, the floodlight is greatly disturbed and the laser must serve as the main source, so projection can be performed at 10% floodlight to 90% laser.
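The scene-to-ratio-to-current chain can be sketched as follows. The projection shares are illustrative values in the spirit of the examples above, the current ranges are the earlier condenser and VCSEL examples, and the linear share-to-current mapping is an assumption, since the text only states that current and illumination intensity are proportional.

```python
# Illustrative flood/laser projection shares per acquisition scene.
SCENE_RATIOS = {
    "extremely weak light":   (0.9, 0.1),
    "weak light":             (0.7, 0.3),
    "medium light":           (0.5, 0.5),
    "strong light":           (0.3, 0.7),
    "extremely strong light": (0.1, 0.9),
}
FLOOD_CURRENT_RANGE_MA = (5.0, 50.0)  # condenser drive range (from the example)
LASER_CURRENT_RANGE_MA = (1.0, 10.0)  # VCSEL drive range (from the example)

def projection_currents_ma(scene):
    """Return (first current intensity, second current intensity) in mA,
    scaling each drive current linearly with its projection share --
    a hypothetical mapping for illustration."""
    flood_share, laser_share = SCENE_RATIOS[scene]
    f_lo, f_hi = FLOOD_CURRENT_RANGE_MA
    l_lo, l_hi = LASER_CURRENT_RANGE_MA
    first = f_lo + flood_share * (f_hi - f_lo)
    second = l_lo + laser_share * (l_hi - l_lo)
    return first, second
```

Under this mapping, a dark-night scene drives the condenser near the top of its range and the VCSEL near the bottom, and an extremely strong light scene does the reverse.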
The above alternative embodiments are adapted to different scene conditions. Such a system may be used in a variety of applications such as computer vision, depth perception, three-dimensional imaging, or other fields where fine control of light projection is desired.
The technical effect of this scheme is that the proportion of floodlight and laser projected onto the acquisition object can be adaptively and dynamically adjusted according to the acquisition scene, and the light projection is precisely controlled by determining the corresponding current intensity from that proportion, so that the output of the light source is flexibly adjusted to actual requirements and environmental conditions. A proper light source projection ratio helps to improve imaging quality and the accuracy of depth perception.
The image processing module 204 is configured to obtain a target depth resolution map according to the first depth resolution map and the second depth resolution map.
In this module, the first depth resolution map obtained through floodlight acquisition is optimized by the second depth resolution map obtained through floodlight and laser acquisition in different proportions, so as to obtain the target depth resolution map.
In an optional embodiment, the obtaining the target depth resolution map according to the first depth resolution map and the second depth resolution map includes:
Acquiring a first pixel value in the first depth resolution map, and acquiring a second pixel value corresponding to the first pixel value in the second depth resolution map;
calculating a pixel difference value and a pixel mean value between the first pixel value and the corresponding second pixel value;
and obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value.
Each pixel value in the first depth resolution map is a first pixel value, and the first pixel value refers to a depth or distance value of each part of the corresponding acquisition object in the first depth resolution map. Each pixel value in the second depth resolution map is a second pixel value, and the second pixel value refers to a depth or distance value of each part of the corresponding acquisition object in the second depth resolution map.
Since the first depth resolution map and the second depth resolution map have the same size, the number of first pixel values in the first depth resolution map is the same as the number of second pixel values in the second depth resolution map, and the first pixel values in the first depth resolution map and the second pixel values in the second depth resolution map are in one-to-one correspondence.
After the first pixel values in the first depth resolution map and the corresponding second pixel values in the second depth resolution map are obtained, a pixel difference value and a pixel mean value are calculated between each first pixel value and its corresponding second pixel value; that is, each first pixel value has one corresponding pixel difference value and one corresponding pixel mean value.
For each first pixel value in the first depth resolution map, a target pixel value can be obtained according to a pixel difference value and a pixel average value corresponding to the first pixel value, so that a target depth resolution map is obtained according to the target pixel value of each first pixel value.
Illustratively, assume that a desk is measured by a TOF sensor, wherein a first point on the desk is 5.001m from the TOF sensor, a second point on the desk is 5.002m from the TOF sensor, and a third point on the desk is 5.003m from the TOF sensor; the first pixel value corresponding to the first point in the first depth resolution map is 5.002, the first pixel value corresponding to the second point is 5.000, and the first pixel value corresponding to the third point is 5.005; the second pixel value corresponding to the first point in the second depth resolution map is 5.002, the second pixel value corresponding to the second point is 5.002, and the second pixel value corresponding to the third point is 5.001; then for the first point, the pixel difference between the first pixel value 5.002 and the second pixel value 5.002 is calculated to be 0, the pixel average value is calculated to be 5.002, for the second point, the pixel difference between the first pixel value 5.000 and the second pixel value 5.002 is calculated to be 0.002, the pixel average value is calculated to be 5.001, for the third point, the pixel difference between the first pixel value 5.005 and the second pixel value 5.001 is calculated to be 0.004, and the pixel average value is calculated to be 5.003. Finally, according to the pixel difference value 0 and the pixel mean value 5.002 corresponding to the first point, the pixel difference value 0.002 and the pixel mean value 5.001 corresponding to the second point, and the pixel difference value 0.004 and the pixel mean value 5.003 corresponding to the third point, a target depth resolution map can be obtained.
In an optional embodiment, the obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value includes:
judging whether the pixel difference value is larger than a preset difference value threshold value or not;
when the pixel difference value is larger than the preset difference value threshold value, replacing the first pixel value with the corresponding pixel mean value to obtain a target pixel value;
when the pixel difference value is not larger than the preset difference threshold, taking the first pixel value directly as the target pixel value;
and obtaining the target depth resolution map according to the target pixel value.
The pixel difference between the first depth resolution map and the second depth resolution map refers to the difference between the pixel values at the corresponding locations, representing the depth variation between the depth maps acquired at two different points in time. The preset difference threshold is a predefined threshold for determining whether the pixel difference value at the corresponding position in the two depth resolution maps is large enough.
If the pixel difference value is larger than the preset difference threshold, the first pixel value is replaced with the corresponding pixel mean value; replacing it with the mean smooths the depth change and reduces the influence of outliers. If the pixel difference value is not larger than the preset difference threshold, the original value of the corresponding pixel in the first depth resolution map is kept. The updated first depth resolution map is then the target depth resolution map.
In this alternative embodiment, whether the pixel difference value is larger than the preset difference threshold is judged. When it is not, the first pixel value is kept unchanged; when it is, a mean-replacement strategy replaces the first pixel value with the corresponding pixel mean value. The first depth resolution map is thereby updated so that noise and outliers in the depth map are filtered out, improving the quality and stability of the depth resolution map.
For example, assume the preset difference threshold is 0.0001. The pixel difference 0 at the first point is smaller than the threshold, so the first pixel value 5.002 at the first point in the first depth resolution map is kept unchanged; the target pixel value at the first point is 5.002. The pixel difference 0.002 at the second point is greater than the threshold, so the first pixel value 5.002 at the second point is updated to the corresponding pixel mean; the target pixel value at the second point is 5.003. The pixel difference 0.004 at the third point is greater than the threshold, so the first pixel value 5.005 at the third point is updated to the corresponding pixel mean; the target pixel value at the third point is 5.003.
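The threshold-and-mean update described above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the patented implementation; the second map's sample values (5.002, 5.004, 5.001) are assumed here so that the three points have differences of 0, 0.002, and 0.004.

```python
import numpy as np

def fuse_depth_maps(first_map, second_map, threshold=0.0001):
    """Fuse two depth resolution maps: keep the first map's value where
    the per-pixel difference is small, otherwise replace it with the
    per-pixel mean (the mean-replacement strategy described above)."""
    first_map = np.asarray(first_map, dtype=np.float64)
    second_map = np.asarray(second_map, dtype=np.float64)
    diff = np.abs(first_map - second_map)   # per-pixel difference
    mean = (first_map + second_map) / 2.0   # per-pixel mean
    # Where the difference exceeds the threshold, use the mean;
    # otherwise keep the first map's original value.
    return np.where(diff > threshold, mean, first_map)

# Three sample points (second map's values assumed for illustration).
first = np.array([5.002, 5.002, 5.005])
second = np.array([5.002, 5.004, 5.001])
target = fuse_depth_maps(first, second)
```

The same call works unchanged on full two-dimensional depth maps, since the comparison and selection are element-wise.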
The model generating module 205 is configured to generate a three-dimensional model of the acquisition object according to the target depth resolution map.
A target pixel value is obtained for each pixel point in the target depth resolution map, yielding a target pixel value set. Taking the pixel closest to the TOF sensor as the reference, i.e., the minimum target pixel value in the set, the minimum is subtracted from every target pixel value to obtain a corresponding target pixel difference. Each target pixel difference is then converted into three-dimensional coordinates in space, from which the corresponding three-dimensional model is generated.
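The conversion above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name `depth_map_to_points`, the focal-length scale factors `fx`/`fy`, and the centered principal point are all assumptions, since the patent does not specify the camera intrinsics used to map pixel positions to spatial coordinates.

```python
import numpy as np

def depth_map_to_points(target_map, fx=500.0, fy=500.0):
    """Convert a target depth resolution map into 3-D coordinates,
    referencing depth to the pixel closest to the TOF sensor
    (i.e., subtracting the minimum target pixel value)."""
    depth = np.asarray(target_map, dtype=np.float64)
    relative = depth - depth.min()          # target pixel differences
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]             # pixel grid indices
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0   # assumed principal point
    x = (xs - cx) * depth / fx              # back-project to X
    y = (ys - cy) * depth / fy              # back-project to Y
    z = relative                            # depth relative to nearest pixel
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A 2x2 toy depth map; the pixel nearest the sensor becomes z = 0.
points = depth_map_to_points(np.array([[5.0, 5.0], [5.0, 5.1]]))
```

The resulting N-by-3 point array can then be meshed or rendered to form the three-dimensional model of the acquisition object.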
In this method, the TOF sensor performs depth measurement on the acquisition object in two different states: the first depth resolution map is acquired under flood-light projection, the second depth resolution map is acquired under combined flood-light and laser projection, and the target depth resolution map is then obtained from the two. Measuring in different states (a first state and a second state) lets the TOF sensor capture depth information of the same object under different conditions. In the first state, only flood-light projection is used; in the second state, flood light and laser are projected together, so richer depth information can be acquired and higher-precision measurement is possible. Combining the two projection modes (flood light and laser) reduces depth-measurement errors caused by illumination changes, sensor error, or other environmental factors, so the target depth resolution map generated from the first and second depth resolution maps carries more comprehensive and accurate depth information about the object. This improves the accuracy and stability of depth measurement and thereby enhances the depth resolution of the TOF sensor.
It should be understood that the various modifications and embodiments of the method for enhancing the resolution of the TOF sensor provided in the foregoing embodiments are equally applicable to the apparatus for enhancing the resolution of the TOF sensor in this embodiment, and the implementation procedure of the apparatus for enhancing the resolution of the TOF sensor in this embodiment will be apparent to those skilled in the art from the foregoing detailed description of the method for enhancing the resolution of the TOF sensor, and will not be described in detail herein for brevity of description.
Embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs all or part of the steps of the method of enhancing the resolution of a TOF sensor.
Referring to fig. 3, a schematic structural diagram of a TOF sensor 3 according to an embodiment of the present application is shown. In the preferred embodiment of the present application, the TOF sensor 3 comprises a memory 31, at least one processor 32, and at least one communication bus 33.
It will be appreciated by those skilled in the art that the structure of the TOF sensor shown in fig. 3 does not limit the embodiments of the present application; it may be either a bus-type structure or a star-type structure, and the TOF sensor 3 may also include more or fewer hardware or software components than illustrated, or a different arrangement of components.
In some embodiments, the TOF sensor 3 is a device capable of automatically performing numerical calculations and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits, programmable gate arrays, digital processors, embedded devices, and the like. The TOF sensor 3 may further comprise a client device, including but not limited to any electronic product capable of man-machine interaction with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, etc.
It should be noted that the TOF sensor 3 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of protection of the present application and are incorporated herein by reference.
In some embodiments, the memory 31 stores a computer program which, when executed by the at least one processor 32, implements all or part of the steps of the method of enhancing the resolution of a TOF sensor described above. The memory 31 includes read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc memory, magnetic tape memory, or any other computer-readable medium that can carry or store data. Further, the computer-readable storage medium may include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function, and the like.
In some embodiments, the at least one processor 32 is a Control Unit (Control Unit) of the TOF sensor 3, connects the various components of the entire TOF sensor 3 using various interfaces and lines, and performs various functions and processes of the TOF sensor 3 by running or executing programs or modules stored in the memory 31, and invoking data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of a method of enhancing the resolution of a TOF sensor described in embodiments of the present application; or to implement all or part of the functionality of the means to enhance the resolution of the TOF sensor. The at least one processor 32 may be comprised of integrated circuits, such as a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functionality, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
In some embodiments, the at least one communication bus 33 is arranged to enable connected communication between the memory 31 and the at least one processor 32 or the like. Although not shown, the TOF sensor 3 may further comprise a power source (such as a battery) for powering the various components, preferably the power source may be logically connected to the at least one processor 32 via a power management device, whereby the functions of managing charging, discharging, and power consumption are performed by the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The TOF sensor 3 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium that includes instructions for causing a TOF sensor or processor (processor) to perform portions of the methods described in various embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A method of enhancing the resolution of a TOF sensor, the method comprising:
determining a target acquisition scene when an acquisition object is measured;
in a first state, floodlight is projected to the acquisition object, and a first depth resolution map of the acquisition object during floodlight projection is acquired through a TOF sensor;
in a second state, floodlight and laser are projected to the acquisition object according to the target acquisition scene, and a second depth resolution map of the acquisition object during the floodlight and laser projection is acquired through the TOF sensor;
and obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map.
2. The method of claim 1, wherein determining a target acquisition scene at which to measure an acquisition object comprises:
acquiring the ambient brightness of the environment where the acquisition object is located;
determining the target acquisition scene according to the environmental brightness and a preset environmental brightness threshold segmentation interval;
the preset environment brightness threshold segmentation interval comprises a plurality of environment brightness threshold intervals, and each environment brightness threshold interval corresponds to one acquisition scene.
3. The method of claim 1, wherein the projecting flood light and laser light onto the acquisition object according to the target acquisition scene comprises:
determining the proportion of floodlight and laser projected to the acquisition object according to the acquisition scene;
determining a first current intensity of the projected floodlight and a second current intensity of the projected laser according to the proportion;
and projecting floodlight to the acquisition object according to the first current intensity and projecting laser to the acquisition object according to the second current intensity.
4. The method of claim 3, wherein the projecting flood light onto the acquisition object and acquiring a first depth resolution map of the acquisition object as the flood light is projected by the TOF sensor comprises:
converting the flood light to a spotlight by a spotlight in the TOF sensor;
and projecting the light condensation to the acquisition object, and acquiring a first depth resolution map of the acquisition object under the projection of the light condensation through the TOF sensor.
5. The method of claim 4, wherein obtaining a target depth resolution map from the first depth resolution map and the second depth resolution map comprises:
acquiring a first pixel value in the first depth resolution map, and acquiring a second pixel value corresponding to the first pixel value in the second depth resolution map;
calculating a pixel difference value and a pixel mean value between the first pixel value and the corresponding second pixel value;
and obtaining the target depth resolution map according to the pixel difference value and the pixel mean value corresponding to the first pixel value.
6. The method of claim 5, wherein the obtaining the target depth resolution map from the pixel differences and the pixel averages corresponding to the first pixel values comprises:
judging whether the pixel difference value is larger than a preset difference value threshold value or not;
when the pixel difference value is larger than the preset difference value threshold value, replacing the first pixel value with the corresponding pixel mean value to obtain a target pixel value;
when the pixel difference value is smaller than the preset difference value threshold, taking the first pixel value as the target pixel value;
and obtaining the target depth resolution map according to the target pixel value.
7. The method of enhancing the resolution of a TOF sensor according to any one of claims 1 to 6, further comprising:
and generating a three-dimensional model of the acquisition object according to the target depth resolution map.
8. An apparatus for enhancing the resolution of a TOF sensor, the apparatus comprising:
the scene acquisition module is used for determining a target acquisition scene when the acquisition object is measured;
the first acquisition module is used for projecting floodlight to the acquisition object in a first state and acquiring a first depth resolution map of the acquisition object in the floodlight projection through the TOF sensor;
the second acquisition module is used for projecting floodlight and laser to the acquisition object according to the target acquisition scene in a second state, and acquiring a second depth resolution map of the acquisition object during the floodlight and laser projection through the TOF sensor;
and the image processing module is used for obtaining a target depth resolution map according to the first depth resolution map and the second depth resolution map.
9. A TOF sensor comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of enhancing the resolution of a TOF sensor according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method of enhancing the resolution of a TOF sensor according to any one of claims 1 to 7.
CN202311565712.3A 2023-11-21 2023-11-21 Method for enhancing resolution of TOF sensor and related equipment Pending CN117607900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311565712.3A CN117607900A (en) 2023-11-21 2023-11-21 Method for enhancing resolution of TOF sensor and related equipment


Publications (1)

Publication Number Publication Date
CN117607900A true CN117607900A (en) 2024-02-27


Similar Documents

Publication Publication Date Title
TWI714131B (en) Control method, microprocessor, computer-readable storage medium and computer device
RU2729045C2 (en) Adaptive lighting system for mirror component and method for controlling adaptive lighting system
US9648694B2 (en) Lighting systems and methods providing active glare control
US10156437B2 (en) Control method of a depth camera
US20210065392A1 (en) Optimized exposure control for improved depth mapping
US8797385B2 (en) Robot device and method of controlling robot device
KR20200103832A (en) LIDAR-based distance measurement using hierarchical power control
CN108845332B (en) Depth information measuring method and device based on TOF module
US10313601B2 (en) Image capturing device and brightness adjusting method
CN103548423A (en) LED lamp comprising a power regulating device
CN112363150B (en) Calibration method, calibration controller, electronic device and calibration system
US20200225350A1 (en) Depth information acquisition system and method, camera module, and electronic device
CN104254174A (en) Lighting system
US10616561B2 (en) Method and apparatus for generating a 3-D image
CN113776449A (en) Tunnel deformation monitoring system and method based on machine vision self-adaption
CN117607900A (en) Method for enhancing resolution of TOF sensor and related equipment
KR20120000234A (en) The method of auto-exposure control for white light 3d scanner using an illuminometer
CN117119310A (en) Light supplementing device, system and method for wearable scanning equipment
CN111189840A (en) Paper defect detection method with near-field uniform illumination
EP3832533A1 (en) Face illumination control system and method
WO2022148769A1 (en) Time-of-flight demodulation circuitry, time-of-flight demodulation method, time-of-flight imaging apparatus, time-of-flight imaging apparatus control method
CN213780284U (en) Photosensitive performance test system of photosensitive element
CN117355006B (en) Method and device for illuminating solar flashlight, solar flashlight and storage medium
CN116503369B (en) Deformation monitoring method of structure and image exposure parameter adjusting method
CN112950691B (en) Control method and device for measuring depth information, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination