CN114114289A - Optimization method and system of time-of-flight sensor - Google Patents

Optimization method and system of time-of-flight sensor

Info

Publication number
CN114114289A
CN114114289A
Authority
CN
China
Prior art keywords
modulation frequency
time
map
scene
confidence
Prior art date
Legal status
Pending
Application number
CN202010904873.0A
Other languages
Chinese (zh)
Inventor
洪嘉良 (Hong Jialiang)
卢一斌 (Lu Yibin)
Current Assignee
Shending Technology Nanjing Co ltd
Original Assignee
Shending Technology Nanjing Co ltd
Priority date
Filing date
Publication date
Application filed by Shending Technology Nanjing Co ltd filed Critical Shending Technology Nanjing Co ltd
Priority to CN202010904873.0A
Publication of CN114114289A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/08: Systems determining position data of a target for measuring distance only
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The disclosure relates to a time-of-flight sensor, and discloses a method and a system for optimizing the time-of-flight sensor. The method comprises the following steps: detecting with the time-of-flight sensor and obtaining an overall confidence of the detection result; if the overall confidence is lower than a preset first threshold, extracting characteristic parameters of the current scene, inputting the extracted characteristic parameters into a pre-trained machine learning model for scene type recognition, determining a new modulation frequency according to the scene type output by the machine learning model, and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.

Description

Optimization method and system of time-of-flight sensor
Technical Field
The present disclosure relates to time-of-flight sensors, and more particularly to techniques for optimizing time-of-flight sensors.
Background
TOF is the abbreviation of Time of Flight. A TOF sensor emits modulated near-infrared light, which is reflected when it meets an object. By calculating the time difference or phase difference between emission and reflection, the sensor converts the measurement into the distance of the photographed scene and thereby generates depth information. In addition, combined with images from a conventional camera, the TOF sensor can present the three-dimensional outline of an object as a topographic map in which different colors represent different distances.
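For a continuous-wave ToF sensor, the distance follows from the measured phase shift as d = c * Δφ / (4π * f_mod), and the modulation frequency f_mod fixes the unambiguous range c / (2 * f_mod); this is why the choice of modulation frequency determines the achievable depth measurement range discussed below. The short Python sketch that follows is added for reference only and is not part of the patent.

```python
# Illustrative sketch (not part of the patent): how a continuous-wave ToF sensor
# turns a measured phase shift into distance, and how the modulation frequency
# bounds the unambiguous measurement range.
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by a phase shift at the given modulation frequency."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum distance measurable before the phase wraps around (aliasing)."""
    return C / (2.0 * mod_freq_hz)

# Example: roughly 7.5 m of unambiguous range at 20 MHz, about 1.5 m at 100 MHz.
print(unambiguous_range(20e6), unambiguous_range(100e6))
```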
When the ToF sensor detects a scene and the resulting confidence is too low, reliable depth values cannot be obtained, so sufficient depth information cannot be established on the depth map. There are generally two reasons for low confidence: first, the modulation frequency setting is not suited to the scene currently in use; second, the scene contains objects with strong optical absorption (e.g., black or transparent objects).
In this situation, the main approach in the prior art is to adjust the operating parameters of the ToF sensor. Most methods assign different modulation frequencies to several different frames and then obtain a larger depth measurement range through a de-aliasing algorithm.
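For context, the snippet below sketches this prior-art dual-frequency idea: two frames captured at different modulation frequencies, each ambiguous on its own, are reconciled by searching for the depth consistent with both wrapped measurements. The brute-force search is only one simple way to de-alias and is an illustrative assumption, not the method of this disclosure.

```python
# Minimal sketch of prior-art dual-frequency de-aliasing (illustrative only).
import math

C = 299_792_458.0  # speed of light, m/s

def dealias_two_freq(phase1, f1, phase2, f2, max_range):
    """Search over wrap counts for the depth that best explains both
    wrapped phase measurements taken at modulation frequencies f1 and f2."""
    r1, r2 = C / (2 * f1), C / (2 * f2)      # per-frequency unambiguous ranges
    d1 = C * phase1 / (4 * math.pi * f1)     # wrapped (aliased) distances
    d2 = C * phase2 / (4 * math.pi * f2)
    best, best_err = None, float("inf")
    for n1 in range(int(max_range / r1) + 1):
        for n2 in range(int(max_range / r2) + 1):
            err = abs((d1 + n1 * r1) - (d2 + n2 * r2))
            if err < best_err:
                best_err = err
                best = 0.5 * ((d1 + n1 * r1) + (d2 + n2 * r2))
    return best  # extended-range depth estimate
```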
A first disadvantage of the prior art is that the modulation frequency, although adjustable, is set statically and cannot automatically adapt to different scenes.
A second disadvantage is that objects in the same scene may have different confidence levels, so using the same operating parameters for the whole frame yields relatively little depth information for the low-confidence objects. A further important weakness of current optical detection is that it is easily disturbed by transparent objects such as glass containers.
Disclosure of Invention
A first object of the present disclosure is to provide a method and a system for optimizing a time-of-flight sensor, which can obtain the most suitable depth measurement range and the best depth measurement result in different scenarios without manually adjusting parameters.
A second object of the present disclosure is to enable sufficient depth information to be established on the depth map even for low-confidence or transparent objects in the scene.
The application discloses a method for optimizing a time-of-flight sensor, comprising:
detecting with the time-of-flight sensor and obtaining an overall confidence of the detection result;
if the overall confidence is lower than a preset first threshold, extracting characteristic parameters of the current scene, inputting the extracted characteristic parameters into a pre-trained machine learning model for scene type recognition, determining a new modulation frequency according to the scene type output by the machine learning model, and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.
In a preferred example, after detecting with the time-of-flight sensor and obtaining the overall confidence of the detection result, the method further comprises:
if the overall confidence is higher than the first threshold, fusing the depth map and the color map detected by the time-of-flight sensor to obtain a color-depth fusion map;
performing object recognition and object segmentation in the color-depth fusion map to obtain at least one object region;
for each object region, if the local confidence of the object region is lower than a preset second threshold and its color darkness exceeds a preset third threshold, adjusting the local confidence threshold of the object region so as to retain as much of the region's depth information as possible.
In a preferred embodiment, the depth map is obtained from phase information detected by the time-of-flight sensor.
In a preferred embodiment, performing object recognition and object segmentation in the color-depth fusion map further comprises:
identifying surface normals, occlusion boundaries, and the surface occlusions of transparent objects with a convolutional deep neural network to obtain a surface normal map, an object boundary map, and a transparent object map;
performing object recognition and object segmentation according to the surface normal map, the object boundary map, and the transparent object map, and segmenting the region where each recognized object is located from the color-depth fusion map to serve as the object region.
In a preferred embodiment, the characteristic parameters of the current scene include one of the following or any combination thereof:
a confidence histogram of the detection result, and edges, corner points and blobs extracted from a color image captured by the time-of-flight sensor.
In a preferred example, the scene type includes one of the following or any combination thereof:
indoor, outdoor, bright light, dim light, far distance and near distance.
In a preferred embodiment, determining a new modulation frequency according to the scene type output by the machine learning model further comprises:
using a first modulation frequency for an indoor scene type and a second modulation frequency for an outdoor scene type, wherein the first modulation frequency is greater than the second modulation frequency;
using a third modulation frequency for a near-distance scene type and a fourth modulation frequency for a far-distance scene type, wherein the third modulation frequency is greater than the fourth modulation frequency;
using a fifth modulation frequency for a dim-light scene type and a sixth modulation frequency for a bright-light scene type, wherein the fifth modulation frequency is greater than the sixth modulation frequency.
The application also discloses a system for optimizing a time-of-flight sensor, comprising:
a time-of-flight sensor;
a scene recognition model, which is a machine learning model for performing scene type recognition according to input characteristic parameters;
an overall confidence judgment module, for obtaining the overall confidence of the detection result of the time-of-flight sensor, and, if the overall confidence is lower than a preset first threshold, extracting the characteristic parameters of the current scene and inputting the extracted characteristic parameters into the pre-trained scene recognition model;
a modulation frequency setting module, for determining a new modulation frequency according to the scene type output by the scene recognition model and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.
In a preferred embodiment, the system further comprises:
a color-depth fusion map module, for fusing the depth map and the color map detected by the time-of-flight sensor when the overall confidence is higher than the first threshold, to obtain a color-depth fusion map;
an object recognition and segmentation module, for performing object recognition and object segmentation in the color-depth fusion map to obtain at least one object region;
a local confidence threshold adjustment module, for adjusting, for each object region, the local confidence threshold of the object region so as to retain as much of its depth information as possible if the local confidence of the object region is lower than a preset second threshold and its color darkness exceeds a preset third threshold.
The application also discloses a system for optimizing a time-of-flight sensor, comprising:
a memory for storing computer-executable instructions; and
a processor, coupled with the memory, for implementing the steps in the method described above when executing the computer-executable instructions.
The present application also discloses a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the steps in the method as described above.
In the embodiments of the present disclosure, by detecting the overall confidence of a scene, ToF sensor operating parameters suited to that scene are dynamically assigned with the help of AI, so as to obtain the most suitable depth measurement range and the best depth measurement result. In addition, for objects with low confidence or transparency in the scene, more depth features are obtained by combining the color-depth fusion map, AI object recognition, and an AI neural network, so that sufficient depth information can finally be established on the depth map.
The present disclosure describes a large number of technical features distributed among the various technical solutions; listing every possible combination of these features (i.e., every technical solution) would make the description excessively long. To avoid this, the technical features disclosed in the summary above, the technical features disclosed in the embodiments and examples below, and the technical features shown in the drawings may be freely combined with one another to form new technical solutions (all of which should be regarded as described in this specification), unless such a combination of features is technically infeasible. For example, if one example discloses features A + B + C and another example discloses features A + B + D + E, where C and D are equivalent technical means for the same purpose (so that technically only one of them would be used, not both) and feature E can technically be combined with feature C, then the solution A + B + C + D should not be considered as described, because it is not technically feasible, while the solution A + B + C + E should be considered as described.
Drawings
FIG. 1 is a schematic flow chart of a method for optimizing a time-of-flight sensor according to a first embodiment of the present disclosure;
fig. 2 is a schematic diagram of an optimized system architecture for a time-of-flight sensor according to a second embodiment of the present disclosure.
Detailed Description
In the following description, numerous technical details are set forth to provide the reader with a better understanding of the present disclosure. However, those of ordinary skill in the art will understand that the claimed embodiments of the present disclosure can be practiced without some of these technical details and with various changes and modifications based on the following embodiments.
Explanation of some terms:
ToF: Time of Flight, i.e. time-of-flight ranging.
AI: Artificial Intelligence.
RGB-D: an image that includes depth information, i.e. an ordinary RGB three-channel color image plus a depth map. A depth map is an image or image channel that contains information about the distance from a viewpoint to the surfaces of scene objects; it is similar to a grayscale image, except that each pixel value is the actual distance from the sensor to the object. Usually the RGB image and the depth map are registered, so there is a one-to-one correspondence between their pixels.
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings.
A first embodiment of the present disclosure relates to a method for optimizing a time-of-flight sensor, the flow of which is shown in fig. 1, the method comprising the steps of:
in step 101, a time-of-flight sensor is used for detection, and the overall confidence of the detection result is obtained.
Then step 102 is entered to determine whether the overall confidence is lower than a preset first threshold. If it is, step 103 is entered; otherwise, step 106 is entered.
In step 103, the characteristic parameters of the current scene are extracted. Optionally, in one embodiment, the characteristic parameters of the current scene may include a confidence histogram of the detection result and the edges, corner points, blobs, and the like extracted from a color map captured by the time-of-flight sensor.
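A minimal sketch of this feature extraction step is given below; the OpenCV operators, histogram bin count, and feature layout are illustrative assumptions rather than choices specified by the patent.

```python
# Hypothetical sketch of step 103: build a feature vector from the ToF confidence
# map and the color image (confidence histogram + edge/corner/blob statistics).
import cv2
import numpy as np

def extract_scene_features(confidence_map: np.ndarray, color_img: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)

    # Normalized confidence histogram of the detection result.
    conf_hist, _ = np.histogram(confidence_map, bins=32, range=(0.0, 1.0))
    conf_hist = conf_hist / max(conf_hist.sum(), 1)

    # Edge density from a Canny edge map.
    edges = cv2.Canny(gray, 50, 150)
    edge_density = np.count_nonzero(edges) / edges.size

    # Corner count (Shi-Tomasi) and blob count (simple blob detector).
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01, minDistance=5)
    corner_count = 0 if corners is None else len(corners)
    blob_count = len(cv2.SimpleBlobDetector_create().detect(gray))

    return np.concatenate([conf_hist, [edge_density, corner_count, blob_count]])
```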
Then step 104 is entered: the extracted characteristic parameters are input into a pre-trained machine learning model for scene type recognition, and the output scene type is obtained. Optionally, in one embodiment, the scene types may include indoor, outdoor, bright, dim, far, near, and so on.
Machine learning models that may be used include: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), AdaBoost, naive Bayes classifier, artificial neural network (NN), and the like.
The machine learning model is trained as follows: collect characteristic parameters for a number of scenes, assign each scene a label representing its scene type to form a training set, and train with the data in the training set using the training method appropriate to the selected machine learning model. After training, a machine learning model with a scene-type classification function is obtained.
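A hedged sketch of such training is shown below, using an SVM from scikit-learn; the model choice, pipeline, and label encoding are illustrative assumptions and not requirements of the patent.

```python
# Illustrative training of a scene-type classifier (the model used in step 104);
# the SVM, the scaler, and the label set below are assumptions for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SCENE_TYPES = ["indoor", "outdoor", "bright", "dim", "far", "near"]

def train_scene_classifier(features: np.ndarray, labels: np.ndarray):
    """features: (n_samples, n_features) vectors extracted as in step 103;
    labels: integer indices into SCENE_TYPES, set manually per training scene."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(features, labels)
    return model

# Inference (step 104): scene_type = SCENE_TYPES[int(model.predict(x.reshape(1, -1))[0])]
```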
Thereafter, step 105 is entered, a new modulation frequency is determined according to the scene type output by the machine learning model, and the modulation frequency of the time-of-flight sensor is updated by using the new modulation frequency. The time-of-flight sensor can then re-detect using the new modulation frequency.
In step 106, if the overall confidence is higher than the first threshold, the depth map detected by the time-of-flight sensor and the color map are fused to obtain a color-depth fusion map (e.g., an RGB-D fusion map). The depth map is obtained from the phase information detected by the time-of-flight sensor.
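A minimal sketch of this fusion is shown below, assuming the depth map and the color map are already registered pixel-to-pixel (as noted in the term explanation above); the channel layout is an illustrative assumption.

```python
# Hypothetical sketch of step 106: stack registered color and depth into an RGB-D map.
import numpy as np

def fuse_rgbd(color_img: np.ndarray, depth_map: np.ndarray) -> np.ndarray:
    """color_img: (H, W, 3) uint8 image; depth_map: (H, W) float32 distances.
    Returns an (H, W, 4) fusion map whose last channel is the depth."""
    assert color_img.shape[:2] == depth_map.shape, "maps must be registered"
    depth = depth_map.astype(np.float32)[..., None]
    return np.concatenate([color_img.astype(np.float32), depth], axis=-1)
```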
Then, step 107 is entered, and object recognition and object segmentation are performed in the color-depth fusion map to obtain at least one object region.
Then step 108 is entered. For each object region, if the local confidence of the object region is lower than the preset second threshold and its color darkness exceeds the preset third threshold, the local confidence threshold of that object region is adjusted to retain as much of its depth information as possible. That is, the depth information of dark, low-confidence object regions is processed with the adjusted confidence threshold, while other regions are handled with the prior-art method. Optionally, in one embodiment, the color darkness is determined from the RGB information in the color-depth fusion map, and the color is determined to be dark if the gray value is greater than the third threshold. Optionally, in one embodiment, because transparent and black objects behave similarly for a time-of-flight sensor, the same approach may be applied to optimize both.
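The sketch below illustrates this per-region handling; the darkness measure, the threshold values, and the way the confidence threshold is relaxed are all illustrative assumptions, not values given in the patent.

```python
# Hypothetical sketch of step 108: relax the local confidence threshold for a
# dark, low-confidence object region so that more of its depth samples are kept.
import numpy as np

def filter_region_depth(depth, confidence, region_mask, color_img,
                        second_threshold=0.4, third_threshold=0.7,
                        base_conf_threshold=0.3, relaxed_conf_threshold=0.1):
    """depth: (H, W) float array; confidence: (H, W); region_mask: (H, W) bool;
    color_img: (H, W, 3). Threshold values are placeholders."""
    local_conf = float(confidence[region_mask].mean())
    # Darkness of the region derived from the RGB channels of the fusion map.
    darkness = 1.0 - float(color_img[region_mask].mean()) / 255.0

    if local_conf < second_threshold and darkness > third_threshold:
        conf_thr = relaxed_conf_threshold    # keep as much depth as possible
    else:
        conf_thr = base_conf_threshold       # prior-art style filtering

    out = depth.copy()
    dropped = region_mask & (confidence < conf_thr)
    out[dropped] = np.nan                    # discard only unreliable pixels in this region
    return out
```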
Steps 106 to 108 above are for handling black or transparent objects; they may be omitted in scenes where such handling is not required. In that case, if the overall confidence is not lower than the preset first threshold, processing may proceed according to the prior art.
Optionally, in one embodiment, step 107 may further comprise: identifying the surface normals, occlusion boundaries, and transparent-object surface occlusions with a convolutional deep neural network to obtain a surface normal map, an object boundary map, and a transparent object map; then performing object recognition and object segmentation according to the surface normal map, the object boundary map, and the transparent object map, and segmenting the region where each recognized object is located from the color-depth fusion map to serve as an object region. In another embodiment, other algorithms may be used for object recognition and object segmentation.
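The patent does not specify a network architecture; purely as an illustration, the PyTorch sketch below shows the kind of convolutional network step 107 could rely on: a shared encoder over the RGB-D input with three heads predicting the surface normal map, the occlusion boundary map, and the transparent object map.

```python
# Hypothetical architecture sketch for step 107 (an assumption, not the patent's network).
import torch
import torch.nn as nn

class SurfaceCueNet(nn.Module):
    def __init__(self, in_channels: int = 4):  # RGB-D input
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.normal_head = nn.Conv2d(64, 3, kernel_size=1)       # surface normal map
        self.boundary_head = nn.Conv2d(64, 1, kernel_size=1)     # occlusion boundary map
        self.transparent_head = nn.Conv2d(64, 1, kernel_size=1)  # transparent object map

    def forward(self, rgbd: torch.Tensor):
        feats = self.encoder(rgbd)
        normals = self.normal_head(feats)
        boundaries = torch.sigmoid(self.boundary_head(feats))
        transparency = torch.sigmoid(self.transparent_head(feats))
        return normals, boundaries, transparency

# The three output maps would then feed a conventional segmentation step that cuts
# out each recognized object's region from the color-depth fusion map.
```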
Optionally, in one embodiment, determining a new modulation frequency in step 105 according to the scene type output by the machine learning model may further include the following rules (a simple lookup sketch follows this list):
using a first modulation frequency for an indoor scene type and a second modulation frequency for an outdoor scene type, wherein the first modulation frequency is greater than the second modulation frequency;
using a third modulation frequency for a near-distance scene type and a fourth modulation frequency for a far-distance scene type, wherein the third modulation frequency is greater than the fourth modulation frequency;
using a fifth modulation frequency for a dim-light scene type and a sixth modulation frequency for a bright-light scene type, wherein the fifth modulation frequency is greater than the sixth modulation frequency.
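The mapping below is a lookup sketch that respects the ordering rules above; the concrete frequency values are illustrative assumptions, since the patent only constrains their relative order.

```python
# Hypothetical scene-type -> modulation-frequency table obeying the rules above
# (indoor > outdoor, near > far, dim > bright); the numbers are assumptions.
MOD_FREQ_BY_SCENE_HZ = {
    "indoor": 80e6,  "outdoor": 20e6,   # first  > second
    "near":   100e6, "far":     20e6,   # third  > fourth
    "dim":    60e6,  "bright":  30e6,   # fifth  > sixth
}

def choose_modulation_frequency(scene_type: str, current_hz: float) -> float:
    """Return the frequency for the recognized scene type; keep the current
    setting if the type is not covered by the table."""
    return MOD_FREQ_BY_SCENE_HZ.get(scene_type, current_hz)
```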
In this embodiment, detecting the overall confidence of the scene makes it possible to assign a modulation frequency suited to that scene: the best time-of-flight sensor operating parameters are found with the help of AI to obtain the most appropriate depth measurement range. Object recognition that combines AI with the color-depth fusion map lets objects with relatively low local confidence in the scene obtain enough depth information, and the information in the color-depth fusion map together with a deep convolutional network is used to restore the depth information of transparent objects.
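Tying the steps together, the loop below sketches one pass of this optimization flow (steps 101 to 108). The sensor interface (read_frame, modulation_frequency, set_modulation_frequency), the segment_objects helper, and the threshold value are hypothetical names and numbers used only to connect the earlier sketches; they do not come from the patent.

```python
# End-to-end sketch of steps 101-108, reusing the helper sketches above.
# read_frame / set_modulation_frequency / segment_objects are hypothetical.
def optimize_tof_frame(sensor, scene_model, first_threshold=0.5):
    depth, confidence, color = sensor.read_frame()                       # step 101
    overall_conf = float(confidence.mean())

    if overall_conf < first_threshold:                                    # step 102
        features = extract_scene_features(confidence, color)              # step 103
        scene_type = SCENE_TYPES[int(scene_model.predict(
            features.reshape(1, -1))[0])]                                  # step 104
        new_freq = choose_modulation_frequency(
            scene_type, sensor.modulation_frequency)                       # step 105
        sensor.set_modulation_frequency(new_freq)
        return None                                                        # re-detect with the new frequency

    rgbd = fuse_rgbd(color, depth)                                         # step 106
    for region_mask in segment_objects(rgbd):                              # step 107 (see sketch above)
        depth = filter_region_depth(depth, confidence, region_mask, color) # step 108
    return depth
```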
In another embodiment, steps 103 to 105 can be omitted, retaining only steps 101, 102 and 106 to 108. When step 102 determines that the overall confidence is below the first threshold, processing proceeds in the manner of the prior art. This technical scheme can still establish sufficient depth information on the depth map for low-confidence or transparent objects in the scene.
A second embodiment of the present disclosure relates to an optimization system for a time-of-flight sensor, whose structure is shown in fig. 2. The system includes:
a time-of-flight sensor.
A scene recognition model, which is a machine learning model for performing scene type recognition according to input characteristic parameters. Optionally, in one embodiment, the scene types may include indoor, outdoor, bright, dim, far, near, and so on.
An overall confidence judgment module, for obtaining the overall confidence of the detection result of the time-of-flight sensor, extracting the characteristic parameters of the current scene if the overall confidence is lower than a preset first threshold, and inputting the extracted characteristic parameters into the pre-trained scene recognition model. In one embodiment, the characteristic parameters of the current scene may include a confidence histogram of the detection result and the edges, corner points, blobs, etc. extracted from a color map captured by the time-of-flight sensor.
A modulation frequency setting module, for determining a new modulation frequency according to the scene type output by the scene recognition model and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.
Optionally, in one embodiment, the new modulation frequency may be determined from the scene type output by the scene recognition model as follows:
using a first modulation frequency for an indoor scene type and a second modulation frequency for an outdoor scene type, wherein the first modulation frequency is greater than the second modulation frequency;
using a third modulation frequency for a near-distance scene type and a fourth modulation frequency for a far-distance scene type, wherein the third modulation frequency is greater than the fourth modulation frequency;
using a fifth modulation frequency for a dim-light scene type and a sixth modulation frequency for a bright-light scene type, wherein the fifth modulation frequency is greater than the sixth modulation frequency.
Optionally, in one embodiment, in order to handle black or transparent objects effectively, the optimization system of the time-of-flight sensor may further include:
A color-depth fusion map module, for fusing the depth map and the color map detected by the time-of-flight sensor when the overall confidence is higher than the first threshold, to obtain a color-depth fusion map. The depth map is obtained from the phase information detected by the time-of-flight sensor.
An object recognition and segmentation module, for performing object recognition and object segmentation in the color-depth fusion map to obtain at least one object region. Optionally, in one embodiment, object recognition and object segmentation may be performed in the color-depth fusion map as follows: identify the surface normals, occlusion boundaries, and transparent-object surface occlusions with a convolutional deep neural network to obtain a surface normal map, an object boundary map, and a transparent object map; then perform object recognition and object segmentation according to these maps, and segment the region where each recognized object is located from the color-depth fusion map to serve as an object region.
A local confidence threshold adjustment module, for adjusting, for each object region, the local confidence threshold of the object region so as to retain as much of its depth information as possible if the local confidence of the object region is lower than a preset second threshold and its color darkness exceeds a preset third threshold.
The first embodiment is a method embodiment corresponding to the present embodiment, and the technical details in the first embodiment may be applied to the present embodiment, and the technical details in the present embodiment may also be applied to the first embodiment.
It should be noted that, as will be understood by those skilled in the art, the implementation functions of the modules shown in the embodiment of the optimization system of the time-of-flight sensor described above can be understood by referring to the related description of the optimization method of the time-of-flight sensor. The functions of the modules shown in the above-described embodiment of the time-of-flight sensor optimization system may be implemented by a program (executable instructions) running on a processor, or may be implemented by specific logic circuits. The time-of-flight sensor optimization system according to the embodiments of the present disclosure may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present disclosure also provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the method embodiments of the present disclosure. Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable storage media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Additionally, embodiments of the present disclosure also provide a system for optimizing a time-of-flight sensor, comprising a memory for storing computer-executable instructions, and a processor; the processor is configured to implement the steps of the method embodiments described above when executing the computer-executable instructions in the memory. The Processor may be a Central Processing Unit (CPU), other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. The aforementioned memory may be a read-only memory (ROM), a Random Access Memory (RAM), a Flash memory (Flash), a hard disk, or a solid state disk. The steps of the method disclosed in the embodiments of the present invention may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
It is noted that, in the present disclosure, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. In the present disclosure, if it is said that an action is performed according to an element, this means the action is performed at least according to that element, covering two cases: performing the action based only on that element, and performing the action based on that element together with other elements. Expressions such as "a plurality of" mean two or more.
This specification includes combinations of the various embodiments described herein. Separate references to embodiments (e.g., "one embodiment", "some embodiments", or "a preferred embodiment") do not mean that the embodiments are mutually exclusive, unless indicated as mutually exclusive or as would be apparent to one of ordinary skill in the art. It should be noted that the term "or" is used in this specification in a non-exclusive sense unless the context clearly dictates otherwise.
All documents mentioned in this specification are considered to be incorporated in the disclosure of the present application in their entirety, so that they can serve as a basis for modification where necessary. It should be understood that the above description covers only preferred embodiments of the present disclosure and is not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of the present disclosure shall fall within the scope of protection of those embodiments.
In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.

Claims (11)

1. A method of optimizing a time-of-flight sensor, comprising:
detecting with the time-of-flight sensor and obtaining an overall confidence of the detection result;
if the overall confidence is lower than a preset first threshold, extracting characteristic parameters of the current scene, inputting the extracted characteristic parameters into a pre-trained machine learning model for scene type recognition, determining a new modulation frequency according to the scene type output by the machine learning model, and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.
2. The method for optimizing a time-of-flight sensor according to claim 1, wherein after detecting with the time-of-flight sensor and obtaining the overall confidence of the detection result, the method further comprises:
if the overall confidence is higher than the first threshold, fusing the depth map and the color map detected by the time-of-flight sensor to obtain a color-depth fusion map;
performing object recognition and object segmentation in the color-depth fusion map to obtain at least one object region;
for each object region, if the local confidence of the object region is lower than a preset second threshold and its color darkness exceeds a preset third threshold, adjusting the local confidence threshold of the object region so as to retain as much of the region's depth information as possible.
3. The method of optimizing a time-of-flight sensor of claim 2, wherein the depth map is obtained from phase information detected by the time-of-flight sensor.
4. The method for optimizing a time-of-flight sensor according to claim 2, wherein performing object recognition and object segmentation in the color-depth fusion map further comprises:
identifying surface normals, occlusion boundaries, and the surface occlusions of transparent objects with a convolutional deep neural network to obtain a surface normal map, an object boundary map, and a transparent object map;
performing object recognition and object segmentation according to the surface normal map, the object boundary map, and the transparent object map, and segmenting the region where each recognized object is located from the color-depth fusion map to serve as the object region.
5. The method for optimizing a time-of-flight sensor according to claim 1, wherein the characteristic parameters of the current scene include one of the following or any combination thereof:
a confidence histogram of the detection result, and edges, corner points and blobs extracted from a color image captured by the time-of-flight sensor.
6. The method of optimizing a time-of-flight sensor of any one of claims 1 to 5, wherein the scene type comprises one of the following or any combination thereof:
indoor, outdoor, bright light, dim light, far distance and near distance.
7. The method of claim 6, wherein determining a new modulation frequency based on the scene type output by the machine learning model further comprises:
using a first modulation frequency for an indoor scene type and a second modulation frequency for an outdoor scene type, wherein the first modulation frequency is greater than the second modulation frequency;
using a third modulation frequency for a near-distance scene type and a fourth modulation frequency for a far-distance scene type, wherein the third modulation frequency is greater than the fourth modulation frequency;
using a fifth modulation frequency for a dim-light scene type and a sixth modulation frequency for a bright-light scene type, wherein the fifth modulation frequency is greater than the sixth modulation frequency.
8. A system for optimizing a time-of-flight sensor, comprising:
a time-of-flight sensor;
a scene recognition model, which is a machine learning model for performing scene type recognition according to input characteristic parameters;
an overall confidence judgment module, for obtaining the overall confidence of the detection result of the time-of-flight sensor, extracting the characteristic parameters of the current scene if the overall confidence is lower than a preset first threshold, and inputting the extracted characteristic parameters into the pre-trained scene recognition model;
a modulation frequency setting module, for determining a new modulation frequency according to the scene type output by the scene recognition model and updating the modulation frequency of the time-of-flight sensor with the new modulation frequency.
9. The system for optimizing a time-of-flight sensor of claim 8, further comprising:
a color-depth fusion map module, for fusing the depth map and the color map detected by the time-of-flight sensor when the overall confidence is higher than the first threshold, to obtain a color-depth fusion map;
an object recognition and segmentation module, for performing object recognition and object segmentation in the color-depth fusion map to obtain at least one object region;
a local confidence threshold adjustment module, for adjusting, for each object region, the local confidence threshold of the object region so as to retain as much of its depth information as possible if the local confidence of the object region is lower than a preset second threshold and its color darkness exceeds a preset third threshold.
10. A system for optimizing a time-of-flight sensor, comprising:
a memory for storing computer-executable instructions; and
a processor, coupled with the memory, for implementing the steps in the method of any of claims 1-7 when executing the computer-executable instructions.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the steps in the method of any one of claims 1 to 7.
CN202010904873.0A (filed 2020-09-01, priority 2020-09-01): Optimization method and system of time-of-flight sensor; published as CN114114289A (status: Pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010904873.0A CN114114289A (en) 2020-09-01 2020-09-01 Optimization method and system of time-of-flight sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010904873.0A CN114114289A (en) 2020-09-01 2020-09-01 Optimization method and system of time-of-flight sensor

Publications (1)

Publication Number Publication Date
CN114114289A true CN114114289A (en) 2022-03-01

Family

ID=80360656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010904873.0A Pending CN114114289A (en) 2020-09-01 2020-09-01 Optimization method and system of time-of-flight sensor

Country Status (1)

Country Link
CN (1) CN114114289A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination