CN116068578A - Depth camera module and robot with contamination detection function - Google Patents

Depth camera module and robot with contamination detection function

Info

Publication number
CN116068578A
Authority
CN
China
Prior art keywords
camera module
depth camera
area
depth
structured light
Prior art date
Legal status
Pending
Application number
CN202310023840.9A
Other languages
Chinese (zh)
Inventor
胡涛
朱力
吕方璐
汪博
Current Assignee
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd filed Critical Shenzhen Guangjian Technology Co Ltd
Priority to CN202310023840.9A
Publication of CN116068578A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating
    • G01S2007/4975 Means for monitoring or calibrating of sensor obstruction by, e.g. dirt- or ice-coating, e.g. by reflection measurement on front-screen
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Abstract

A depth camera module with a contamination detection function comprises a laser emitter for switchably emitting structured light or floodlight; a receiver for receiving the reflected structured-light or floodlight signal; and a controller for controlling the laser emitter to emit structured light and floodlight in a certain ratio. The receiver receives a first signal from the structured light and a second signal from the adjacent floodlight, from which a first depth map and a second depth map are respectively generated. A first area N1 with depth values smaller than N is obtained on the first depth map, a second area N2 with depth values smaller than N is obtained on the second depth map, and whether the lens is contaminated is judged according to the ratio of the average signal intensities over the first area N1 and the second area N2. The invention requires no additional device, can perform contamination detection at any time without affecting data acquisition, and is highly efficient and convenient.

Description

Depth camera module and robot with contamination detection function
Technical Field
The invention relates to the technical field of depth cameras, and in particular to a depth camera module with a contamination detection function and a robot.
Background
The depth camera serves as the eyes of many artificial intelligence devices, such as robots, and is essential to the effective operation of various intelligent devices. Owing to how depth cameras are used, they are often degraded by external environmental influences such as water stains, dust, and scratches. When the lens of the depth camera is outermost, that lens is easily damaged; when a protective lens covers the depth camera, the protective lens is easily damaged. In either case, damage to the lens affects the data acquired by the depth camera, so contamination needs to be recognized in time.
Additional detection devices are often employed in the prior art to detect contamination of the depth camera lens or protective lens.
For example, one invention discloses a smudge monitoring system for a transmitting module, a depth camera, an intelligent terminal, a smudge detection method, and a computer-readable storage medium. The smudge monitoring system comprises a light source, an optical element, a detecting element, and a processor: the light source emits light signals; the optical element is positioned on the projection path of the light source; the detecting element generates a photocurrent after receiving the light signals reflected by the optical element; and the processor calculates the reflectivity of the optical element for the light signals from the photocurrent. When the reflectivity of the optical element is greater than a first preset threshold, the optical element is judged to be smudged.
However, these techniques are complicated to operate, require additional devices, and are difficult to apply widely in practice.
The foregoing background is provided only to aid understanding of the inventive concept and technical solution of the present invention. It does not necessarily constitute prior art to the present application and, absent clear evidence that the above content was disclosed before the filing date of the present application, shall not be used to evaluate the novelty and inventiveness of the present application.
Disclosure of Invention
Therefore, for a depth camera that can switch between projecting structured light and floodlight, the invention screens a first area and a second area using depth values and judges whether the lens is contaminated according to the ratio of the average signal intensities of the first area and the second area.
In a first aspect, the present invention provides a depth camera module having a contamination detection function, comprising a laser emitter, a receiver, and a controller;
the laser emitter is used for switchably emitting structured light or floodlight;
the receiver is used for receiving the reflected signal of the structured light or the floodlight;
the controller is used for controlling the laser emitter to emit structured light and floodlight in a certain ratio; the receiver receives a first signal from the structured light and a second signal from the adjacent floodlight, from which a first depth map and a second depth map are respectively generated; a first area N1 with depth values smaller than N is obtained on the first depth map, and a second area N2 with depth values smaller than N is obtained on the second depth map; whether the lens is contaminated is judged according to the ratio of the average signal intensities over the first area N1 and the second area N2, where N is a preset value.
Optionally, in the depth camera module with the contamination detection function, N is positively correlated with the error value n of the depth camera module at the lens.
Optionally, in the depth camera module with the contamination detection function, the error values at the lens differ depending on whether the laser emitter emits structured light or floodlight, and the error value n takes the larger of the two.
Optionally, in the depth camera module with the contamination detection function, N = 3n.
Optionally, in the depth camera module with the contamination detection function, if contamination is detected, specific features of the contamination are further detected on the second depth map.
Optionally, in the depth camera module with the contamination detection function, a water-stain model, a scratch model, and a particle model are used in sequence to detect the contamination.
Optionally, in the depth camera module with the contamination detection function, the average signal intensity of the first area N1 is the total intensity of the structured light beams projected onto the first area N1 divided by the area of the first area N1 illuminated by the structured light.
Optionally, in the depth camera module with the contamination detection function, the average signal intensity of the first area N1 is M1 and the average signal intensity of the second area N2 is M2; if M1/M2 > σ, the lens is judged to be contaminated, where σ is an attenuation coefficient associated with the lens.
In a second aspect, the present invention provides a robot comprising the depth camera module with a contamination detection function described in the first aspect.
Optionally, the robot further comprises a distance sensor; the distance sensor is arranged adjacent to, and facing the same direction as, the depth camera module with the contamination detection function, and when the distance sensor detects a close-range object, the depth camera module does not judge that contamination is present.
Compared with the prior art, the invention has the following beneficial effects:
the invention uses the signals acquired by the depth camera module to detect, thus obtaining dirty data information, and does not need additional devices or equipment to detect, thus having the characteristics of being very convenient and fast, and simultaneously being capable of being synchronously carried out with normal data acquisition and improving the efficiency.
For a depth camera that can switch between projecting structured light and floodlight, the invention screens the first area and the second area using depth values, which greatly reduces the region to be judged, and determines whether the lens is contaminated according to the ratio of the average signal intensities of the first area and the second area. This helps obtain a detection result quickly and enables routine real-time detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort. Other features, objects, and advantages of the present invention will become more apparent from reading the detailed description of the non-limiting embodiments with reference to the accompanying drawings, in which:
fig. 1 is a schematic structural diagram of a depth camera module with a contamination detection function according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a laser emitter according to an embodiment of the present invention;
fig. 3 is a schematic diagram of another laser emitter according to an embodiment of the present invention;
fig. 4 is a schematic view of contamination according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by those skilled in the art without departing from the inventive concept; these all fall within the scope of the present invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the invention provide a depth camera module with a contamination detection function and a robot, aiming to solve the above problems in the prior art.
The technical solution of the present invention, and how it solves the above technical problems, is described in detail below through specific embodiments. The following embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention are described below with reference to the accompanying drawings.
For a depth camera that can switch between projecting structured light and floodlight, the invention screens a first area and a second area using depth values and judges whether the lens is contaminated according to the ratio of the average signal intensities of the first area and the second area.
Fig. 1 is a schematic structural diagram of a depth camera module with a contamination detection function according to an embodiment of the invention. As shown in fig. 1, the depth camera module according to the embodiment of the present invention includes a laser emitter 1, a receiver 2, and a controller 3.
A laser emitter 1 for switchably emitting structured light or floodlight.
Specifically, the laser emitter 1 can switch rapidly between structured light and floodlight and can project structured light and floodlight in a certain ratio within a preset period. For example, within 1 s, structured light and floodlight may be projected 10 times and 5 times respectively, a ratio of 2:1. The structure of the laser emitter 1 may vary, as long as switching between structured light and floodlight can be achieved.
In some embodiments, as shown in fig. 2, the laser emitter 1 includes a structured light projector 101 and a modulation sheet 102. The structured light projector 101 projects a structured light beam, which is directed perpendicularly at the modulation sheet 102. The modulation sheet 102 is switched by an applied voltage between a light-transmitting state and a frosted state. When the modulation sheet 102 is in the light-transmitting state, the structured light beam passes through it unchanged and exits as structured light. When the modulation sheet 102 is in the frosted state, the beam is diffused as it passes through and exits as floodlight. In this embodiment, the intensity of the floodlight is lower than that of the structured light. By controlling the voltage applied to the modulation sheet 102, the laser emitter 1 can switch rapidly between structured light and floodlight.
In some embodiments, as shown in fig. 3, the laser emitter 1 includes a structured light projector 101, a lens 103, and a diffractive optical element 104. The structured light projector 101 projects a structured light beam. The lens 103 refracts the beam emitted by the structured light projector 101 and projects it onto the diffractive optical element 104, which diffracts the incident beam to form structured light or floodlight. By adjusting the relative distance between the lens 103 and the diffractive optical element 104, and thus where the focal plane of the lens 103 lies relative to the diffractive optical element 104, the beam exiting the diffractive optical element 104 can be made either structured light or floodlight. During adjustment, the position of the lens 103 or of the diffractive optical element 104 may be moved individually, or both may be moved simultaneously.
A receiver 2 for receiving the reflected signal of the structured light or the floodlight.
Specifically, the receiver 2 is a TOF receiver that can receive both floodlight signals and structured light signals. The signal received by the receiver 2 includes the signal reflected from the target area. If there is contamination on the lens, the receiver 2 also receives the signal reflected by that contamination. However, because the lens is very close to the receiver 2, the error of the depth data obtained at the lens is large, and it is difficult to identify contamination directly from the depth data.
The controller 3 is used for controlling the laser emitter to emit structured light and floodlight in a certain ratio; the receiver receives a first signal from the structured light and a second signal from the adjacent floodlight, from which a first depth map and a second depth map are respectively generated; a first area N1 with depth values smaller than N is obtained on the first depth map, and a second area N2 with depth values smaller than N is obtained on the second depth map; whether the lens is contaminated is judged according to the ratio of the average signal intensities over the first area N1 and the second area N2, where N is a preset value.
Specifically, the ratio of structured light to floodlight projected by the laser emitter 1 may be any ratio, such as 1:1, 1:2, 1:3, 2:1, or 3:1. When the ratio of structured light to floodlight is 1:1, the images obtained from adjacent structured-light and floodlight frames are used to determine whether the lens is contaminated. When the ratio is not 1:1, taking 2:1 as an example, each floodlight frame is preceded and followed by a structured-light frame, and each floodlight frame is evaluated against both the preceding and the following structured-light frame to obtain the contamination condition.
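As an illustrative sketch of that pairing in Python, assuming a 2:1 schedule and the frame labels "S" (structured light) and "F" (floodlight), which are our own notation rather than anything defined in the text:

    def make_schedule(cycles: int):
        """Interleave frames at a 2:1 ratio so every floodlight frame
        is flanked by structured-light frames: S, F, S, S, F, S, ..."""
        return ["S", "F", "S"] * cycles

    def pair_flood_frames(schedule):
        """For each floodlight frame, find the structured-light frames
        immediately before and after it; both pairings are evaluated."""
        pairs = []
        for i, kind in enumerate(schedule):
            if kind != "F":
                continue
            prev_s = next((j for j in range(i - 1, -1, -1) if schedule[j] == "S"), None)
            next_s = next((j for j in range(i + 1, len(schedule)) if schedule[j] == "S"), None)
            if prev_s is not None and next_s is not None:
                pairs.append((prev_s, i, next_s))
        return pairs

    # pair_flood_frames(make_schedule(2)) -> [(0, 1, 2), (3, 4, 5)]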
N is positively correlated with the error value n of the depth camera module at the lens. Since the laser emitter 1 can project both structured light and floodlight, and the two differ in light intensity and in measurement principle, their error values at the lens also differ; the error value n therefore takes the larger of the two. To identify contamination reliably while never misjudging a genuine target object during normal measurement as contamination, N should satisfy 5n > N > 2n. Tests across different types of depth cameras show that N = 3n suits most depth camera modules.
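As a worked numeric sketch of choosing N (the millimetre figures are assumed for illustration only, not taken from the text):

    n_structured = 4.0  # assumed depth error at the lens under structured light (mm)
    n_flood = 6.0       # assumed depth error at the lens under floodlight (mm)

    n = max(n_structured, n_flood)  # take the larger of the two error values
    N = 3 * n                       # N = 3n, here 18.0 mm
    assert 2 * n < N < 5 * n        # N = 3n satisfies the 5n > N > 2n constraint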
On the first depth map and the second depth map, the depth threshold N filters out objects far from the lens and keeps objects near the lens, among which contamination is then identified. Contamination appears in the same region of the first area N1 and the second area N2. Besides contamination, however, the first area N1 and the second area N2 may also contain genuine objects close to the lens. Since contamination sits on the lens itself, its signal behaves differently from the depth values of other objects, so it can be identified by the ratio of the average signal intensities over the first area N1 and the second area N2. The average signal intensity of the first area N1 is the total intensity of the structured light projected onto the first area N1 divided by the area of the first area N1 actually illuminated by the structured light. When the ratio of the average signal intensities of the first area N1 and the second area N2 is greater than a, contamination is judged to be present; a is a preset value related to the beam density of the structured light, and the greater the beam density, the greater a.
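A minimal Python sketch of this decision, assuming the depth maps and per-pixel signal intensities are available as NumPy arrays (the array names, the function signature, and the use of a plain mask mean in place of the illuminated-area average are our assumptions):

    import numpy as np

    def lens_contaminated(depth_s, intensity_s, depth_f, intensity_f,
                          N: float, a: float) -> bool:
        """Ratio test over one structured-light frame and one adjacent
        floodlight frame, as described above."""
        area1 = depth_s < N  # first area N1: depth values below N on the first map
        area2 = depth_f < N  # second area N2: depth values below N on the second map
        if not area1.any() or not area2.any():
            return False     # nothing near the lens, so nothing to judge

        # Simplification: the text divides the total structured-light intensity
        # by the area actually illuminated by the beams; a plain mean over the
        # thresholded mask stands in for that here.
        M1 = intensity_s[area1].mean()
        M2 = intensity_f[area2].mean()
        return M1 / M2 > a   # a: preset threshold tied to the beam density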
In some embodiments, if contamination is detected, its specific features are further detected on the second depth map. Further detection yields more attributes of the contamination, such as its area and shape. From the contaminated area, it can be decided whether cleaning is needed, or the user can be reminded to clean the lens. From the shape, the type of contamination can be determined, so that an onboard cleaning device can remove it or the user can be advised which cleaning method to use.
In some embodiments, a water-stain model, a scratch model, and a particle model are applied in sequence to detect contamination, and the particle model does not examine areas already detected by the water-stain and scratch models. Fig. 4 shows the three main forms of contamination encountered by a cleaning robot: water stains, scratches, and particles. Water stains are mainly produced by dirty water and show a certain regional character. Scratches mainly result from collisions and the like. Particles are mainly caused by dust; a particle in this embodiment may be a single grain or a connected region of several grains. Machine learning is applied separately to water stains, scratches, and particles to obtain the water-stain model, the scratch model, and the particle model, improving recognition. During detection, the water-stain model runs first and the scratch model second; both scan the entire image. The particle model scans only the areas where no contamination has yet been identified, to improve efficiency.
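A sketch of that ordering, assuming each model is a callable that returns a boolean detection mask (e.g. a NumPy boolean array); the model interfaces below are placeholders of ours, not APIs defined by the text:

    def classify_contamination(image, water_stain_model, scratch_model, particle_model):
        """Run the three models in sequence: water stains and scratches scan
        the whole image; particles scan only the still-unclaimed areas."""
        water_mask = water_stain_model(image)     # whole image
        scratch_mask = scratch_model(image)       # whole image
        unclaimed = ~(water_mask | scratch_mask)  # regions with no contamination yet
        particle_mask = particle_model(image) & unclaimed
        return {"water": water_mask, "scratch": scratch_mask, "particle": particle_mask}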
In some embodiments, the average signal intensity of the first area N1 is M1 and the average signal intensity of the second area N2 is M2; if M1/M2 > σ,
the lens is judged to be contaminated, where σ is an attenuation coefficient associated with the lens. The value of N in this embodiment is the same as in the previous embodiment. The first area N1 and the second area N2 may each be continuous or discontinuous, and their areas may be large or small; when the contamination consists of particles, the areas of the first area N1 and the second area N2 are smallest. σ is related not only to the parameters of the lens itself but also to the position of the lens relative to the laser emitter 1 and the receiver 2.
Fig. 5 is a schematic structural diagram of a robot according to an embodiment of the present invention. The robot shown in fig. 5 includes a depth camera 601, a robot body 602, and a display screen 603. The depth camera 601 is a depth camera module with a contamination detection function as described in any of the above embodiments; it captures the scene in front of the robot so that three-dimensional information can be acquired. Typically, the depth camera 601 also includes an RGB camera, whose RGB images can be combined with the depth images to generate RGBD images. The robot body 602 may take different forms for different robot types: for example, a meal-delivery robot may have a tray, a bracket, driving wheels, and so on, while a greeting robot may have driving wheels, a manikin, and so on. The display screen 603 displays information for interaction with the user and may be one-way or two-way. Because a highly integrated, automatically switching depth camera is adopted, the robot can accommodate more sensors in the same size and thus gains better environment-sensing capability; alternatively, the robot can be made smaller to suit more size-critical scenarios. The automatically switching depth camera can also set different trigger distances according to conditions in the monitored range, switching itself on at close range and off at long range to save energy.
In some embodiments, a distance sensor 604 is also included. The distance sensor is arranged adjacent to, and facing the same direction as, the depth camera module with the contamination detection function, and when the distance sensor detects a close-range object, the depth camera module does not judge that contamination is present. The distance sensor 604 detects close-range target objects and thereby prevents the depth camera module from mistaking a close-range target for contamination, further improving recognition accuracy. Only one distance sensor 604 needs to be provided, adjacent to the depth camera module, which improves stability and reliability. The distance between the distance sensor 604 and the edge of the depth camera module should not exceed 1 cm.
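A minimal sketch of that gating logic, with an assumed near-field cut-off (the 5 cm figure and the names are illustrative, not from the text):

    CLOSE_RANGE_M = 0.05  # assumed near-field cut-off for the distance sensor (m)

    def gated_judgment(ratio_test_positive: bool, distance_m: float) -> bool:
        """Suppress the contamination judgment while the adjacent distance
        sensor reports a close-range object."""
        if distance_m < CLOSE_RANGE_M:
            return False  # a real nearby object explains the near-lens signal
        return ratio_test_positive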
In the present specification, the embodiments are described progressively, each embodiment focusing on its differences from the others; for the identical or similar parts between embodiments, reference may be made to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.

Claims (10)

1. A depth camera module with a contamination detection function, characterized by comprising a laser emitter, a receiver, and a controller;
the laser emitter is used for switchably emitting structured light or floodlight;
the receiver is used for receiving the reflected signal of the structured light or the floodlight;
the controller is used for controlling the laser emitter to emit structured light and floodlight in a certain ratio; the receiver receives a first signal from the structured light and a second signal from the adjacent floodlight, from which a first depth map and a second depth map are respectively generated; a first area N1 with depth values smaller than N is obtained on the first depth map, and a second area N2 with depth values smaller than N is obtained on the second depth map; whether the lens is contaminated is judged according to the ratio of the average signal intensities over the first area N1 and the second area N2, wherein N is a preset value.
2. The depth camera module with a contamination detection function according to claim 1, wherein N is positively correlated with the error value n of the depth camera module at the lens.
3. The depth camera module with a contamination detection function according to claim 2, wherein, when the laser emitter emits structured light and when it emits floodlight, the error values at the lens differ, and the error value n is the larger of the two.
4. The depth camera module with a contamination detection function according to claim 1, wherein N = 3n.
5. The depth camera module with a contamination detection function according to claim 1, wherein, if contamination is detected, specific features of the contamination are detected on the second depth map.
6. The depth camera module with a contamination detection function according to claim 5, wherein a water-stain model, a scratch model, and a particle model are used in sequence to detect the contamination.
7. The depth camera module with a contamination detection function according to claim 1, wherein the average signal intensity of the first area N1 is the total intensity of the structured light beams projected onto the first area N1 divided by the area of the first area N1 illuminated by the structured light.
8. The depth camera module with a contamination detection function according to claim 1, wherein the average signal intensity of the first area N1 is M1 and the average signal intensity of the second area N2 is M2; if M1/M2 > σ, the lens is judged to be contaminated, where σ is an attenuation coefficient associated with the lens.
9. A robot, characterized by comprising the depth camera module with a contamination detection function according to any one of claims 1 to 8.
10. The robot according to claim 9, characterized by further comprising a distance sensor, wherein the distance sensor is arranged adjacent to, and facing the same direction as, the depth camera module with the contamination detection function, and when the distance sensor detects a close-range object, the depth camera module does not judge that contamination is present.
CN202310023840.9A 2023-01-09 2023-01-09 Depth camera module and robot with contamination detection function Pending CN116068578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310023840.9A 2023-01-09 2023-01-09 Depth camera module and robot with contamination detection function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310023840.9A 2023-01-09 2023-01-09 Depth camera module and robot with contamination detection function

Publications (1)

Publication Number Publication Date
CN116068578A 2023-05-05

Family

ID=86174369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310023840.9A Depth camera module and robot with contamination detection function 2023-01-09 2023-01-09 Pending

Country Status (1)

Country Link
CN (1) CN116068578A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination