CN115989973A - Sweeping robot and depth camera - Google Patents

Sweeping robot and depth camera

Info

Publication number
CN115989973A
Authority
CN
China
Prior art keywords
structure light
light
lattice structure
area
lattice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111212690.3A
Other languages
Chinese (zh)
Inventor
黄龙祥
黄瑞彬
杨煦
汪博
朱力
吕方璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd filed Critical Shenzhen Guangjian Technology Co Ltd
Priority to CN202111212690.3A
Publication of CN115989973A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00: Energy generation through renewable energy sources
    • Y02E10/50: Photovoltaic [PV] energy

Landscapes

  • Measurement Of Optical Distance (AREA)

Abstract

The invention provides a sweeping robot and a depth camera. The sweeping robot comprises a robot body, a depth camera and a controller module, the depth camera being arranged on a side surface of the robot body. The depth camera includes a light projector and a light receiver. The light projector is used for projecting first lattice structure light, second lattice structure light and linear array structure light to a target scene, wherein the power density of each light beam in the first lattice structure light is greater than that of each light beam in the second lattice structure light. The light receiver is used for receiving the first lattice structure light, the second lattice structure light and the linear array structure light reflected by any object in the target scene, and for generating first depth information according to the first lattice structure light, second depth information according to the second lattice structure light and third depth information according to the linear array structure light. The controller module is used for carrying out instant positioning and map construction according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information. The invention reduces the complexity of the product, reduces the manufacturing cost of the product and is convenient for popularization and application of the product.

Description

Sweeping robot and depth camera
Technical Field
The invention relates to intelligent equipment, in particular to a sweeping robot and a depth camera.
Background
A sweeping robot is a kind of intelligent household appliance that can automatically complete floor cleaning in a room with a certain degree of artificial intelligence. It generally works in a brushing and vacuuming mode: floor debris is first sucked into its own garbage storage box, thereby completing the floor-cleaning function.
A sweeping robot in the prior art generally performs path planning and map drawing through an LDS (laser distance sensor) lidar arranged on its top, and avoids obstacles through a camera arranged at its front end. However, path planning and mapping by LDS has at least two drawbacks: the lidar rotates frequently and is easily damaged, and it cannot detect objects with high reflectivity such as floor-to-ceiling windows, full-length mirrors and vases. Meanwhile, two sets of devices are needed for the path-planning and obstacle-avoidance functions, which increases the complexity of the product, increases the manufacturing cost of the product, and is not conducive to its popularization and application.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a sweeping robot and a depth camera.
The invention provides a sweeping robot, which comprises a robot body, a depth camera and a controller module, wherein the depth camera is connected with the robot body; the depth camera is arranged on the side face of the robot body;
the depth camera includes a light projector and a light receiver;
the light projector is used for projecting first lattice structure light, second lattice structure light and linear array structure light to a target scene, and the power density of each light beam in the first lattice structure light is greater than that of each light beam in the second lattice structure light;
the light receiver is used for receiving the first lattice structure light, the second lattice structure light and the linear array structure light after being reflected by any object in the target scene, generating first depth information according to the first lattice structure light, generating second depth information according to the second lattice structure light and generating third depth information according to the linear array structure light;
the controller module is used for performing instant positioning and map construction according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information.
Preferably, the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern is located at the upper side of the linear array pattern.
Preferably, the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern and the linear array pattern is located at an upper side of the second lattice pattern.
Preferably, the light spot density of the second lattice structure light is greater than the light spot density of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern, and the second lattice structure light forms a dense lattice pattern.
Preferably, the linear array structure light comprises a plurality of light beams in a linear shape;
and the plurality of linear light beams are obliquely arranged, and the perpendicular projections of two adjacent linear light beams onto the width direction of the field of view have overlapping areas.
Preferably, the light receiver is configured to generate first depth information according to a transmission time or a phase difference of the first lattice structure light, generate second depth information according to a transmission time or a phase difference of the second lattice structure light, and generate third depth information according to a spot image formed by the linear array structure light.
Preferably, the light projector comprises a first laser module and a first projection lens;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structure light, the second point laser array group is used for projecting second point array structure light, and the line laser array group is used for projecting line array structure light;
the first projection lens is arranged on the light emitting side of the first laser module and comprises a first area, a second area and a third area, wherein the first area is arranged between the second area and the third area; the first area receives and projects the first lattice structure light, the second area receives and projects the second lattice structure light, and the third area receives and projects the linear array structure light.
Preferably, the light projector comprises a second laser module, a beam splitter device and a second projection lens;
the second laser module is used for projecting laser beams;
the beam splitting device comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form first lattice structure light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form second lattice structure light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form linear array structure light;
the second projection lens is arranged on the light emitting side of the beam splitting device and comprises a first area, a second area and a third area, wherein the first area is arranged between the second area and the third area; the first area is used for receiving and projecting the first lattice structure light, the second area is used for receiving and projecting the second lattice structure light, and the third area is used for receiving and projecting the linear array structure light.
Preferably, the field angle of the depth camera is between 100 ° and 110 °.
The depth camera provided by the invention comprises a light projector and a light receiver;
the light projector is used for projecting first lattice structure light, second lattice structure light and linear array structure light to a target scene, and the power density of each light beam in the first lattice structure light is greater than that of each light beam in the second lattice structure light;
the light receiver is configured to receive the first lattice structure light, the second lattice structure light and the linear array structure light after being reflected by any object in the target scene, generate first depth information according to the first lattice structure light, generate second depth information according to the second lattice structure light, and generate third depth information according to the linear array structure light.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the sweeping robot is loaded with the depth camera, the light projector of the depth camera is used for projecting the first lattice structure light, the second lattice structure light and the linear array structure light to the target scene, the light receiver can receive the first lattice structure light, the second lattice structure light and the linear array structure light after being reflected by any object in the target scene, the first depth information is generated according to the lattice structure light with higher power density, the second depth information is generated according to the second lattice structure light with lower power density, the third depth information is generated according to the linear array structure light, the controller module can irradiate the first depth information generated by the first lattice structure light with a longer distance to perform instant positioning and map construction, the object recognition is performed according to the second depth information generated by the second lattice structure light with a shorter distance and a denser distance, the obstacle avoidance is performed according to the type of the object, the third depth information generated by the linear array structure light is performed on the long-shaped object, such as a table leg, a wire and the like obstacle avoidance scope, the object detection of the whole range of the angle of view is convenient, the object detection of the strip-shaped object is avoided, the realization of a person who can conveniently realize the realization of the sweeping robot through the depth module and the realization of the complex obstacle avoidance and the product, the realization of the product is convenient to reduce the popularization and the product, and the cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from the provided drawings by a person skilled in the art without inventive effort. Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments given with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of a working principle of a sweeping robot in an embodiment of the present invention;
FIG. 2 is a schematic view of a light field of a depth camera according to an embodiment of the present invention;
FIG. 3 is a schematic view of another light field of a depth camera according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a depth camera according to an embodiment of the present invention; and
fig. 5 is a schematic diagram of another structure of a depth camera according to an embodiment of the invention.
In the figure: 100 is a robot body; 200 is an object; 1 is a light projector; 2 is an optical receiver; 201 is a first region; 202 is a second region; 203 is a third region; 3 is a driving circuit; 101 is an edge-emitting laser; 102 is a collimating lens; 103 is a beam splitter; 104 is a projection lens; 105 is a diffraction device; 106 is a laser array.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the present invention, and how it solves the above technical problems, is described in detail below with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a working principle of a sweeping robot according to an embodiment of the present invention, and as shown in fig. 1, the sweeping robot provided by the present invention includes a robot body 100, a depth camera, and a controller module; the depth camera is disposed on a side of the robot body 100;
the depth camera comprises a light projector 1 and a light receiver 2;
the light projector 1 is configured to project first lattice structure light, second lattice structure light, and linear array structure light toward a target scene, where the power density of each beam in the first lattice structure light is greater than the power density of each beam in the second lattice structure light;
the light receiver 2 is configured to receive the first lattice structure light, the second lattice structure light, and the linear array structure light after being reflected by any object 200 in the target scene, generate first depth information according to the first lattice structure light, generate second depth information according to the second lattice structure light, and generate third depth information according to the linear array structure light;
the controller module is used for performing instant localization and mapping (SLAM) according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information.
In the embodiment of the invention, each light beam in the first lattice structure light has a higher power density and therefore a longer projection distance, so the distribution of objects far from the sweeping robot in the room can be obtained, which facilitates instant positioning and map construction. Each light beam in the second lattice structure light has a lower power density and therefore a shorter projection distance, but the beam density is higher, so the distribution of objects 200 near the sweeping robot can be obtained and, thanks to the higher beam density, the surface profile of an object 200 can also be obtained, which facilitates object recognition and obstacle avoidance according to the type of the object. Strip-shaped objects such as table legs and electric wires are avoided according to the third depth information generated by the linear array structure light; the linear array structure light has a long extension range, which facilitates object detection over the whole range of the field of view and prevents strip-shaped objects from being missed.
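For illustration only, and not as part of the disclosed embodiments, the division of labour between the three depth streams described above can be sketched as a controller loop; the object and method names (camera, slam, obstacle_avoider and their members) are hypothetical placeholders rather than interfaces defined by the invention.

```python
# Illustrative sketch only: a hypothetical controller loop routing the three
# depth streams of the depth camera. All names are assumptions, not part of
# the patent disclosure.

def control_step(camera, slam, obstacle_avoider):
    # First depth info: sparse, high-power dots with long range ->
    # simultaneous localization and mapping.
    depth_far_sparse = camera.read_first_depth()    # from first lattice structure light
    pose, grid_map = slam.update(depth_far_sparse)

    # Second depth info: dense, low-power dots with short range ->
    # surface profile of nearby objects, used for object recognition.
    depth_near_dense = camera.read_second_depth()   # from second lattice structure light
    obstacles = obstacle_avoider.classify(depth_near_dense)

    # Third depth info: line-array structured light -> thin, strip-shaped
    # objects (table legs, wires) across the full field of view.
    depth_lines = camera.read_third_depth()         # from linear array structure light
    obstacles += obstacle_avoider.detect_strips(depth_lines)

    # Plan motion from the pose, the map and the merged obstacle list.
    return obstacle_avoider.plan(pose, grid_map, obstacles)
```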
The line type of the linear array structure light includes, but is not limited to, any one or more of a straight line, a curve, a line segment and a broken line, and the number of lines may be one or more.
Fig. 4 is a schematic structural diagram of a depth camera according to an embodiment of the present invention, as shown in fig. 4, the light projector 1 includes a first laser module and a first projection lens 104;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structure light, the second point laser array group is used for projecting second point array structure light, and the line laser array group is used for projecting line array structure light;
the first projection lens 104 is disposed on the light emitting side of the first laser module and includes a first region 201, a second region 202 and a third region 203, where the first region 201, the second region 202 and the third region 203 are transparent regions; the first region 201 receives and projects the first lattice structure light, the second region 202 receives and projects the linear array structure light, and the third region 203 receives and projects the second lattice structure light.
In an embodiment of the present invention, the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern; the light spot density of the second lattice structure light is greater than that of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern and the second lattice structure light forms a dense lattice pattern.
The first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern, as shown in fig. 2.
In another embodiment of the present invention, the light projector 1 includes a first laser module and a first projection lens 104;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting the first point array structure light, the second point laser array group is used for projecting the second point array structure light, and the line laser array group is used for projecting the line array structure light;
the first projection lens 104 is disposed on the light emitting side of the first laser module and includes a first area 201, a second area 202 and a third area 203, where the first area 201 is disposed between the second area 202 and the third area 203, and the first area 201, the second area 202 and the third area 203 are transparent areas; the first area 201 receives and projects the first lattice structure light, the second area 202 receives and projects the second lattice structure light, and the third area 203 receives and projects the linear array structure light.
The first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern, as shown in fig. 3.
In an embodiment of the present invention, the number of light beams in the first lattice structure light is between two and several thousand, for example 2 to 1,000 beams; the number of light beams in the second lattice structure light is between several thousand and several tens of thousands, for example 10,000 to 50,000 beams.
The first laser module may employ a laser array 106 formed of a plurality of vertical cavity surface emitting lasers (Vertical Cavity Surface Emitting Laser, VCSELs) or a plurality of edge emitting lasers (Edge Emitting Laser, EELs). The multiple laser beams can become highly parallel collimated beams after passing through the collimating lens 102, so as to realize the projection of the lattice structure light.
FIG. 5 is a schematic diagram of another structure of a depth camera according to an embodiment of the present invention, as shown in FIG. 5, the light projector 1 includes a second laser module, a beam splitter device, and a second projection lens 104;
the second laser module is used for projecting laser beams;
the beam splitting device 103 comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the first lattice structure light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the linear array structure light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the second lattice structure light;
the second projection lens 104 is disposed on the light emitting side of the beam splitting device 103 and includes a first area 201, a second area 202 and a third area 203, where the first area 201, the second area 202 and the third area 203 are transparent areas; the first area 201 receives and projects the first lattice structure light, the second area 202 receives and projects the linear array structure light, and the third area 203 receives and projects the second lattice structure light.
In an embodiment of the present invention, the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern; the light spot density of the second lattice structure light is greater than that of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern and the second lattice structure light forms a dense lattice pattern.
The first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern, as shown in fig. 2.
In one embodiment of the present invention, the light projector 1 includes a second laser module, a beam splitter, and a second projection lens 104;
the second laser module is used for projecting laser beams;
the beam splitting device 103 includes a first beam splitting area, a second beam splitting area, and a third beam splitting area, where the first beam splitting area is used to split the laser beam into multiple laser beams to form a first lattice structure light, the second beam splitting area is used to split the laser beam into multiple laser beams to form a second lattice structure light, and the third beam splitting area is used to split the laser beam into multiple laser beams to form a linear array structure light;
the second projection lens 104 is disposed on the light emitting side of the beam splitting device 103 and includes a first area 201, a second area 202 and a third area 203, where the first area 201 is disposed between the second area 202 and the third area 203, and the first area 201, the second area 202 and the third area 203 are transparent areas; the first area 201 receives and projects the first lattice structure light, the second area 202 receives and projects the second lattice structure light, and the third area 203 receives and projects the linear array structure light.
The first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern, as shown in fig. 3.
The beam splitting device 103 splits the collimated laser beam into a larger number of laser beams. The beam splitting device 103 may be a diffractive optical element (DOE) such as a diffraction grating, a waveguide device, a coded structured-light mask, a spatial light modulator (SLM), or the like.
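As background illustration only, the fan-out produced by a diffraction-grating type beam splitter can be sketched with the standard grating equation sin(theta_m) = m * lambda / period at normal incidence; the wavelength and grating period below are assumed example values, not parameters given in this disclosure.

```python
# Illustrative sketch only: emission angles of the diffraction orders of a
# grating-type (DOE) beam splitter at normal incidence. Wavelength and grating
# period are assumed example values.
import math

def fanout_angles_deg(wavelength_nm: float = 940.0,
                      period_um: float = 10.0,
                      max_order: int = 5):
    wavelength_um = wavelength_nm / 1000.0
    angles = []
    for m in range(-max_order, max_order + 1):
        s = m * wavelength_um / period_um
        if abs(s) <= 1.0:                      # keep only propagating orders
            angles.append(math.degrees(math.asin(s)))
    return angles

print(fanout_angles_deg())  # angles (degrees) of the split beams
```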
In an embodiment of the present invention, the light receiver 2 is configured to generate the first depth information according to the transmission time or phase difference of the first lattice structure light, generate the second depth information according to the transmission time or phase difference of the second lattice structure light, and generate the third depth information according to a spot image formed by the linear array structure light.
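Purely as an orientation for the reader, the three ways of generating depth named above (transmission time, phase difference, and a spot image) can be written schematically as follows; the modulation frequency, baseline and focal length are assumed example values, not parameters of the invention.

```python
# Illustrative sketch only: schematic formulas behind the three depth channels
# (direct ToF, indirect/phase ToF, and triangulation from a spot image).
# The modulation frequency, baseline and focal length are assumed values.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_time(round_trip_time_s: float) -> float:
    # Transmission-time (direct ToF): the light travels to the object and back.
    return C * round_trip_time_s / 2.0

def depth_from_phase(phase_shift_rad: float, mod_freq_hz: float = 100e6) -> float:
    # Phase-difference (indirect ToF); unambiguous only within half a
    # modulation wavelength.
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def depth_from_spot_image(disparity_px: float,
                          baseline_m: float = 0.03,
                          focal_length_px: float = 600.0) -> float:
    # Triangulation: the lateral shift of the projected line/spot in the
    # receiver image encodes distance.
    return baseline_m * focal_length_px / disparity_px
```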
The depth camera further includes a driving circuit 3, which is used to control the light projector 1 and the light receiver 2 to be turned on or off at the same time. The driving circuit 3 may be a separate dedicated circuit, such as a dedicated SoC chip, an FPGA chip or an ASIC chip, or may comprise a general-purpose processor; for example, when the depth camera is integrated into an intelligent terminal such as a sweeping robot, the processor in the terminal may serve as at least part of the driving circuit.
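A minimal sketch of the synchronization role described for the driving circuit 3 follows; the class and method names are invented for illustration and do not correspond to any interface defined in this disclosure.

```python
# Illustrative sketch only: the driving circuit turning the light projector and
# the light receiver on and off together for each exposure. Names are
# hypothetical.
import time

class DrivingCircuit:
    def __init__(self, projector, receiver):
        self.projector = projector
        self.receiver = receiver

    def capture_frame(self, exposure_s: float = 0.001):
        # Enable the projector and the receiver at the same time ...
        self.projector.on()
        self.receiver.start_exposure()
        time.sleep(exposure_s)
        # ... and disable them together when the exposure ends.
        frame = self.receiver.stop_exposure()
        self.projector.off()
        return frame
```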
The angle of view of the depth camera is preferably between 100 ° and 110 °.
The light receiver 2 comprises an optical imaging lens, a light detector array and a driving circuit 3; the light detector array comprises a plurality of light detectors distributed in an array;
the optical imaging lens is used for receiving the lattice structure light and the linear array structure light reflected by any object in the target scene and projecting the lattice structure light and the linear array structure light to the optical detector;
the light detector is used for receiving the lattice structure light and the linear array structure light;
the driving circuit 3 is configured to measure the propagation time or phase difference of the first lattice structure light to generate first depth data of the target object surface, measure the propagation time or phase difference of the second lattice structure light to generate second depth data of the target object surface, and generate third depth data of the target object surface from a spot image formed by the linear array structure light.
In order to filter out background noise, a narrow-band filter is usually further installed in the optical imaging lens, so that the light detector array only receives incident collimated light beams of a preset wavelength. The preset wavelength may be the wavelength of the incident collimated light beam, or may lie within a range from 50 nanometers below to 50 nanometers above that wavelength. The light detectors may be arranged periodically or aperiodically. Each light detector cooperates with an auxiliary circuit to measure the time of flight of the light beam. Depending on the number of discrete collimated light beams required, the light detector array may be a combination of a plurality of single-point light detectors or a sensor chip integrating a plurality of light detectors. To further improve the sensitivity of the light detectors, the illumination spot of one discrete collimated beam on the target object may correspond to one or more light detectors. When a plurality of light detectors correspond to the same illumination spot, the signals of these detectors can be connected through a circuit, so that they are combined into a light detector with a larger detection area.
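A minimal sketch of the two receiver-side ideas in the preceding paragraph, assuming a 940 nm projector wavelength and an arbitrary detector grouping (both are assumptions, not values from this disclosure):

```python
# Illustrative sketch only: narrow-band filtering around the projector
# wavelength and binning of several photodetectors per illumination spot.
# Wavelengths and groupings are assumed example values.

def passes_narrowband_filter(wavelength_nm: float,
                             center_nm: float = 940.0,
                             half_width_nm: float = 50.0) -> bool:
    # Only light within +/- 50 nm of the projected wavelength reaches the array.
    return abs(wavelength_nm - center_nm) <= half_width_nm

def binned_spot_signal(detector_signals, detectors_for_spot):
    # When one illumination spot covers several detectors, their signals are
    # combined to form one larger effective detection area.
    return sum(detector_signals[i] for i in detectors_for_spot)

# Example: one spot falls on detectors 4, 5 and 9.
signals = [0.0, 0.1, 0.0, 0.0, 0.8, 0.7, 0.0, 0.0, 0.0, 0.6]
print(binned_spot_signal(signals, detectors_for_spot=[4, 5, 9]))  # 2.1
```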
The light detector adopts a CMOS photosensor, a CCD photosensor or an SPAD photosensor.
In an embodiment of the present invention, the depth camera provided by the present invention includes a light projector 1 and a light receiver 2;
the light projector 1 is configured to project first lattice structure light, second lattice structure light, and linear array structure light toward a target scene, where the power density of each beam in the first lattice structure light is greater than the power density of each beam in the second lattice structure light;
the light receiver 2 is configured to receive the first lattice structure light, the second lattice structure light, and the linear array structure light after being reflected by any object in the target scene, generate first depth information according to the first lattice structure light, generate second depth information according to the second lattice structure light, and generate third depth information according to the linear array structure light.
According to the invention, the sweeping robot carries a depth camera. The light projector of the depth camera projects the first lattice structure light, the second lattice structure light and the linear array structure light to the target scene, and the light receiver receives the first lattice structure light, the second lattice structure light and the linear array structure light reflected by any object in the target scene, generating the first depth information according to the first lattice structure light with the higher power density, the second depth information according to the second lattice structure light with the lower power density, and the third depth information according to the linear array structure light. The controller module performs instant positioning and map construction according to the first depth information, because the first lattice structure light can illuminate objects at a longer distance; performs object recognition and type-dependent obstacle avoidance according to the second depth information, because the second lattice structure light is denser and illuminates objects at a shorter distance; and avoids strip-shaped objects such as table legs and electric wires according to the third depth information generated by the linear array structure light, whose long extension range facilitates object detection over the whole range of the field of view and prevents strip-shaped objects from being missed. Path planning, map construction and obstacle avoidance of the sweeping robot are thereby realized with a single depth camera, which reduces the complexity of the product, reduces the manufacturing cost of the product and facilitates popularization and application of the product.
In the present specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.

Claims (10)

1. A sweeping robot, characterized by comprising a robot body, a depth camera and a controller module; the depth camera is arranged on the side face of the robot body;
the depth camera includes a light projector and a light receiver;
the light projector is used for projecting first lattice structure light, second lattice structure light and linear array structure light to a target scene, and the power density of each light beam in the first lattice structure light is greater than that of each light beam in the second lattice structure light;
the light receiver is used for receiving the first lattice structure light, the second lattice structure light and the linear array structure light after being reflected by any object in the target scene, generating first depth information according to the first lattice structure light, generating second depth information according to the second lattice structure light and generating third depth information according to the linear array structure light;
the controller module is used for performing instant positioning and map construction according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information.
2. The robot of claim 1, wherein the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern.
3. The robot of claim 1, wherein the first lattice structure light forms a first lattice pattern, the second lattice structure light forms a second lattice pattern, and the linear array structure light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern.
4. The robot of claim 1, wherein the light spot density of the second lattice structure light is greater than the light spot density of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern and the second lattice structure light forms a dense lattice pattern.
5. The robot of claim 1, wherein the linear array structure light comprises a plurality of linear light beams;
and the plurality of linear light beams are obliquely arranged, and the perpendicular projections of two adjacent linear light beams onto the width direction of the field of view have overlapping areas.
6. The robot of claim 1, wherein the light receiver is configured to generate first depth information according to a transmission time or a phase difference of the first lattice structure light, generate second depth information according to a transmission time or a phase difference of the second lattice structure light, and generate third depth information according to a spot image formed by the linear array structure light.
7. The robot of claim 1, wherein the light projector comprises a first laser module and a first projection lens;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structure light, the second point laser array group is used for projecting second point array structure light, and the line laser array group is used for projecting line array structure light;
the first projection lens is arranged on the light emitting side of the first laser module and comprises a first area, a second area and a third area, wherein the first area is arranged between the second area and the third area; the first area receives and projects the first lattice structure light, the second area receives and projects the second lattice structure light, and the third area receives and projects the linear array structure light.
8. The robot of claim 1, wherein the light projector comprises a second laser module, a beam splitter, and a second projection lens;
the second laser module is used for projecting laser beams;
the beam splitting device comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form first lattice structure light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form second lattice structure light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form linear array structure light;
the second projection lens is arranged on the light emitting side of the beam splitting device and comprises a first area, a second area and a third area, wherein the first area is arranged between the second area and the third area; the first area is used for receiving and projecting the first lattice structure light, the second area is used for receiving and projecting the second lattice structure light, and the third area is used for receiving and projecting the linear array structure light.
9. The robot of claim 1, wherein the field angle of the depth camera is between 100 ° and 110 °.
10. A depth camera comprising a light projector and a light receiver;
the light projector is used for projecting first lattice structure light, second lattice structure light and linear array structure light to a target scene, and the power density of each light beam in the first lattice structure light is greater than that of each light beam in the second lattice structure light;
the light receiver is configured to receive the first lattice structure light, the second lattice structure light and the linear array structure light after being reflected by any object in the target scene, generate first depth information according to the first lattice structure light, generate second depth information according to the second lattice structure light, and generate third depth information according to the linear array structure light.
CN202111212690.3A 2021-10-18 2021-10-18 Sweeping robot and depth camera Pending CN115989973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111212690.3A CN115989973A (en) 2021-10-18 2021-10-18 Sweeping robot and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111212690.3A CN115989973A (en) 2021-10-18 2021-10-18 Sweeping robot and depth camera

Publications (1)

Publication Number Publication Date
CN115989973A true CN115989973A (en) 2023-04-21

Family

ID=85990656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111212690.3A Pending CN115989973A (en) 2021-10-18 2021-10-18 Sweeping robot and depth camera

Country Status (1)

Country Link
CN (1) CN115989973A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination