CN216535129U - Depth camera and sweeping robot - Google Patents

Depth camera and sweeping robot

Info

Publication number
CN216535129U
Authority
CN
China
Prior art keywords
structured light
light
lattice
area
pattern
Prior art date
Legal status
Active
Application number
CN202122506845.6U
Other languages
Chinese (zh)
Inventor
黄龙祥
杨煦
沈燕
陈松坤
汪博
朱力
吕方璐
Current Assignee
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd
Priority to CN202122506845.6U
Application granted
Publication of CN216535129U
Legal status: Active

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The utility model provides a depth camera comprising a light projector, a light receiver and a driving circuit. The light projector is used for projecting first lattice structured light, second lattice structured light and linear array structured light to a target scene, where the power density of each light beam in the first lattice structured light is greater than that of each light beam in the second lattice structured light. The light receiver is configured to receive the first lattice structured light, the second lattice structured light and the linear array structured light reflected by any object in the target scene, and to generate first depth information from the first lattice structured light, second depth information from the second lattice structured light and third depth information from the linear array structured light. The driving circuit is used for controlling the light projector and the light receiver to be switched on or off simultaneously. The utility model reduces product complexity and manufacturing cost, facilitating popularization and application of the product.

Description

Depth camera and sweeping robot
Technical Field
The utility model relates to intelligent equipment, in particular to a depth camera and a sweeping robot.
Background
A sweeping robot is a type of intelligent household appliance that can automatically clean the floor of a room with a degree of artificial intelligence. It generally works by brushing and vacuuming, drawing debris on the floor into its dust box and thereby cleaning the floor.
In the prior art, a sweeping robot generally performs path planning and mapping by means of an LDS (laser distance sensor) lidar mounted on its top, and performs obstacle avoidance by means of a camera mounted at its front end. However, path planning and mapping by LDS have at least two disadvantages: first, the lidar must rotate continuously and is prone to damage; second, it cannot detect highly reflective objects such as floor-to-ceiling windows, floor mirrors and vases. In addition, two separate sets of devices are required for the path planning and obstacle avoidance functions, which increases product complexity and manufacturing cost and hinders popularization and application of the product.
SUMMARY OF THE UTILITY MODEL
In view of the above defects in the prior art, the object of the utility model is to provide a depth camera and a sweeping robot.
The depth camera provided by the utility model comprises a light projector, a light receiver and a driving circuit;
the light projector is used for projecting first lattice structured light, second lattice structured light and linear array structured light to a target scene, and the power density of each light beam in the first lattice structured light is greater than that of each light beam in the second lattice structured light;
the optical receiver is configured to receive the first lattice structured light, the second lattice structured light, and the linear array structured light reflected by any object in the target scene, generate first depth information according to the first lattice structured light, generate second depth information according to the second lattice structured light, and generate third depth information according to the linear array structured light;
and the driving circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off.
Preferably, the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern.
Preferably, the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern.
Preferably, the light spot density of the second lattice structured light is greater than that of the first lattice structured light, so that the first lattice structured light forms a sparse lattice pattern and the second lattice structured light forms a dense lattice pattern.
Preferably, the linear array structured light comprises a plurality of linear light beams;
and the plurality of linear light beams are distributed obliquely, and, in the width direction of the field angle, the vertical projections of two adjacent linear light beams have overlapping regions.
Preferably, the optical receiver is configured to generate first depth information according to the transmission time or the phase difference of the first lattice structured light, generate second depth information according to the transmission time or the phase difference of the second lattice structured light, and generate third depth information according to a spot image formed by the linear array structured light.
Preferably, the light projector comprises a first laser module and a first projection lens;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structured light, the second point laser array group is used for projecting second point array structured light, and the line laser array group is used for projecting linear array structured light;
the first projection lens is arranged on the light emitting side of the first laser module and comprises a first area, a second area and a third area, the first area being arranged between the second area and the third area; the first lattice structured light is received and projected through the first area, the second lattice structured light is received and projected through the second area, and the linear array structured light is received and projected through the third area.
Preferably, the light projector comprises a second laser module, a beam splitting device and a second projection lens;
the second laser module is used for projecting laser beams;
the beam splitting device comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a first dot matrix structured light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a second dot matrix structured light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a linear array structured light;
the second projection lens is arranged on the light emitting side of the beam splitting device and comprises a first area, a second area and a third area, the first area being arranged between the second area and the third area; the first lattice structured light is received and projected through the first area, the second lattice structured light is received and projected through the second area, and the linear array structured light is received and projected through the third area.
Preferably, the field angle of the depth camera is between 100 ° and 110 °.
The sweeping robot provided by the utility model comprises a robot body, a depth camera and a controller module; the depth camera is arranged on the side surface of the robot body;
the depth camera includes a light projector, a light receiver, and a drive circuit;
the light projector is used for projecting first lattice structured light, second lattice structured light and linear array structured light to a target scene, and the power density of each light beam in the first lattice structured light is greater than that of each light beam in the second lattice structured light;
the optical receiver is configured to receive the first lattice structured light, the second lattice structured light, and the linear array structured light reflected by any object in the target scene, generate first depth information according to the first lattice structured light, generate second depth information according to the second lattice structured light, and generate third depth information according to the linear array structured light;
the driving circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off;
and the controller module is used for carrying out instant positioning and map construction according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information.
Compared with the prior art, the utility model has the following beneficial effects:
the depth camera can be applied to a sweeping robot, a light projector of the depth camera is used for projecting first dot matrix structured light, second dot matrix structured light and linear array structured light to a target scene, a light receiver can receive the first dot matrix structured light, the second dot matrix structured light and the linear array structured light reflected by any object in the target scene, first depth information is generated according to the dot matrix structured light with higher power density, second depth information is generated according to the second dot matrix structured light with lower power density, third depth information is generated according to the linear array structured light, so that a controller module can irradiate the first dot matrix structured light with longer distance to generate the first depth information for instant positioning and map construction, object identification is carried out according to the second depth information generated by the second dot matrix structured light with close distance and higher density, and obstacle avoidance is carried out according to the type of the object, according to the third depth information generated by the linear array structured light, long-strip-shaped objects such as desk legs, electric wires and the like are prevented from being blocked, the linear structured light has a long extension range, the full-range object detection of the field angle is facilitated, the omission of strip-shaped objects is avoided, the floor sweeping robot can realize instant positioning and map construction and obstacle avoidance through a depth camera module, the complexity of products is reduced, and the manufacturing cost of the products is reduced, so that the popularization and the application of the products are facilitated.
Drawings
In order to more clearly illustrate the embodiments of the utility model or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the utility model, and those skilled in the art can obtain other drawings from them without creative effort. Other features, objects and advantages of the utility model will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic view of the working principle of the sweeping robot in the embodiment of the utility model;
FIG. 2 is a schematic view of a light field view of a depth camera in an embodiment of the utility model;
FIG. 3 is a schematic view of another light field view of a depth camera in an embodiment of the utility model;
FIG. 4 is a schematic diagram of a depth camera according to an embodiment of the present invention; and
FIG. 5 is a schematic diagram of another structure of a depth camera according to an embodiment of the utility model.
In the figure: 100 is a robot body; 200 is an object; 1 is a light projector; 2 is an optical receiver; 201 is a first region; 202 is a second region; 203 is a third region; 3 is a driving circuit; 101 is an edge-emitting laser; 102 is a collimating lens; 103 is a beam splitting device; 104 is a projection lens; 105 is a diffractive device; 106 is a laser array.
Detailed Description
The utility model will be described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the utility model, but do not limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the utility model; all such variations and modifications fall within the scope of the utility model.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the utility model described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions of the utility model, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the utility model are described below with reference to the accompanying drawings.
Fig. 1 is a schematic view of a working principle of a sweeping robot in an embodiment of the present invention, and as shown in fig. 1, the sweeping robot provided by the present invention includes a robot body 100, a depth camera, and a controller module; the depth camera is disposed on a side surface of the robot body 100;
the depth camera comprises a light projector 1, a light receiver 2 and a driving circuit;
the light projector 1 is configured to project a first lattice structured light, a second lattice structured light, and a linear array structured light to a target scene, where a power density of each light beam in the first lattice structured light is greater than a power density of each light beam in the second lattice structured light;
the optical receiver 2 is configured to receive the first dot matrix structured light, the second dot matrix structured light, and the linear array structured light reflected by any object 200 in the target scene, generate first depth information according to the first dot matrix structured light, generate second depth information according to the second dot matrix structured light, and generate third depth information according to the linear array structured light;
the driving circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off;
the controller module is configured to perform simultaneous localization and mapping (SLAM) according to the first depth information, and to generate obstacle avoidance information according to the second depth information and the third depth information.
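Purely as an illustrative sketch of this division of labor, and not as an implementation taken from the utility model, the following Python fragment shows how a control loop might route the three depth streams; all function and variable names here (control_step, classify_obstacles, detect_strip_obstacles) are hypothetical.

```python
# Illustrative sketch only: routing of the three depth streams inside the
# controller module. All names are hypothetical; the patent does not
# specify an implementation.
from typing import List, Tuple

def classify_obstacles(dense_depth: List[float]) -> List[str]:
    # Placeholder: close-range, dense depth (second lattice light) would feed
    # an object classifier so avoidance can depend on the object type.
    return ["object"] if any(d < 0.5 for d in dense_depth) else []

def detect_strip_obstacles(line_depth: List[float]) -> List[str]:
    # Placeholder: linear array depth would be scanned for thin obstacles
    # (table legs, electric wires) across the full field angle.
    return ["strip"] if any(d < 0.3 for d in line_depth) else []

def control_step(d1: List[float], d2: List[float], d3: List[float]) -> Tuple[str, List[str]]:
    # d1: sparse, high-power lattice light -> localization and mapping (SLAM)
    # d2: dense, low-power lattice light   -> object recognition for avoidance
    # d3: linear array structured light    -> elongated-obstacle detection
    pose = f"pose estimated from {len(d1)} long-range points"  # stands in for a SLAM update
    avoidance = classify_obstacles(d2) + detect_strip_obstacles(d3)
    return pose, avoidance

if __name__ == "__main__":
    print(control_step([2.5, 3.1], [0.4, 0.6], [0.25, 0.9]))
```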
In the embodiment of the utility model, each light beam in the first lattice structured light has a higher power density and a longer projection distance, so the distribution of indoor objects far from the sweeping robot can be obtained, which facilitates simultaneous localization and mapping. Each light beam in the second lattice structured light has a lower power density but a higher beam density and a shorter projection distance, so the distribution of objects 200 near the sweeping robot can be obtained; the higher beam density also makes it possible to obtain the surface profile of an object 200, which facilitates object recognition, so that obstacle avoidance can be performed according to the type of object. Obstacle avoidance for elongated objects such as table legs and electric wires is performed according to the third depth information generated by the linear array structured light; since the linear array structured light extends over a long range, objects can be detected across the full field angle and strip-shaped objects are not missed.
The line type of the line structured light includes, but is not limited to, any one or any plurality of straight lines, curved lines, line segments, and dotted lines, and the number of lines may be 1 or more.
Fig. 4 is a schematic diagram of a depth camera according to an embodiment of the present invention, and as shown in fig. 4, the light projector 1 includes a first laser module and a first projection lens 104;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structured light, the second point laser array group is used for projecting second point array structured light, and the line laser array group is used for projecting linear array structured light;
the first projection lens 104 is arranged on the light emitting side of the first laser module and comprises a first area 201, a second area 202 and a third area 203, all of which are transparent areas; the first lattice structured light is received and projected through the first area 201, the linear array structured light is received and projected through the second area 202, and the second lattice structured light is received and projected through the third area 203.
In an embodiment of the utility model, the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the line structured light forms a line pattern; the light spot density of the second lattice structure light is greater than that of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern, and the second lattice structure light forms a dense lattice pattern.
The first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern, as shown in Fig. 2.
In another embodiment of the utility model, the light projector 1 comprises a first laser module and a first projection lens 104;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting the first lattice structured light, the second point laser array group is used for projecting the second lattice structured light, and the line laser array group is used for projecting the linear array structured light;
the first projection lens 104 is disposed on the light emitting side of the first laser module and includes a first region 201, a second region 202 and a third region 203, the first region 201 being disposed between the second region 202 and the third region 203 and all three regions being transparent; the first lattice structured light is received and projected through the first region 201, the second lattice structured light is received and projected through the second region 202, and the linear array structured light is received and projected through the third region 203.
The first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern, as shown in Fig. 3.
In an embodiment of the present invention, the number of light beams in the first lattice structured light is between two and several thousand, for example 2 to 1,000 beams; the number of light beams in the second lattice structured light is between several thousand and several tens of thousands, for example 10,000 to 50,000 beams.
The first laser module may adopt a laser array 106 formed by a plurality of vertical cavity surface emitting lasers (VCSELs) or a plurality of edge emitting lasers (EELs). After passing through the collimating lens 102, the multiple laser beams become highly parallel collimated beams, realizing the projection of the lattice structured light.
Fig. 5 is another schematic structural diagram of a depth camera in an embodiment of the present invention, and as shown in fig. 5, the light projector 1 includes a second laser module, a beam splitter, and a second projection lens 104;
the second laser module is used for projecting laser beams;
the beam splitting device 103 comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the first lattice structured light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the linear array structured light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the second lattice structured light;
the second projection lens 104 is disposed on the light exit side of the beam splitter 103, and includes a first region 201, a second region 202, and a third region 203, where the first region 201, the second region 202, and the third region 203 are all transparent regions, and receives and projects the first lattice structured light through the first region 201, receives and projects the linear structured light through the second region 202, and receives and projects the second lattice structured light through the third region 203.
In an embodiment of the utility model, the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the line structured light forms a line pattern; the light spot density of the second lattice structure light is greater than that of the first lattice structure light, so that the first lattice structure light forms a sparse lattice pattern, and the second lattice structure light forms a dense lattice pattern.
The first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern, as shown in Fig. 2.
In an embodiment of the utility model, the light projector 1 comprises a second laser module, a beam splitting device and a second projection lens 104;
the second laser module is used for projecting a laser beam;
the beam splitting device 103 comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the first lattice structured light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the second lattice structured light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form the linear array structured light;
the second projection lens 104 is disposed on the light-emitting side of the beam splitter 103, and includes a first region 201, a second region 202, and a third region 203, where the first region 201 is disposed between the second region 202 and the third region 203, and the first region 201, the second region 202, and the third region 203 are transparent regions, and receive and project the first lattice structured light through the first region 201, receive and project the second lattice structured light through the second region 202, and receive and project the linear structured light through the third region 203.
The first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern, as shown in Fig. 3.
The beam splitting device 103 splits the incoming beam into a larger number of collimated laser beams. The beam splitting device 103 may employ a diffractive optical element (DOE) such as a diffraction grating, a waveguide device, a coded-structure photomask, a spatial light modulator (SLM), or the like.
In an embodiment of the present invention, the optical receiver 2 is configured to generate first depth information according to the transmission time or the phase difference of the first lattice structured light, generate second depth information according to the transmission time or the phase difference of the second lattice structured light, and generate third depth information according to a spot image formed by the linear array structured light.
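For background only, and not as formulas given in the patent, the standard relations behind these two measurement modes are: with \(c\) the speed of light, a measured round-trip time \(t\) corresponds to a depth

\[ d = \frac{c\,t}{2}, \]

and a phase shift \(\Delta\varphi\) measured at modulation frequency \(f_m\) corresponds to

\[ d = \frac{c\,\Delta\varphi}{4\pi f_m}, \]

which is unambiguous up to a range of \(c/(2 f_m)\).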
The driving circuit 3 is used for controlling the light projector 1 and the light receiver 2 to be turned on or off simultaneously. The driving circuit 3 may be a separate dedicated circuit, such as a dedicated SOC chip, an FPGA chip or an ASIC chip, or may include a general-purpose processor; for example, when the depth camera is integrated into an intelligent terminal such as a sweeping robot, the processor in the terminal may serve as at least a part of the driving circuit.
The field angle of the depth camera is preferably between 100 ° and 110 °.
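As a quick coverage check (an illustration, not a figure from the patent), a horizontal field angle \(\theta\) illuminates a width

\[ w = 2d\tan\frac{\theta}{2} \]

at distance \(d\); for example, with \(\theta = 105^{\circ}\) and \(d = 1\,\mathrm{m}\), \(w\) is approximately 2.6 m.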
The optical receiver 2 comprises an optical imaging lens, a light detector array and a driving circuit 3; the light detector array comprises a plurality of light detectors distributed in an array;
the optical imaging lens is used for receiving the lattice structure light and the linear array structure light reflected by any object in a target scene and projecting the lattice structure light and the linear array structure light to the optical detector;
the optical detector is used for receiving the lattice structure light and the linear array structure light;
the driving circuit 3 is configured to measure the propagation time or phase difference of the first lattice structured light to generate first depth data of the surface of the target object, measure the propagation time or phase difference of the second lattice structured light to generate second depth data of the surface of the target object, and generate third depth data of the surface of the target object from a spot image formed by the linear array structured light.
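The patent does not state how depth is recovered from the spot image; as background, a common structured-light triangulation relation links the depth \(z\) of a spot to the baseline \(b\) between projector and receiver, the focal length \(f\) of the imaging lens, and the pixel disparity \(\Delta x\) of the spot relative to its reference position:

\[ z = \frac{f\,b}{\Delta x}. \]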
In order to filter out background noise, a narrow band filter is usually installed in the optical imaging lens, so that only incident collimated light beams of a preset wavelength can reach the photodetector array. The preset wavelength may be the wavelength of the incident collimated light beam, or may lie within 50 nanometers below to 50 nanometers above that wavelength. The photodetector array may be arranged periodically or aperiodically. Each photodetector, in cooperation with an auxiliary circuit, can measure the time of flight of a collimated light beam. Depending on the number of discrete collimated light beams, the photodetector array may be a combination of multiple single-point photodetectors or a sensor chip integrating multiple photodetectors. To further improve the sensitivity of the photodetectors, the illumination spot of one discrete collimated light beam on the target object may correspond to one or more photodetectors. When a plurality of photodetectors correspond to the same illumination spot, the signals of these detectors can be connected by a circuit, so that they are combined into a photodetector with a larger detection area.
The light detector adopts a CMOS photosensor, a CCD photosensor or a SPAD photosensor.
In an embodiment of the present invention, the depth camera provided by the present invention comprises a light projector 1, a light receiver 2 and a driving circuit;
the light projector 1 is configured to project a first lattice structured light, a second lattice structured light, and a linear array structured light to a target scene, where a power density of each light beam in the first lattice structured light is greater than a power density of each light beam in the second lattice structured light;
the optical receiver 2 is configured to receive the first dot matrix structured light, the second dot matrix structured light, and the linear array structured light reflected by any object in the target scene, generate first depth information according to the first dot matrix structured light, generate second depth information according to the second dot matrix structured light, and generate third depth information according to the linear array structured light;
and the driving circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off.
In the embodiment of the utility model, the depth camera can be applied to a sweeping robot. The light projector of the depth camera projects first lattice structured light, second lattice structured light and linear array structured light to a target scene, and the light receiver receives the three kinds of structured light reflected by any object in the target scene, generating first depth information from the first lattice structured light, whose beams have the higher power density, second depth information from the second lattice structured light, whose beams have the lower power density, and third depth information from the linear array structured light. The controller module can therefore use the first depth information, produced by the first lattice structured light that reaches farther, for simultaneous localization and mapping; use the second depth information, produced by the closer-range and denser second lattice structured light, for object recognition, so that obstacle avoidance can be performed according to the type of object; and use the third depth information, produced by the linear array structured light, to avoid elongated objects such as table legs and electric wires. Because the linear array structured light extends over a long range, objects can be detected across the full field angle and strip-shaped objects are not missed. The sweeping robot can thus realize both simultaneous localization and mapping and obstacle avoidance with a single depth camera module, which reduces product complexity and manufacturing cost and facilitates popularization and application of the product.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the utility model. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the utility model.

Claims (10)

1. A depth camera, comprising a light projector, a light receiver, and a driving circuit;
the light projector is used for projecting first lattice structured light, second lattice structured light and linear array structured light to a target scene, and the power density of each light beam in the first lattice structured light is greater than that of each light beam in the second lattice structured light;
the optical receiver is configured to receive the first lattice structured light, the second lattice structured light, and the linear array structured light reflected by any object in the target scene, generate first depth information according to the first lattice structured light, generate second depth information according to the second lattice structured light, and generate third depth information according to the linear array structured light;
the driving circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off.
2. The depth camera of claim 1, wherein the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the second lattice pattern is located on the upper side of the linear array pattern.
3. The depth camera of claim 1, wherein the first lattice structured light forms a first lattice pattern, the second lattice structured light forms a second lattice pattern, and the linear array structured light forms a linear array pattern;
the first lattice pattern is located between the linear array pattern and the second lattice pattern, and the linear array pattern is located on the upper side of the second lattice pattern.
4. The depth camera of claim 1, wherein the second lattice-structured light has a spot density that is greater than a spot density of the first lattice-structured light such that the first lattice-structured light forms a sparse lattice pattern and the second lattice-structured light forms a dense lattice pattern.
5. The depth camera of claim 1, wherein the line structured light comprises a plurality of line-shaped light beams;
and the plurality of linear light beams are distributed obliquely, and, in the width direction of the field angle, the vertical projections of two adjacent linear light beams have overlapping regions.
6. The depth camera according to claim 1, wherein the optical receiver is configured to generate first depth information according to a transmission time or a phase difference of the first lattice-structured light, generate second depth information according to a transmission time or a phase difference of the second lattice-structured light, and generate third depth information according to a spot image formed by the linear-structured light.
7. The depth camera of claim 1, wherein the light projector comprises a first laser module and a first projection lens;
the first laser module comprises a first point laser array group, a second point laser array group and a line laser array group, wherein the first point laser array group is used for projecting first point array structured light, the second point laser array group is used for projecting second point array structured light, and the line laser array group is used for projecting linear array structured light;
the first projection lens is arranged on the light emitting side of the first laser module and comprises a first area, a second area and a third area, the first area being arranged between the second area and the third area; the first lattice structured light is received and projected through the first area, the second lattice structured light is received and projected through the second area, and the linear array structured light is received and projected through the third area.
8. The depth camera of claim 1, wherein the light projector comprises a second laser module, a beam splitting device, and a second projection lens;
the second laser module is used for projecting laser beams;
the beam splitting device comprises a first beam splitting area, a second beam splitting area and a third beam splitting area, wherein the first beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a first dot matrix structured light, the second beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a second dot matrix structured light, and the third beam splitting area is used for splitting the laser beam into a plurality of laser beams to form a linear array structured light;
the second projection lens is arranged on the light emitting side of the beam splitting device and comprises a first area, a second area and a third area, the first area being arranged between the second area and the third area; the first lattice structured light is received and projected through the first area, the second lattice structured light is received and projected through the second area, and the linear array structured light is received and projected through the third area.
9. The depth camera of claim 1, wherein the field angle of the depth camera is between 100 ° and 110 °.
10. A floor sweeping robot is characterized by comprising a robot body, a depth camera and a controller module; the depth camera is arranged on the side surface of the robot body;
the depth camera includes a light projector, a light receiver, and a drive circuit;
the light projector is used for projecting first lattice structured light, second lattice structured light and linear array structured light to a target scene, and the power density of each light beam in the first lattice structured light is greater than that of each light beam in the second lattice structured light;
the optical receiver is configured to receive the first lattice-structured light, the second lattice-structured light, and the linear array-structured light reflected by any object in the target scene, generate first depth information according to the first lattice-structured light, generate second depth information according to the second lattice-structured light, and generate third depth information according to the linear array-structured light;
the drive circuit is used for controlling the light projector and the light receiver to be simultaneously switched on or switched off;
and the controller module is used for performing instant positioning and map construction according to the first depth information and generating obstacle avoidance information according to the second depth information and the third depth information.
CN202122506845.6U 2021-10-18 2021-10-18 Depth camera and sweeping robot Active CN216535129U (en)

Priority Applications (1)

Application Number: CN202122506845.6U · Publication: CN216535129U (en) · Priority Date: 2021-10-18 · Filing Date: 2021-10-18 · Title: Depth camera and sweeping robot

Applications Claiming Priority (1)

Application Number: CN202122506845.6U · Publication: CN216535129U (en) · Priority Date: 2021-10-18 · Filing Date: 2021-10-18 · Title: Depth camera and sweeping robot

Publications (1)

Publication Number: CN216535129U · Publication Date: 2022-05-17

Family

ID=81566483

Family Applications (1)

Application Number: CN202122506845.6U · Status: Active · Publication: CN216535129U (en) · Priority Date: 2021-10-18 · Filing Date: 2021-10-18 · Title: Depth camera and sweeping robot

Country Status (1)

Country Link
CN (1) CN216535129U (en)


Legal Events

Date Code Title Description
GR01 Patent grant