CN219126187U - Intelligent sweeping robot - Google Patents

Intelligent sweeping robot

Info

Publication number
CN219126187U
CN219126187U (application CN202221020505.0U)
Authority
CN
China
Prior art keywords
image
reference plane
sweeping robot
light
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202221020505.0U
Other languages
Chinese (zh)
Inventor
刘勖
黄龙祥
汪博
朱力
吕方璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd filed Critical Shenzhen Guangjian Technology Co Ltd
Priority to CN202221020505.0U
Application granted
Publication of CN219126187U
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00: Energy generation through renewable energy sources
    • Y02E10/50: Photovoltaic [PV] energy

Landscapes

  • Image Processing (AREA)

Abstract

An intelligent sweeping robot comprises a robot body, a depth camera and a processor. The robot body comprises a vertical baffle for pushing objects to a designated place. The depth camera is arranged on the side face of the robot body and includes a light projector and a light receiver: the light projector projects structured light or floodlight onto a target scene, and the light receiver receives the structured light or floodlight reflected by any object in the target scene and generates an acquired image. The processor controls the movement of the robot body and corrects and reconstructs the acquired image. By arranging the vertical baffle on the robot body, objects can be pushed to a designated place, so that smaller garbage is sucked into the body while larger garbage is pushed to the designated place, which expands the garbage treatment range of the sweeping robot and improves its performance.

Description

Intelligent sweeping robot
Technical Field
The utility model relates to a sweeping robot, in particular to an intelligent sweeping robot.
Background
The floor sweeping robot is one of the intelligent household appliances; relying on a certain degree of artificial intelligence, it automatically completes floor cleaning in a room. Generally, it adopts brushing and vacuum suction, first drawing floor debris into its own garbage storage box, thereby completing the floor-cleaning function.
Obstacle avoidance is an important function of the intelligent sweeping robot. Common obstacle avoidance schemes include a monocular RGB camera, line structured light, monocular structured light and the like; among them, schemes based on 3D vision can better solve the problem of identifying obstacles in many home scenes.
In the prior art, the sweeping robot sucks garbage into the body through the suction port, so as to clean the target area. Such a sweeping robot performs well in daily living environments, but it can only sweep small-volume garbage, and larger garbage still needs to be handled manually.
Disclosure of Invention
Therefore, by arranging a vertical baffle on the robot body, the utility model can push objects to a designated place, thereby sucking smaller garbage into the body while pushing larger garbage to the designated place, expanding the garbage treatment range of the sweeping robot and improving its performance.
The intelligent sweeping robot is characterized by comprising a robot body, a depth camera and a processor; the robot body comprises a vertical baffle plate for pushing the object to a designated place;
the depth camera is arranged on the side face of the robot body;
the depth camera includes a light projector and a light receiver;
the light projector is used for projecting structured light or floodlight onto a target scene;
the light receiver is used for receiving the structured light or floodlight reflected by any object in the target scene and generating an acquisition image;
the processor is used for controlling the movement of the robot body and correcting and reconstructing the acquired image.
Optionally, the intelligent sweeping robot is characterized in that the processor is used for correcting the acquired image according to a horizontal reference plane and a vertical reference plane.
Optionally, the intelligent sweeping robot is characterized in that the processor may further generate a depth image according to the corrected image, and determine the volume of the object according to the horizontal reference plane and the depth image. When the volume of the object is within a certain range, the object is pushed to a designated place through the vertical baffle plate.
Optionally, the intelligent sweeping robot is characterized in that the vertical baffle can be lifted to discharge garbage.
Optionally, the intelligent sweeping robot is characterized in that, when the light projector projects the structured light, the processor corrects the collected image according to a horizontal reference plane and a vertical reference plane, comprising the following steps:
Step S1: obtaining the space coordinates (x_ref, y_ref, z_ref) of each scattered spot according to the calibration information of the vertical reference plane and the internal parameters of the camera; wherein the calibration information of the vertical reference plane comprises the image coordinates (u_ref, v_ref) of the scattered spots and the reference plane distance z_ref.
Step S2: based on the spatial coordinates of each speckle
Figure BDA0003623534000000023
Positional relationship between laser and camera (t x ,t y ,t z ) And imaginary horizontal planes (a, b, c, d), and determining the projection coordinates of each scattered spot on the ground
Figure BDA0003623534000000024
Step S3: re-projecting the scattered spots on the ground back into the image coordinate system to obtain the calibration information of the horizontal reference plane; wherein the calibration information of the horizontal reference plane comprises the image coordinates (u_g, v_g) of the scattered spots and the reference distance z_g.
Step S4: and correcting the shot image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane.
Optionally, the intelligent sweeping robot is characterized in that between the step S3 and the step S4, the method further includes:
step S5: according to the coordinate pairs of the vertical reference plane and the horizontal reference plane, solving a homography matrix between the two planes, and further solving the corresponding relation between any point coordinate on the horizontal reference plane image and the vertical reference plane.
Optionally, the intelligent sweeping robot is characterized in that in the step S1:
x_ref = (u_ref − c_x) · z_ref / f_x,  y_ref = (v_ref − c_y) · z_ref / f_y,  and the z coordinate equals z_ref.
Optionally, the intelligent sweeping robot is characterized in that in the step S2:
x_g = t_x + s · (x_ref − t_x),  y_g = t_y + s · (y_ref − t_y),  z_g = t_z + s · (z_ref − t_z),
wherein
s = −(a · t_x + b · t_y + c · t_z + d) / (a · (x_ref − t_x) + b · (y_ref − t_y) + c · (z_ref − t_z)).
Optionally, the intelligent sweeping robot is characterized in that in the step S3:
u_g = f_x · x_g / z_g + c_x,  v_g = f_y · y_g / z_g + c_y.
Optionally, the step S4 includes:
Step S41: correcting the shot image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane respectively, so as to obtain a first correction image and a second correction image respectively;
Step S42: comparing the first correction image with the second correction image to obtain all pixel points with high similarity;
Step S43: removing discrete points from all the pixel points with high similarity to obtain a separation line;
Step S44: extracting the part above the separation line in the first correction image and the part below the separation line in the second correction image, and splicing the two parts to obtain a third correction image.
Compared with the prior art, the utility model has the following beneficial effects:
the utility model comprises the vertical baffle, so that smaller garbage can be sucked into the body, and larger garbage can be pushed to a designated area, thereby avoiding the problem that the larger garbage blocks the sweeping robot, ensuring that the sweeping robot works more safely, and prolonging the service life of the sweeping robot. Meanwhile, the large garbage is pushed to a designated area, so that the garbage can be cleaned manually, the garbage can be automatically conveyed to a garbage channel in an intelligent building, the labor is saved, the efficiency is improved, the functional range of the sweeping robot is improved, and the sweeping robot is promoted and applied.
Drawings
In order to more clearly illustrate the embodiments of the present utility model or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present utility model, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present utility model will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
fig. 1 is a schematic diagram of a working principle of a sweeping robot in an embodiment of the present utility model;
FIG. 2 is a view of horizontal and vertical speckle images taken by a sweeping robot in an embodiment of the utility model;
FIG. 3 is a flow chart of steps of a monocular structured light stereo matching method based on dual reference planes in an embodiment of the present utility model;
fig. 4 is a flow chart of steps of another monocular structured light stereo matching method based on dual reference planes in an embodiment of the present utility model.
In the figure: 100 is a robot body; 200 is an object; 1 is a light projector; 2 is an optical receiver.
Detailed Description
The present utility model will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present utility model, but are not intended to limit the utility model in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present utility model.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the utility model described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the present utility model, and how it solves the above technical problems, is described in detail below through specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some of them. Embodiments of the present utility model are described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a working principle of a sweeping robot according to an embodiment of the present utility model, and as shown in fig. 1, the sweeping robot provided by the present utility model includes a robot body 100, a depth camera, and a processor; the robot body 100 includes a vertical barrier for pushing an object to a designated place; the depth camera is disposed on a side of the robot body 100;
the depth camera comprises a light projector 1 and a light receiver 2;
the light projector 1 is used for projecting structured light or floodlight onto a target scene;
the light receiver 2 is configured to receive the structured light or floodlight reflected by any object 200 in the target scene, and generate an acquired image;
the processor is used for controlling the movement of the robot body and correcting and reconstructing the acquired image.
In some embodiments, the processor is configured to correct the acquired image based on a horizontal reference plane and a vertical reference plane.
In this embodiment, each light beam in the structured light or floodlight has a relatively high power density and a relatively long projection distance, so the distribution of objects 200 farther from the sweeping robot in the room can be obtained, which facilitates simultaneous localization and mapping by the sweeping robot.
In some embodiments, the number of light beams in the structured light or floodlight ranges from two to several thousand, for example from 2 to 1,000 beams.
The field angle of the depth camera is between 100 ° and 110 °.
The processor may also generate a depth image from the corrected image and determine the volume of an object from the horizontal reference plane and the depth image. When judging the volume of an object, the object together with the shadow it casts on the horizontal reference plane under the laser projection is used to estimate its volume. Since the volume a sweeping robot can suck in at one time is limited, the volume of an object needs to be judged in advance to prevent clogging of the suction port; if the volume is too large, the robot must detour. Here, an excessively large volume refers to the volume of an object regarded as garbage, which is different from the obstacle avoidance function of the sweeping robot: in obstacle avoidance, the objects concerned are not only particularly large in volume but also tall. By contrast, an oversized object in this embodiment is one that cannot be treated as an obstacle; in the prior art it would simply be swept up, with the risk of clogging the suction port. The prior art often tries to meet daily needs through technical routes such as increasing suction force, but in actual life the objects that may clog the suction port can still be larger, so the volume of the object must be judged in advance to ensure the operating safety of the sweeping robot.
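As an illustration of how such a volume estimate might be computed, the sketch below integrates per-pixel heights above the ground plane over the depth image. The function name, the per-pixel footprint approximation and the thresholds are illustrative assumptions, not the patent's exact method.

```python
import numpy as np

def estimate_object_volume(depth, ground_z, fx, fy, min_height=0.01):
    """Rough object volume (m^3) from a depth image and the expected ground depth.

    depth      : HxW depth map (meters) reconstructed from the corrected image
    ground_z   : HxW expected depth of the bare horizontal reference plane
    min_height : ignore height differences below this value (meters)
    """
    # Height of each pixel above the ground plane, measured along the viewing ray.
    height = np.clip(ground_z - depth, 0.0, None)
    mask = (depth > 0) & (height > min_height)
    # Approximate metric footprint of one pixel at its observed depth.
    pixel_area = (depth / fx) * (depth / fy)
    # Integrate height over the object's footprint to approximate the volume.
    return float(np.sum(height[mask] * pixel_area[mask]))
```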
The robot body 100 has a vertical baffle on its side. For objects that might clog the suction port, the sweeping robot pushes them to a designated place with the vertical baffle on the robot body 100 instead of sucking them into the body. In this way, small garbage is sucked into the body and large garbage is pushed to the designated place, which keeps the ground clean while also keeping the sweeping robot safe.
In some embodiments, the vertical baffle can be lifted to discharge garbage. When the sweeping robot reaches the integrated charging and cleaning device, it can charge and can lift the vertical baffle so that the garbage enters the device. The garbage can be conveyed into the integrated charging and cleaning device by power provided by the sweeping robot, or the device can provide negative air pressure to suck the garbage in. In an intelligent building, the sweeping robot can use the vertical baffle to push larger garbage to a garbage chute, lift the vertical baffle at the chute, and discharge the garbage from the body.
The processor may also generate a depth image from the corrected image and determine whether a passage can be traversed. The width of the channel ahead needs to be calculated from the depth map to judge whether the sweeping robot can pass through it smoothly, which is particularly useful in scenes with scattered objects. When judging the width of the channel ahead, the minimum width is taken over the height range the sweeping robot occupies when passing through.
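A minimal sketch of this width check is given below, assuming a pinhole model and a per-row scan over the image rows corresponding to the robot's height; the "clear distance" threshold and the conversion from pixel span to meters are illustrative simplifications rather than the patent's exact procedure.

```python
import numpy as np

def channel_width_m(depth, fx, rows, clear_dist=1.0):
    """Narrowest free span (meters) ahead of the robot over the given image rows.

    depth      : HxW depth map in meters (0 = no return)
    rows       : iterable of row indices covering the robot's own height
    clear_dist : a pixel counts as free if nothing is closer than this distance
    """
    widths = []
    for v in rows:
        free = (depth[v] == 0) | (depth[v] > clear_dist)
        best = run = 0
        for ok in free:                        # longest contiguous free run in this row
            run = run + 1 if ok else 0
            best = max(best, run)
        widths.append(best * clear_dist / fx)  # pixel span -> meters at clear_dist
    return min(widths) if widths else 0.0
```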
In existing camera calibration techniques, a vertical board is generally used to shoot a reference image in advance; the reference image contains the random speckle projected by the laser projector, and the actually shot image is calibrated against this reference image. However, in the image captured by the sweeping robot, the speckle is projected not only onto vertical surfaces but also, in a relatively large proportion, onto the horizontal ground. Because the speckle topology on the horizontal surface differs considerably from that on a vertical surface, calibration based only on the vertical board gives a poor calibration result for the ground part.
In fig. 2, the upper left image is the speckle image formed when the speckle is projected onto the horizontal plane in fig. 1, the upper right image is the speckle image of the speckle projected onto the vertical baffle in fig. 1, the lower left image is one speckle region in the upper left image, and the lower right image is the corresponding speckle region in the upper right image. As shown in fig. 2, because of perspective imaging, compared with the vertical plane, speckles near in the horizontal plane translate to the left along the epipolar direction while speckles far away translate to the right, producing a topological difference in the speckle pattern. Relative to the vertical reference plane, speckles on the horizontal reference plane translate by different amounts in the horizontal direction, so the overall topological structure of the speckle changes. Such topological differences lead to inconsistent content within region blocks and reduced matching similarity, i.e., poor matching results at far or near positions. Because the number of scattered spots on the horizontal reference plane is only about half that on the vertical reference plane, and the shot content contains other objects besides the ground, the matching results of the horizontal and vertical reference planes are combined so that the final output matching is accurate and complete.
Fig. 3 is a step flow chart of a monocular structured light stereo matching method based on a dual reference plane in an embodiment of the present utility model. As shown in fig. 3, the monocular structured light stereo matching method based on the dual reference planes provided by the embodiment of the utility model includes the following steps:
step S1: according to the calibration information of the vertical reference plane and the internal parameters of the camera, obtaining the space coordinate of each scattered spot
Figure BDA0003623534000000061
In this step, the coordinates of each scattered spot in the three-dimensional space are calculated based on the known information. The calibration information of the vertical reference plane comprises the image coordinates of scattered spots
Figure BDA0003623534000000071
Distance from reference plane z ref . The camera internal parameters are known at the time of leaving the factory of the camera, c x 、c y For the origin translation amount, f x 、f y The focal length is the focal length, wherein x and y are the corresponding values in the x and y directions respectively. The image coordinates are two obtainedCoordinates in the dimensional image. The reference plane distance is the distance of the vertical reference plane from the camera. The spatial coordinates of each scattered spot are calculated according to the following formula:
Figure BDA0003623534000000072
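Expressed as code, step S1 amounts to the standard pinhole back-projection sketched below; the function and variable names are illustrative, and the vectorised form assumes the speckle coordinates are stored as NumPy arrays.

```python
import numpy as np

def speckle_space_coords(u_ref, v_ref, z_ref, fx, fy, cx, cy):
    """Step S1: lift speckle image coordinates on the vertical reference plane
    to camera-frame space coordinates using the camera intrinsics."""
    x = (u_ref - cx) * z_ref / fx
    y = (v_ref - cy) * z_ref / fy
    z = np.full_like(x, z_ref, dtype=float)
    return np.stack([x, y, z], axis=-1)      # (N, 3) speckle points
```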
in monocular structured light, in order to achieve triangularization of a pair of homologous matching points in a shooting scene to obtain spatial information, a reference image is usually required to be shot in advance, then in three-dimensional reconstruction, the shot image (i.e., a search image) is matched with the reference image pixel by pixel to determine a corresponding relationship, so that depth information of each pixel can be triangulated and calculated. The matching algorithm of the embodiment can adopt block matching, which is a method for matching region blocks, and the similarity of the region blocks on the search graph and the reference graph is respectively compared in a predefined parallax range. More specifically, the similarity of two region blocks is evaluated based on a pixel-by-pixel comparison of some cost penalty (e.g., SAD) within a window centered around the current search point. Therefore, whether the contents in the two area blocks are consistent determines the matching similarity.
Step S2: based on the spatial coordinates of each speckle
Figure BDA0003623534000000073
Positional relationship between laser and camera (t x ,t y ,t z ) And imaginary horizontal planes (a, b, c, d), and determining the projection coordinates of each scattered spot on the ground
Figure BDA0003623534000000074
In this step, projection coordinates are obtained at the time of projection of each speckle onto the ground. Said spatial coordinates of scattered spots
Figure BDA0003623534000000075
Obtained in step S1. The positional relationship of the laser and the camera is a fixed value and is known. Take four valuesRepresenting imaginary horizontal plane->
Figure BDA0003623534000000076
The projection coordinates of each scattered spot on the ground are calculated by the following formula:
x_g = t_x + s · (x_ref − t_x),  y_g = t_y + s · (y_ref − t_y),  z_g = t_z + s · (z_ref − t_z),
wherein
s = −(a · t_x + b · t_y + c · t_z + d) / (a · (x_ref − t_x) + b · (y_ref − t_y) + c · (z_ref − t_z)).
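A corresponding sketch of step S2 is given below, casting a ray from the projector position through each speckle's space coordinate and intersecting it with the imaginary ground plane; as above, the names and array layout are assumptions for illustration.

```python
import numpy as np

def project_to_ground(points, t, plane):
    """Step S2: intersect the ray from the projector position t through each
    speckle point with the imaginary plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    n = np.array([a, b, c], dtype=float)
    t = np.asarray(t, dtype=float)
    dirs = points - t                          # (N, 3) ray directions
    s = -(n @ t + d) / (dirs @ n)              # ray parameter at the plane
    return t + s[:, None] * dirs               # (N, 3) ground projections
```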
step S3: and re-projecting scattered spots on the ground back to the image coordinate system to obtain calibration information of the horizontal reference plane.
In this step, the calibration information of the horizontal reference plane includes the image coordinates of the speckle pattern
Figure BDA0003623534000000079
Distance from reference point->
Figure BDA00036235340000000710
The calibration information of the horizontal reference plane is obtained through calculation according to the following formula:
Figure BDA0003623534000000081
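Step S3 is then the forward pinhole projection of those ground points, sketched below under the same assumed conventions.

```python
import numpy as np

def reproject_to_image(ground_pts, fx, fy, cx, cy):
    """Step S3: project ground speckle points back into the image to obtain the
    horizontal-reference-plane calibration (pixel coordinates plus distance)."""
    x, y, z = ground_pts[:, 0], ground_pts[:, 1], ground_pts[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=-1), z        # image coordinates and reference distance
```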
step S4: and correcting the shot image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane.
In the step, the shot image can be corrected through the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane together, so that a better correction effect is achieved.
In some embodiments, the step includes:
and S41, correcting the shot images according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane respectively to obtain a first correction image and a second correction image respectively.
In this step, the first correction image and the second correction image are images obtained by calibrating the photographed image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane, respectively. In the image shot by the sweeping robot, only a part of the image can obtain a better correction effect.
Step S42: comparing the first correction image with the second correction image to obtain all pixel points with high similarity.
In this step, the first correction image and the second correction image have certain differences due to different calibration information, but in the three-dimensional space, the horizontal reference plane is intersected with the vertical reference plane, so that at the intersection position of the two planes, the first correction image and the second correction image can both obtain better correction effects, namely the similarity of the two is the highest.
Step S43: and removing discrete points in all the pixel points with high similarity to obtain a separation line.
In this step, in all the pixel points with high similarity, some stray points can obtain high similarity, which is usually represented as discrete points, so that a continuous separation line can be obtained by removing the discrete points. The separation line is the line where the horizontal plane intersects the vertical plane. It should be noted that the dividing line is not a straight line, but a multi-segment line, a folding line, or even an arc line, which is mainly dependent on the shape, size, and position of the article placed on the ground.
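One way steps S42 and S43 could be realised is sketched below: mark the pixels where the two correction images agree, drop isolated marks, and keep one row per column as the separation line. The similarity measure, neighbour count and per-column reduction are illustrative choices, not the patent's specified procedure.

```python
import numpy as np

def separation_line(corr1, corr2, sim_thresh=5.0, min_neighbors=4):
    """Return, per image column, the row of the separation line (-1 if none)."""
    diff = np.abs(corr1.astype(np.float32) - corr2.astype(np.float32))
    similar = diff < sim_thresh                     # pixels where both corrections agree
    h, w = similar.shape
    keep = np.zeros_like(similar)
    # Remove discrete points: keep a mark only if enough 8-neighbours are also marked.
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            if similar[v, u] and similar[v-1:v+2, u-1:u+2].sum() - 1 >= min_neighbors:
                keep[v, u] = True
    line = np.full(w, -1, dtype=int)
    for u in range(w):
        rows = np.flatnonzero(keep[:, u])
        if rows.size:
            line[u] = int(np.median(rows))          # one separation row per column
    return line
```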
Step S44: and extracting a part on the separation line in the first correction image and a part below the separation line in the second correction image, and then splicing the part and the part to obtain a third correction image.
In this step, in most application scenarios the part of the ground that is photographed is usually located at the bottom of the image. It can therefore be considered that the image above the separation line obtains a better calibration result with the vertical reference plane, while the image below the separation line obtains a better calibration result with the horizontal reference plane. That is, in the first correction image the part above the separation line is better calibrated, and in the second correction image the part below the separation line is better calibrated; splicing these two parts yields a third correction image with a better overall calibration result. The third correction image is used for the final three-dimensional reconstruction.
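Step S44 can then be sketched as a column-wise splice of the two correction images around that line; in this illustration, columns without a detected line fall back to the first correction image, which is an assumption rather than the patent's stated behaviour.

```python
import numpy as np

def stitch_corrections(corr1, corr2, line):
    """Step S44: part of corr1 above the separation line plus part of corr2
    below it, spliced into the third correction image."""
    h, w = corr1.shape[:2]
    out = corr2.copy()
    for u in range(w):
        split = line[u] if line[u] >= 0 else h       # no line: take corr1 for the column
        out[:split, u] = corr1[:split, u]            # above the line: vertical-plane correction
    return out
```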
In some embodiments, step S4 includes:
and S41, correcting the shot images according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane respectively to obtain a first correction image and a second correction image respectively.
And S42, comparing the first correction image with the second correction image to obtain all pixel points with high similarity.
Step S43: and removing discrete points in all the pixel points with high similarity to obtain a separation line.
Step S45: and comparing the parts of the first correction image and the second correction image on the same side of the separation line to obtain an image with a good correction effect, and extracting the corresponding parts.
In this step, considering that in some application scenarios it cannot be directly determined which part has the better correction effect, a judgment is needed. By comparing the parts of the two images on the same side of the separation line, the image with the better correction effect can be found. For example, the portion above the separation line in the first correction image is compared with the portion above the separation line in the second correction image; if the correction effect of the portion above the separation line in the first correction image is better, that portion of the first correction image is extracted.
Step S46: and extracting the corresponding part in the non-extracted images in the first correction image and the second correction image from the part at the other side of the separation line, and combining the extracted part with the part extracted in the step S45 to obtain a third correction image.
In this step, since part of the information of one of the first and second correction images has already been extracted in step S45, the other side of the image whose information was not extracted is considered to have the better correction effect there, so it needs to be extracted and combined with the part extracted in step S45. Continuing the example of step S45: since the information of the second correction image was not extracted, the portion below the separation line in the second correction image is extracted, and the portion above the separation line in the first correction image is then combined with the portion below the separation line in the second correction image to obtain the third correction image.
In this embodiment, by processing the vertical reference plane parameters and the camera intrinsic parameters, and by using parameters already known in the existing system, the dual-reference-plane parameters can be obtained without re-calibrating the camera or performing additional work. This greatly reduces the early calibration workload and the matching calculation amount and facilitates the popularization and application of this embodiment.
In this embodiment, the parameters of the horizontal reference plane are obtained through data conversion and processing. Compared with the scheme of shooting a calibration image on the horizontal plane, this not only saves the shooting workload but also removes the step of matching the horizontal reference plane against the vertical reference plane, so each speckle efficiently obtains its corresponding points on the two reference planes. The calculation amount is greatly reduced, the hardware requirements are lowered, and the response speed is improved.
In this embodiment, the images are processed using both the vertical reference plane and the horizontal reference plane, so the processing is finer and the correction effect better; this is especially effective for low-angle shooting scenarios such as a sweeping robot.
In this embodiment, the dual reference planes are used to perform matching twice, yielding two matching results; the redundant matching results are then optimized through a fusion strategy, improving the final matching result. This solves the problems of poor matching, incomplete three-dimensional reconstruction data and errors in the special installation scenario of the sweeping robot.
Fig. 4 is a flow chart of steps of another monocular structured light stereo matching method based on dual reference planes in an embodiment of the present utility model. As shown in fig. 4, another monocular structured light stereo matching method based on dual reference planes provided in the embodiment of the present utility model is different from the foregoing embodiment, and further includes, between step S3 and step S4:
step S5: according to the coordinate pairs of the vertical reference plane and the horizontal reference plane, solving a homography matrix between the two planes, and further solving the corresponding relation between any point coordinate on the horizontal reference plane image and the vertical reference plane.
In the step, the information obtained by the speckle is expanded to obtain homography matrixes of a vertical reference plane and a horizontal reference plane, so that the corresponding relation between any point coordinates of the two planes is obtained. Accordingly, in step S4, the photographed image is corrected according to the information of the vertical reference plane and the horizontal reference plane.
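A minimal sketch of step S5 using the standard direct linear transform (DLT) is given below; it assumes the matched coordinate pairs are available as arrays and makes no claim about the exact solver used in the patent.

```python
import numpy as np

def fit_homography(pts_h, pts_v):
    """Estimate H mapping horizontal-reference-plane coordinates (x, y) to
    vertical-reference-plane coordinates (u, v) from matched pairs."""
    A = []
    for (x, y), (u, v) in zip(pts_h, pts_v):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, x, y):
    """Correspondence for an arbitrary point on the horizontal reference plane."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```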
Compared with the previous embodiment, the plane information obtained in this embodiment holds for any point and no longer depends on the light spots, so it is better suited to scenes with low spot density or to large scenes. The correction is finer, small-sized objects can be reconstructed better, the correction and reconstruction effects are improved, and the judgment of the channel and objects ahead is more accurate.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present utility model. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the utility model. Thus, the present utility model is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present utility model. It is to be understood that the utility model is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the utility model.

Claims (6)

1. An intelligent sweeping robot is characterized by comprising a robot body, a depth camera and a processor; the robot body comprises a vertical baffle plate for pushing the object to a designated place;
the depth camera is arranged on the side face of the robot body;
the depth camera includes a light projector and a light receiver;
the light projector is used for projecting structured light or floodlight onto a target scene;
the light receiver is used for receiving the structured light or floodlight reflected by any object in the target scene and generating an acquisition image;
the processor is used for controlling the movement of the robot body and correcting and reconstructing the acquired image.
2. The intelligent sweeping robot of claim 1, wherein the processor is configured to correct the acquired image based on a horizontal reference plane and a vertical reference plane.
3. The intelligent sweeping robot according to claim 2, wherein objects can be pushed by the vertical baffle.
4. The intelligent sweeping robot of claim 1, wherein the vertical baffle can be lifted.
5. The intelligent sweeping robot according to claim 1, wherein the number of light beams in the structured light or floodlight is between two and several thousand beams.
6. The intelligent sweeping robot of claim 1, wherein the field angle of the depth camera is between 100 ° and 110 °.
CN202221020505.0U 2022-04-29 2022-04-29 Intelligent sweeping robot Active CN219126187U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202221020505.0U CN219126187U (en) 2022-04-29 2022-04-29 Intelligent sweeping robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202221020505.0U CN219126187U (en) 2022-04-29 2022-04-29 Intelligent sweeping robot

Publications (1)

Publication Number Publication Date
CN219126187U true CN219126187U (en) 2023-06-06

Family

ID=86561371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202221020505.0U Active CN219126187U (en) 2022-04-29 2022-04-29 Intelligent sweeping robot

Country Status (1)

Country Link
CN (1) CN219126187U (en)

Similar Documents

Publication Publication Date Title
WO2021031427A1 (en) Sweeping robot and automated control method for sweeping robot
CN111669566B (en) Imager for detecting visible and infrared projection patterns
CN110325938B (en) Electric vacuum cleaner
KR101950558B1 (en) Pose estimation apparatus and vacuum cleaner system
WO2018087952A1 (en) Electric vacuum cleaner
WO2019007038A1 (en) Floor sweeping robot, floor sweeping robot system and working method thereof
TWI664948B (en) Electric sweeper
CN111405862B (en) Electric vacuum cleaner
Sappa et al. An efficient approach to onboard stereo vision system pose estimation
WO2022088611A1 (en) Obstacle detection method and apparatus, electronic device, storage medium, and computer program
CN110838144B (en) Charging equipment identification method, mobile robot and charging equipment identification system
CN110325089B (en) Electric vacuum cleaner
CN108340405B (en) Robot three-dimensional scanning system and method
JP2004028727A (en) Monitoring system, monitoring method, distance correction device for the monitoring system, and distance correction method
WO2022135556A1 (en) Cleaning robot and cleaning control method therefor
CN219126187U (en) Intelligent sweeping robot
CN115381354A (en) Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment
CN113313089A (en) Data processing method, device and computer readable storage medium
JP2004171189A (en) Moving object detection device, moving object detection method and moving object detection program
CN219126188U (en) High-matching-degree sweeping robot
CN116998948A (en) Intelligent sweeping robot
JP2018196513A (en) Vacuum cleaner
CN117011361A (en) High-matching-degree sweeping robot
JP2007195061A (en) Image processor
CN115342800A (en) Map construction method and system based on trinocular vision sensor

Legal Events

Date Code Title Description
GR01 Patent grant