CN117011361A - High-matching-degree sweeping robot - Google Patents

High-matching-degree sweeping robot

Info

Publication number
CN117011361A
Authority
CN
China
Prior art keywords
reference plane
image
matching
sweeping robot
robot
Prior art date
Legal status
Pending
Application number
CN202210465580.6A
Other languages
Chinese (zh)
Inventor
刘勖
黄龙祥
汪博
朱力
吕方璐
Current Assignee
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd
Priority to CN202210465580.6A
Publication of CN117011361A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24 Floor-sweeping machines, motor-driven
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10052 Images from lightfield camera
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 10/00 Energy generation through renewable energy sources
    • Y02E 10/50 Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The high-matching-degree sweeping robot comprises a robot body, a structured light camera and a processor; the structured light camera is arranged on a side surface of the robot body; the structured light camera includes a light projector and a light receiver; the light projector projects lattice structured light onto a target scene; the light receiver receives the lattice structured light reflected by any object in the target scene and generates an acquired image; the processor corrects the acquired image according to a horizontal reference plane and a vertical reference plane. By calibrating the image against dual reference planes, the application solves the poor region-matching performance of monocular structured light caused by the special mounting of sweeping robots and the like, and thereby improves their obstacle avoidance and navigation.

Description

High-matching-degree sweeping robot
Technical Field
The application relates to sweeping robots, and in particular to a high-matching-degree sweeping robot.
Background
The sweeping robot is a type of intelligent household appliance that can automatically clean the floor of a room with a certain degree of artificial intelligence. It generally combines brushing and vacuuming, sucking floor debris into its onboard garbage box to complete floor cleaning.
Obstacle avoidance is an important function of an intelligent sweeping robot. Common obstacle avoidance schemes include monocular RGB cameras, line structured light and monocular structured light; among these, the schemes based on 3D vision better solve obstacle identification in many home scenes.
Because of the structural constraints of a sweeping robot, the 3D camera is often mounted at a low height above the ground, with the angle between the camera's optical axis and the ground normal close to 90 degrees. This special mounting often degrades the matching algorithm in monocular structured light, producing many mismatched or unmatched regions. The poor matching in turn yields missing or erroneous depth data, which hinders obstacle avoidance and navigation based on stereoscopic vision. The same problem exists in smart cars and mobile robots with low-mounted cameras, and the prior art offers no good scheme for correcting such a large angle difference.
Disclosure of Invention
Therefore, the application calibrates the image with dual reference planes, solving the poor region-matching performance of monocular structured light caused by the special mounting of sweeping robots and the like, and thereby improving their obstacle avoidance and navigation.
The application provides a high-matching-degree sweeping robot, comprising a robot body, a structured light camera and a processor; the structured light camera is arranged on a side surface of the robot body;
the structured light camera includes a light projector and a light receiver;
the light projector is used for projecting lattice structured light onto a target scene;
the light receiver is used for receiving the lattice structured light reflected by any object in the target scene and generating an acquired image;
the processor is used for correcting the acquired image according to a horizontal reference plane and a vertical reference plane.
Optionally, the processor can also generate a depth image from the corrected image, and judge the volume of an object from the horizontal reference plane and the depth image.
Optionally, one side of the robot body has a vertical plane, which is used to push an object to a designated place when the object's volume is within a certain range.
Optionally, the processor can also generate a depth image from the corrected image and judge whether the robot can pass through.
Optionally, when the processor corrects the acquired image according to the horizontal reference plane and the vertical reference plane, the method comprises the following steps:
Step S1: according to the calibration information of the vertical reference plane and the camera intrinsics, obtain the spatial coordinates (x, y, z) of each speckle; the calibration information of the vertical reference plane comprises the image coordinates (u_ref, v_ref) of the speckles and the reference plane distance z_ref.
Step S2: based on the spatial coordinates of each speckle, the positional relationship (t_x, t_y, t_z) between the laser and the camera, and the imaginary horizontal plane (a, b, c, d), obtain the projection coordinates (x_g, y_g, z_g) of each speckle on the ground.
Step S3: re-project the speckles on the ground back into the image coordinate system to obtain the calibration information of the horizontal reference plane; the calibration information of the horizontal reference plane comprises the image coordinates (u_g, v_g) of the speckles and the per-speckle reference distance z_g.
Step S4: correct the captured image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane.
Optionally, between the step S3 and the step S4 the method further includes:
Step S5: from the coordinate pairs on the vertical and horizontal reference planes, solve the homography matrix between the two planes, and thereby obtain, for any point coordinate on the horizontal-reference-plane image, its corresponding point on the vertical reference plane.
Optionally, in the step S1 the spatial coordinates are computed as:
x = (u_ref - c_x) · z_ref / f_x, y = (v_ref - c_y) · z_ref / f_y, z = z_ref
where c_x, c_y are the principal point offsets and f_x, f_y the focal lengths of the camera.
Optionally, in the step S2 the ground projection is computed as:
(x_g, y_g, z_g) = (t_x, t_y, t_z) + k · (x - t_x, y - t_y, z - t_z)
wherein
k = -(a·t_x + b·t_y + c·t_z + d) / (a·(x - t_x) + b·(y - t_y) + c·(z - t_z))
Optionally, in the step S3 the re-projection is computed as:
u_g = f_x · x_g / z_g + c_x, v_g = f_y · y_g / z_g + c_y, with the per-speckle reference distance z_g.
optionally, the step S4 includes:
step S41: correcting the shot image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane respectively to obtain a first correction image and a second correction image respectively;
step S42: comparing the first correction image with the second correction image to obtain all pixel points with high similarity;
step S43: removing discrete points in all pixel points with high similarity to obtain separation lines;
step S44: and extracting a part on the separation line in the first correction image and a part below the separation line in the second correction image, and then splicing the part and the part to obtain a third correction image.
Compared with the prior art, the application has the following beneficial effects:
The application derives the dual-reference-plane parameters from quantities already known in the existing system, namely the vertical-reference-plane calibration and the camera intrinsics, without recalibrating the camera or any extra work, greatly reducing the up-front calibration workload and the matching computation and favoring popularization of the application.
Compared with a scheme that captures a calibration image of the horizontal plane, the application saves that capture and skips the step of matching the horizontal reference plane against the vertical reference plane; every speckle efficiently obtains its corresponding point on both reference planes, which greatly reduces computation, lowers hardware requirements and improves response speed.
The application processes the image along the two dimensions of the vertical reference plane and the horizontal reference plane, giving finer processing and a better correction, with particularly good results in low-angle scenes such as those captured by a sweeping robot.
The application performs two matchings against the dual reference planes, obtains two matching results, and fuses the redundant results through a fusion strategy to optimize the final matching, solving the poor matching and the incomplete or erroneous three-dimensional reconstruction data found in the special mounting scenario of the sweeping robot.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
fig. 1 is a schematic diagram of a working principle of a sweeping robot in an embodiment of the present application;
FIG. 2 is a view of horizontal and vertical speckle images taken by a sweeping robot in an embodiment of the application;
FIG. 3 is a flow chart of steps of a monocular structured light stereo matching method based on dual reference planes in an embodiment of the present application;
fig. 4 is a flow chart of steps of another monocular structured light stereo matching method based on dual reference planes in an embodiment of the present application.
In the figures: 100 is the robot body; 200 is an object; 1 is the light projector; 2 is the light receiver.
Detailed Description
The present application will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the present application, and how it solves the above technical problems, is described in detail below through specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some of them. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a working principle of a sweeping robot according to an embodiment of the present application, and as shown in fig. 1, the sweeping robot provided by the embodiment of the present application includes a robot body 100, a structured light camera, and a processor; the structured light camera is provided on a side of the robot body 100;
the structured light camera comprises a light projector 1 and a light receiver 2;
the light projector 1 is used for projecting lattice structured light onto a target scene;
the light receiver 2 is configured to receive the lattice structured light reflected by any object 200 in the target scene and generate an acquired image;
the processor is used for correcting the acquired image according to the horizontal reference plane and the vertical reference plane.
In this embodiment, each beam of the lattice structured light has a high power density and a long projection distance, so the distribution of objects 200 far from the robot can be obtained, which facilitates the robot's simultaneous localization and mapping.
In some embodiments, the number of beams in the lattice structured light ranges from two to several thousand, for example from 2 to 1,000 beams.
The field angle of the structured light camera is between 100 ° and 110 °.
The processor may also generate a depth image from the corrected image and judge the volume of an object from the horizontal reference plane and the depth image. The volume is estimated from the object together with the shadow it casts on the horizontal reference plane under the laser projection. Because the robot's single-pass suction volume is limited, the volume of an object must be judged in advance to prevent clogging of the suction port, and the robot must detour if the volume is too large. Here 'too large' refers to the volume of an object regarded as garbage, which differs from the robot's obstacle-avoidance function: an obstacle is not only large in volume but also tall, whereas an over-sized piece of garbage is not treated as an obstacle and, in the prior art, would be swept up directly, risking a clogged suction port. The prior art typically answers this by increasing suction power, yet in daily life objects large enough to clog the suction port still occur, so the volume must be judged in advance to guarantee the robot's operational safety.
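By way of illustration only, the following Python sketch shows one way such a volume judgment could be implemented from a depth image and a known ground plane, by integrating each pixel's height above the plane over that pixel's ground footprint; the function name, threshold and plane convention are assumptions for the sketch, not details taken from the application.

```python
import numpy as np

def estimate_object_volume(depth, fx, fy, cx, cy, plane, height_thresh=0.01):
    """Rough volume (m^3) of material rising above a known ground plane.

    depth : HxW depth image in metres (0 marks missing data)
    plane : (a, b, c, d) of the ground, a*x + b*y + c*z + d = 0,
            with (a, b, c) a unit normal pointing toward the camera
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel into the camera frame (pinhole model)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    a, b, c, d = plane
    height = a * x + b * y + c * z + d             # signed height above ground
    mask = (depth > 0) & (height > height_thresh)  # pixels on the object
    # Each pixel sees a ground patch of roughly (z/fx) * (z/fy) square metres
    pixel_area = (z / fx) * (z / fy)
    return float(np.sum(height[mask] * pixel_area[mask]))
```

The robot could then detour, or push the object aside, whenever the returned value exceeds the suction port's single-pass capacity.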
The robot body 100 has a vertical plane on one side. For articles that might clog the suction port, the sweeping robot pushes them to a designated place with this vertical plane instead of sucking them into the body. Small garbage is thus sucked into the body while large garbage is pushed to the designated place, keeping the floor clean while protecting the sweeping robot.
The processor may also generate a depth image from the corrected image and judge whether the robot can pass through. The width of the channel ahead is calculated from the depth map to judge whether the sweeping robot can pass smoothly, which is especially useful in scenes with cluttered objects. The relevant width is the minimum width of the channel over the robot's height as it passes through.
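As a minimal sketch of that judgment, assuming the depth image has already been converted to 3D points in a ground-aligned frame (x lateral, y forward, z up; the frame and names are ours), the free channel width can be taken as the gap between the nearest blocking points on either side of the robot's heading:

```python
import numpy as np

def channel_width(points, robot_height):
    """points: Nx3 obstacle points in a ground-aligned frame (x lateral,
    y forward, z up). Only points lower than robot_height block passage.
    Returns the width of the free gap straddling the heading (x = 0)."""
    blocking = points[points[:, 2] < robot_height]
    left = blocking[blocking[:, 0] < 0, 0]
    right = blocking[blocking[:, 0] > 0, 0]
    x_left = left.max() if left.size else -np.inf
    x_right = right.min() if right.size else np.inf
    return x_right - x_left

# The robot passes when channel_width(...) exceeds its body width plus a margin.
```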
In existing camera calibration technology, a reference image is usually captured in advance against a vertical plane; it contains the random speckles cast by the laser projector, and the actually captured image is calibrated against this reference image. In the images captured by a sweeping robot, however, a relatively large proportion of the speckles falls not on a vertical plane but on the horizontal floor. Because the speckle topology on the horizontal surface differs considerably from that on the vertical surface, calibration against the vertical plane alone gives a poor result for the floor region.
In fig. 2, the upper-left image is the speckle pattern formed on the horizontal plane of fig. 1, the upper-right image is the speckle pattern on the vertical plane of fig. 1, the lower-left image is one speckle region of the upper-left image, and the lower-right image is the corresponding region in the upper-right image. As shown in fig. 2, because of perspective imaging, compared with the vertical plane the near speckles on the horizontal plane shift left along the epipolar direction and the far speckles shift right, producing a topological difference. Relative to the vertical reference plane, speckles at the same position on the horizontal reference plane translate horizontally, so the overall speckle topology changes. Such topological differences make the contents of corresponding region blocks inconsistent and lower the matching similarity, i.e., matching degrades at far and near positions. Since the horizontal reference plane carries only about half as many speckles as the vertical reference plane, and the captured content contains other objects besides the floor, the matching results of the two reference planes are combined so that the final output matching is accurate and complete.
Fig. 3 is a step flow chart of a monocular structured light stereo matching method based on a dual reference plane in an embodiment of the present application. As shown in fig. 3, the monocular structured light stereo matching method based on the dual reference planes provided by the embodiment of the application includes the following steps:
step S1: according to the calibration information of the vertical reference plane and the internal parameters of the camera, obtaining the space coordinate of each scattered spot
In this step, the coordinates of each scattered spot in the three-dimensional space are calculated based on the known information. The calibration information of the vertical reference plane comprises the image coordinates of scattered spotsDistance from reference plane z ref . The camera internal parameters are known at the time of leaving the factory of the camera, c x 、c y For the origin translation amount, f x 、f y The focal length is the focal length, wherein x and y are the corresponding values in the x and y directions respectively. The image coordinates are coordinates in the obtained two-dimensional image. The reference plane distance is the distance of the vertical reference plane from the camera. The spatial coordinates of each scattered spot are calculated according to the following formula:
in monocular structured light, in order to achieve triangularization of a pair of homologous matching points in a shooting scene to obtain spatial information, a reference image is usually required to be shot in advance, then in three-dimensional reconstruction, the shot image (i.e., a search image) is matched with the reference image pixel by pixel to determine a corresponding relationship, so that depth information of each pixel can be triangulated and calculated. The matching algorithm of the embodiment can adopt block matching, which is a method for matching region blocks, and the similarity of the region blocks on the search graph and the reference graph is respectively compared in a predefined parallax range. More specifically, the similarity of two region blocks is evaluated based on a pixel-by-pixel comparison of some cost penalty (e.g., SAD) within a window centered around the current search point. Therefore, whether the contents in the two area blocks are consistent determines the matching similarity.
Step S2: based on the spatial coordinates (x, y, z) of each speckle, the positional relationship (t_x, t_y, t_z) between the laser and the camera, and the imaginary horizontal plane (a, b, c, d), obtain the projection coordinates (x_g, y_g, z_g) of each speckle on the ground.
In this step, the projection coordinates of each speckle as projected onto the ground are obtained. The spatial coordinates of the speckles come from step S1, the positional relationship between the laser and the camera is fixed and known, and four values (a, b, c, d) represent the imaginary horizontal plane a·x + b·y + c·z + d = 0. The projection of each speckle on the ground is calculated by extending the laser ray through the speckle to the plane:
(x_g, y_g, z_g) = (t_x, t_y, t_z) + k · (x - t_x, y - t_y, z - t_z)
wherein
k = -(a·t_x + b·t_y + c·t_z + d) / (a·(x - t_x) + b·(y - t_y) + c·(z - t_z))
step S3: and re-projecting scattered spots on the ground back to the image coordinate system to obtain calibration information of the horizontal reference plane.
In this step, the calibration information of the horizontal reference plane includes the image coordinates of the speckle patternDistance from reference point->The calibration information of the horizontal reference plane is obtained through calculation according to the following formula:
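Putting steps S1 to S3 together, the following sketch derives horizontal-reference-plane calibration data from the existing vertical-plane calibration without any new capture; the function and parameter names are ours and the formulas follow the reconstruction above.

```python
import numpy as np

def horizontal_from_vertical(u_ref, v_ref, z_ref, fx, fy, cx, cy, t, plane):
    """u_ref, v_ref : speckle image coordinates on the vertical reference plane
    z_ref          : distance of the vertical reference plane from the camera
    t              : (tx, ty, tz), laser position relative to the camera
    plane          : (a, b, c, d) of the imaginary ground, a*x+b*y+c*z+d = 0
    Returns image coordinates and distances of the speckles on the ground."""
    # S1: back-project each speckle onto the vertical reference plane
    x = (u_ref - cx) * z_ref / fx
    y = (v_ref - cy) * z_ref / fy
    z = np.full_like(x, z_ref, dtype=np.float64)
    # S2: extend the laser ray through each speckle until it meets the ground
    a, b, c, d = plane
    tx, ty, tz = t
    dx, dy, dz = x - tx, y - ty, z - tz
    k = -(a * tx + b * ty + c * tz + d) / (a * dx + b * dy + c * dz)
    xg, yg, zg = tx + k * dx, ty + k * dy, tz + k * dz
    # S3: re-project the ground points back into the image
    u_g = fx * xg / zg + cx
    v_g = fy * yg / zg + cy
    return u_g, v_g, zg
```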
step S4: and correcting the shot image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane.
In the step, the shot image can be corrected through the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane together, so that a better correction effect is achieved.
In some embodiments, this step includes:
Step S41: correct the captured image according to the calibration information of the vertical reference plane and of the horizontal reference plane respectively, obtaining a first corrected image and a second corrected image.
In this step, the first and second corrected images are the results of calibrating the captured image against the vertical-reference-plane and horizontal-reference-plane calibration information respectively. In an image captured by the sweeping robot, each calibration gives a good correction over only part of the image.
Step S42: compare the first corrected image with the second corrected image to obtain all pixel points of high similarity.
In this step, the first and second corrected images differ because of the different calibration information; but in three-dimensional space the horizontal reference plane intersects the vertical reference plane, so at the intersection of the two planes both corrected images achieve a good correction, i.e., their similarity is highest there.
Step S43: remove discrete points from the pixel points of high similarity to obtain a separation line.
In this step, some stray points among the high-similarity pixels also show high similarity; they usually appear as discrete points, so removing them yields a continuous separation line. The separation line is the line where the horizontal plane intersects the vertical plane. Note that it is generally not a straight line but a polyline, broken line or even an arc, depending mainly on the shape, size and position of the articles placed on the floor.
Step S44: extract the part above the separation line in the first corrected image and the part below the separation line in the second corrected image, and splice the two parts to obtain a third corrected image.
In this step, in most application scenarios the captured floor occupies the bottom of the image, so the image above the separation line is better calibrated by the vertical reference plane and the image below it by the horizontal reference plane. That is, the part above the separation line in the first corrected image and the part below it in the second corrected image are both well calibrated; splicing these two parts yields a third corrected image with a good overall calibration, which is used for the final three-dimensional reconstruction.
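A simplified sketch of steps S41 to S44 on grayscale images follows; taking the per-column median row of high-similarity pixels stands in for the patent's discrete-point removal, and the similarity threshold is an assumption.

```python
import numpy as np

def fuse_corrections(img_v, img_h, diff_thresh=20):
    """img_v / img_h: captured image corrected against the vertical /
    horizontal reference plane (HxW grayscale). Returns the stitched
    third corrected image and the separation line (one row per column)."""
    h, w = img_v.shape
    # S42: pixels where the two corrections agree closely
    similar = np.abs(img_v.astype(np.int32) - img_h.astype(np.int32)) < diff_thresh
    # S43: per-column median row of similar pixels discards isolated stray
    # points and yields a (generally non-straight) separation line
    line = np.empty(w, dtype=np.int32)
    for col in range(w):
        rows = np.nonzero(similar[:, col])[0]
        line[col] = int(np.median(rows)) if rows.size else h // 2
    # S44: vertical-plane correction above the line, horizontal below it
    fused = img_v.copy()
    below = np.arange(h)[:, None] > line[None, :]
    fused[below] = img_h[below]
    return fused, line
```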
In some embodiments, step S4 includes:
Step S41: correct the captured image according to the calibration information of the vertical reference plane and of the horizontal reference plane respectively, obtaining a first corrected image and a second corrected image.
Step S42: compare the first corrected image with the second corrected image to obtain all pixel points of high similarity.
Step S43: remove discrete points from the pixel points of high similarity to obtain a separation line.
Step S45: compare the parts of the first and second corrected images on the same side of the separation line, determine which is better corrected, and extract the corresponding part.
In this step, in some application scenarios it cannot be directly determined which part is better corrected, so a judgment is needed. Comparing the parts of the two images on the same side of the separation line identifies the better-corrected one. For example, the part above the separation line in the first corrected image is compared with the same part in the second corrected image; if the first corrected image is better corrected there, that part of the first corrected image is extracted.
Step S46: from the other side of the separation line, extract the corresponding part of whichever of the first and second corrected images was not used in step S45, and combine it with the part extracted in step S45 to obtain a third corrected image.
In this step, since step S45 extracted part of one of the two corrected images, the remaining part is taken from the other image, whose information was not yet used, and combined with the part from step S45. Continuing the example of step S45, since the second corrected image was not used there, the part below the separation line in the second corrected image is extracted and combined with the part above the separation line in the first corrected image to obtain the third corrected image.
Fig. 4 is a flow chart of steps of another monocular structured light stereo matching method based on dual reference planes in an embodiment of the present application. As shown in fig. 4, another monocular structured light stereo matching method based on dual reference planes provided in the embodiment of the present application is different from the foregoing embodiment, and further includes, between step S3 and step S4:
step S5: according to the coordinate pairs of the vertical reference plane and the horizontal reference plane, solving a homography matrix between the two planes, and further solving the corresponding relation between any point coordinate on the horizontal reference plane image and the vertical reference plane.
In the step, the information obtained by the speckle is expanded to obtain homography matrixes of a vertical reference plane and a horizontal reference plane, so that the corresponding relation between any point coordinates of the two planes is obtained. Accordingly, in step S4, the photographed image is corrected according to the information of the vertical reference plane and the horizontal reference plane.
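A sketch of step S5 under the usual direct linear transform (DLT), using the speckle coordinate pairs produced in step S3; the helper names are ours.

```python
import numpy as np

def plane_homography(pts_h, pts_v):
    """Least-squares homography H mapping horizontal-reference-plane image
    points to their vertical-reference-plane counterparts (DLT)."""
    rows = []
    for (u, v), (up, vp) in zip(pts_h, pts_v):
        rows.append([-u, -v, -1, 0, 0, 0, u * up, v * up, up])
        rows.append([0, 0, 0, -u, -v, -1, u * vp, v * vp, vp])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, u, v):
    """Vertical-plane point corresponding to (u, v) on the horizontal plane."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

At least four non-collinear pairs are needed; with the hundreds of speckles available the over-determined system is solved in a least-squares sense.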
Compared with the previous embodiment, the plane information obtained in this embodiment covers arbitrary points and no longer depends on the projected spots, so this embodiment adapts better to scenes with sparse spots or of large extent. The correction is finer, small objects are reconstructed better, the correction and reconstruction results improve, and the judgment of the channel and the objects ahead is more accurate.
In this specification, the embodiments are described progressively; each embodiment focuses on its differences from the others, and for identical or similar parts reference may be made between embodiments. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the application.

Claims (10)

1. A high-matching-degree sweeping robot, characterized by comprising a robot body, a structured light camera and a processor; the structured light camera is arranged on a side surface of the robot body;
the structured light camera includes a light projector and a light receiver;
the light projector is used for projecting lattice structured light onto a target scene;
the light receiver is used for receiving the lattice structured light reflected by any object in the target scene and generating an acquired image;
the processor is used for correcting the acquired image according to a horizontal reference plane and a vertical reference plane.
2. The high-matching-degree sweeping robot of claim 1, wherein the processor is further configured to generate a depth image from the corrected image and judge the volume of an object from the horizontal reference plane and the depth image.
3. The high-matching-degree sweeping robot of claim 2, wherein one side of the robot body has a vertical plane for pushing an object to a designated place when the volume of the object is within a certain range.
4. The high-matching-degree sweeping robot of claim 1, wherein the processor is further configured to generate a depth image from the corrected image and judge whether the robot can pass through.
5. The high-matching-degree sweeping robot of claim 1, wherein the processor corrects the acquired image according to the horizontal reference plane and the vertical reference plane through the following steps:
Step S1: according to the calibration information of the vertical reference plane and the camera intrinsics, obtaining the spatial coordinates (x, y, z) of each speckle, wherein the calibration information of the vertical reference plane comprises the image coordinates (u_ref, v_ref) of the speckles and the reference plane distance z_ref;
Step S2: based on the spatial coordinates of each speckle, the positional relationship (t_x, t_y, t_z) between the laser and the camera, and the imaginary horizontal plane (a, b, c, d), obtaining the projection coordinates (x_g, y_g, z_g) of each speckle on the ground;
Step S3: re-projecting the speckles on the ground back into the image coordinate system to obtain the calibration information of the horizontal reference plane, wherein the calibration information of the horizontal reference plane comprises the image coordinates (u_g, v_g) of the speckles and the per-speckle reference distance z_g;
Step S4: correcting the captured image according to the calibration information of the vertical reference plane and the calibration information of the horizontal reference plane.
6. The high-matching-degree sweeping robot of claim 5, wherein between the step S3 and the step S4 the method further comprises:
Step S5: from the coordinate pairs on the vertical and horizontal reference planes, solving the homography matrix between the two planes, and thereby obtaining, for any point coordinate on the horizontal-reference-plane image, its corresponding point on the vertical reference plane.
7. The high-matching-degree sweeping robot of claim 5, wherein in the step S1:
x = (u_ref - c_x) · z_ref / f_x, y = (v_ref - c_y) · z_ref / f_y, z = z_ref,
c_x, c_y being the principal point offsets and f_x, f_y the focal lengths of the camera.
8. The high-matching-degree sweeping robot of claim 5, wherein in the step S2:
(x_g, y_g, z_g) = (t_x, t_y, t_z) + k · (x - t_x, y - t_y, z - t_z),
wherein k = -(a·t_x + b·t_y + c·t_z + d) / (a·(x - t_x) + b·(y - t_y) + c·(z - t_z)).
9. The high-matching-degree sweeping robot of claim 5, wherein in the step S3:
u_g = f_x · x_g / z_g + c_x, v_g = f_y · y_g / z_g + c_y.
10. The high-matching-degree sweeping robot of claim 5, wherein the step S4 comprises:
Step S41: correcting the captured image according to the calibration information of the vertical reference plane and of the horizontal reference plane respectively, to obtain a first corrected image and a second corrected image;
Step S42: comparing the first corrected image with the second corrected image to obtain all pixel points of high similarity;
Step S43: removing discrete points from the pixel points of high similarity to obtain a separation line;
Step S44: extracting the part above the separation line in the first corrected image and the part below the separation line in the second corrected image, and splicing the two parts to obtain a third corrected image.
CN202210465580.6A (filed 2022-04-29) High-matching-degree sweeping robot; status: Pending; publication: CN117011361A

Priority Applications (1)

Application Number: CN202210465580.6A; Priority Date: 2022-04-29; Filing Date: 2022-04-29; Title: High-matching-degree sweeping robot

Applications Claiming Priority (1)

Application Number: CN202210465580.6A; Priority Date: 2022-04-29; Filing Date: 2022-04-29; Title: High-matching-degree sweeping robot

Publications (1)

Publication Number: CN117011361A; Publication Date: 2023-11-07

Family

ID=88571434

Family Applications (1)

Application Number: CN202210465580.6A; Title: High-matching-degree sweeping robot; Status: Pending

Country Status (1)

Country: CN; Publication: CN117011361A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination