CN114445494A - Image acquisition and processing method, image acquisition device and robot - Google Patents
Image acquisition and processing method, image acquisition device and robot
- Publication number
- CN114445494A (application CN202210028663.9A)
- Authority
- CN
- China
- Prior art keywords
- line laser
- image
- robot
- points
- camera
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1666—Avoiding collision or forbidden zones
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- General Physics & Mathematics (AREA)
- Robotics (AREA)
- Electromagnetism (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Manipulator (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses an image acquisition and processing method, an image acquisition device and a robot, wherein the method comprises the following steps: S1: causing the line laser to emit line laser light at different brightness levels, and acquiring environment images of the line laser at the different brightness levels through the camera; S2: performing visual positioning and obstacle information identification, respectively, based on the environment images with line laser; wherein the obstacle information includes distance information and orientation information. By controlling the line laser to alternate between bright and dark, the application improves the detection performance of the camera and reduces the interference of ambient light with the positioning and obstacle-avoidance detection functions.
Description
Technical Field
The invention relates to the field of intelligent robots, and in particular to an image acquisition and processing method, an image acquisition device and a robot.
Background
Positioning and obstacle avoidance are both strong demands for today's indoor robots: positioning enables better path planning for the robot, while obstacle avoidance prevents obstacles from hindering the robot's operation and makes the robot appear more intelligent. Robots currently on the market acquire environmental information through line lasers and cameras. However, existing systems emit line laser light at a constant brightness. When the robot is in a dark environment, the camera's automatic exposure lengthens the exposure time, the laser line becomes severely overexposed and hard to track, and the influence of ambient light on detection also increases. An existing remedy applies an inter-frame difference method, alternating one frame with the laser on and one frame with it off, but this halves the detection frame rate and reduces the detection efficiency of the robot.
Disclosure of Invention
To solve the above problems, the invention provides an image acquisition and processing method, an image acquisition device and a robot. The specific technical solution of the invention is as follows:
An image acquisition and processing method comprises the following steps: S1: causing the line laser to emit line laser light at different brightness levels, and acquiring environment images of the line laser at the different brightness levels through the camera; S2: performing visual positioning and obstacle information identification, respectively, based on the environment images with line laser; wherein the obstacle information includes distance information and orientation information.
Further, in step S1, causing the line laser to emit line laser light at different brightness levels and acquiring environment images of the line laser at the different brightness levels through the camera comprises the following step: the robot switches the power of the line laser during the blanking period of each frame of image acquired by the camera, so that the brightness of the line laser differs from one environment image acquired by the camera to the next.
Further, in step S2, implementing visual positioning based on the environment image with line laser comprises the following steps: acquiring two frames of images, extracting feature points from the second frame image, and acquiring the pose of the robot by integrating the IMU data and odometer readings between the two images; acquiring the epipolar lines of the feature points of the second frame image on the first frame image, based on those feature points and the robot pose; searching along each epipolar line for the point whose feature value matches that of the corresponding feature point in the second frame image, thereby obtaining the corresponding feature points in the first frame image; and computing the visual positioning pose between the two frames of images by minimizing the reprojection error over the matched feature points and the robot pose.
Further, selecting the feature points from the second frame image comprises the following steps: setting up a container for storing feature points; selecting corner points at positions in the second frame image where the change in pixel gray value exceeds a set threshold; taking the selected corner points as the feature points identified in the second frame image; and storing the identified feature points in the container.
Further, the minimization of the reprojection error comprises the following steps: performing a projection calculation with the matched feature points between the two frames of images and the robot pose to obtain the pixel values of the projected feature points; acquiring the differences between the pixel values of the feature points matched between the two frames and the pixel values of the projected feature points; and minimizing the sum of these differences to obtain the camera pose parameters and the coordinates of the three-dimensional space points of the feature points, thereby determining the visual positioning pose and realizing the visual positioning.
Further, in step S2, acquiring the orientation information of the obstacle based on the environment image with line laser comprises the following steps: the robot acquires the intrinsic parameters of the camera through calibration, and then obtains the light plane of the line laser relative to the camera from those intrinsic parameters; the robot tracks the line laser in the acquired image and, when the line laser is found not to lie on the ground, selects a plurality of points along the line laser at a specific spacing as calculation points, taking the end point of one end of the line laser as the starting point; the robot calculates the equation of the straight line passing through the center point of the camera and each calculation point, and then acquires the three-dimensional coordinates of the intersection of the straight line and the light plane based on the line equation and the light plane; and the orientation information of the obstacle is obtained from the three-dimensional coordinates of the intersections corresponding to the plurality of calculation points.
Further, acquiring the three-dimensional coordinates of the intersection of a straight line and the light plane based on the line equation and the light plane comprises the following steps: converting the line equation into a parametric equation, substituting the parametric equation into the equation of the light plane to obtain the parameter of the parametric equation, and substituting the parameter back into the parametric equation to obtain the three-dimensional coordinates of the intersection.
Further, in step S2, acquiring the distance information of the obstacle based on the environment image with line laser comprises the following steps: acquiring in advance the ratio between the position of the line laser on the image and the distance between the robot and the obstacle; acquiring the line laser in the environment image with line laser; and obtaining the distance between the robot and the obstacle, i.e., the distance information of the obstacle, from the position of the line laser on the image and the ratio.
An image acquisition device comprises a camera and a line laser module. The camera is arranged in front of the robot, either horizontally or tilted at a first preset angle; the line laser module is located above the camera, with its axis tilted downward at a second preset angle.
Furthermore, the line laser module comprises a line laser and N signal control switches; the signal control switches are connected in parallel with one another and in series with the line laser, and each signal control switch turns on or off according to the signal it receives, thereby switching the power of the line laser; wherein N is a natural number greater than or equal to 2.
A robot is provided with the above image acquisition device and executes the above image acquisition and processing method.
Compared with the prior art, the technical solution of the application uses a single camera to perform the positioning and obstacle-avoidance functions of the robot simultaneously, which reduces the production cost of the robot and the complexity of the robot system; by controlling the line laser to alternate between bright and dark, the detection performance of the camera is improved and the interference of ambient light with the positioning and obstacle-avoidance detection functions is reduced.
Drawings
FIG. 1 is a flow diagram of an image acquisition and processing method according to an embodiment of the present invention;
FIG. 2 is a schematic view of the intersection of a straight line and the light plane according to an embodiment of the present invention;
FIG. 3 is a schematic circuit diagram of a line laser module according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step fall within the scope of protection of the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, operations, and/or components, but do not preclude the presence or addition of one or more other features, operations, or components. All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
As shown in FIG. 1, in the image acquisition and processing method, the robot acquires images with line laser through a camera in order to identify obstacles and position itself. The mobile robot further comprises an IMU module, an odometer and other necessary sensors, as well as a laser radar or the like for constructing a cleaning map; these sensor modules operate and acquire data in the conventional manner. The method comprises the following steps: S1: the robot causes the line laser to emit line laser light at different brightness levels, and acquires environment images of the line laser at the different brightness levels through the camera; S2: the robot performs visual positioning and obstacle information identification, respectively, based on the environment images with line laser; wherein the obstacle information includes distance information and orientation information.
As an embodiment, in step S1, the robot causes the line laser to emit line laser light at different brightness levels and acquires environment images of the line laser at the different brightness levels through the camera, comprising the following step: the robot switches the power of the line laser during the blanking period of each frame of image acquired by the camera, so that the brightness of the line laser differs from one environment image acquired by the camera to the next. In the process of forming a realistic image, the projection transformation loses depth information, which often makes the image ambiguous. To remove such ambiguity, the hidden, invisible lines or surfaces must be eliminated during rendering; this is customarily called removing hidden lines and hidden surfaces, or simply blanking. The time during which the camera performs this blanking operation while acquiring an image is referred to as the blanking period. During the blanking period of each frame, the robot sends a control signal to the signal control switch to turn it on or off, which changes the power of the line laser and hence the brightness of the line laser it emits; the camera then acquires an environment image with line laser. Because the line laser is emitted in an alternating bright-dark pattern, the camera's automatic exposure does not lengthen the exposure time, so environment images in which the laser line is easy to track are obtained with a simple structure. Acquiring an environment image with line laser comprises the following steps: the robot drives the line laser to project a laser line, then drives the camera to obtain several environment images with line laser, and uses these images for visual positioning and obstacle information identification respectively. Obstacle identification is divided into computing the point cloud of the obstacle and computing the distance between the obstacle and the robot; combining the two yields the obstacle information.
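As an illustration of the frame-synchronized brightness control described above, the following Python sketch switches the laser power only inside each frame's blanking period and then grabs the frame. This is a minimal sketch under assumed hardware: the helpers `wait_for_blanking`, `set_switch` and `camera.grab`, and the mapping from brightness level to switch states, are hypothetical stand-ins, not interfaces disclosed by this application.

```python
import itertools

# Hypothetical hardware helpers; real camera/GPIO drivers would go here.
def wait_for_blanking(camera):
    """Block until the camera enters the blanking period of the current frame."""
    ...

def set_switch(index, on):
    """Drive signal control switch `index` (e.g. a transistor gate) on or off."""
    ...

# Assumed mapping from brightness level to the states of two control switches.
SWITCH_STATES = {"bright": (True, True), "dark": (True, False)}

def acquire_alternating(camera, n_frames):
    """Alternate laser brightness frame by frame, switching only while no
    frame is being exposed, so each environment image has a distinct level."""
    frames = []
    levels = itertools.cycle(["bright", "dark"])
    for _ in range(n_frames):
        level = next(levels)
        wait_for_blanking(camera)  # switch power only during blanking
        for i, on in enumerate(SWITCH_STATES[level]):
            set_switch(i, on)
        frames.append((level, camera.grab()))
    return frames
```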
As an embodiment, in step S2, the robot performs visual positioning based on the environment image with line laser, comprising the following steps. Two frames of images are acquired, feature points are extracted from the second frame image, and the pose of the robot is acquired by integrating the IMU data and odometer readings recorded when the two frames were captured; that is, the movement track of the robot is determined, which is a conventional robot function. The epipolar lines of the feature points of the second frame image on the first frame image are then acquired based on those feature points and the robot pose: from the movement track of the robot and the acquisition times of the two frames, the relative positions of the two images in space and the corresponding camera shooting centers are obtained, and the positions of the epipolar lines formed on the first frame image by the feature points of the second frame image are then found according to 2D-2D epipolar geometry. Points matching the feature values of the feature points of the second frame image are searched along the epipolar lines to obtain the corresponding feature points in the first frame image; that is, points corresponding to the feature points of the second frame image are selected on the epipolar lines by matching the similarity of the feature points between the two frames, or the distances between their descriptors. Finally, the visual positioning pose between the two frames is computed by minimizing the reprojection error over the matched feature points and the robot pose. Selecting the feature points from the second frame image comprises the following steps: a container for storing the feature points is set up; the container is a dedicated data structure, and detected corner points are usually stored in a vector of KeyPoint type. A detection template is then set, taking corner points or edge points as the feature points; corner points or edge points in the second frame image are detected according to the detection template by calling a function (corner or edge points can be selected at positions in the image where the pixel gray value changes strongly), and the identified corner or edge points are then stored in the container as feature points (see 14 Lectures on Visual SLAM: From Theory to Practice, 2nd ed., ch. 7, Gao Xiang et al., Publishing House of Electronics Industry, 2019-08). Of course, this is only one way of obtaining the feature points; they may also be obtained in other ways. The minimization of the reprojection error comprises the following steps: performing a projection calculation with the matched feature points between the two frames and the robot pose to obtain the pixel values of the projected feature points; acquiring the differences between the pixel values of the feature points matched between the two frames and the pixel values of the projected feature points; and minimizing the sum of these differences to obtain the camera pose parameters and the coordinates of the three-dimensional space points of the feature points, thereby determining the visual positioning pose and realizing the visual positioning.
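A minimal sketch of the corner extraction and epipolar search described above, assuming OpenCV and NumPy, a known intrinsic matrix K, and a relative pose (R, t) taken from the IMU/odometer integration; a patch SSD score stands in for the feature-value comparison, and all thresholds and sizes are illustrative.

```python
import cv2
import numpy as np

def epipolar_matches(img1, img2, K, R, t, half=3):
    """Match corners of frame 2 to frame 1 by sliding a patch along epilines."""
    # Corners at positions where the pixel gray value changes strongly
    corners = cv2.goodFeaturesToTrack(img2, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return []
    pts2 = corners.reshape(-1, 2)

    # Fundamental matrix from the known relative pose: F = K^-T [t]x R K^-1
    tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
    Kinv = np.linalg.inv(K)
    F = Kinv.T @ tx @ R @ Kinv

    # Epipolar line in image 1 for each feature point of image 2
    lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
    matches = []
    for (x2, y2), (a, b, c) in zip(pts2, lines1.reshape(-1, 3)):
        x2, y2 = int(x2), int(y2)
        ref = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
        if ref.shape != (2 * half + 1, 2 * half + 1) or abs(b) < 1e-6:
            continue
        best, best_pt = np.inf, None
        for x in range(half, img1.shape[1] - half):
            y = int(round(-(a * x + c) / b))   # y on the epipolar line
            if not (half <= y < img1.shape[0] - half):
                continue
            cand = img1[y - half:y + half + 1, x - half:x + half + 1]
            d = ref.astype(np.float32) - cand.astype(np.float32)
            score = float((d * d).sum())       # SSD patch similarity
            if score < best:
                best, best_pt = score, (x, y)
        if best_pt is not None:
            matches.append(((x2, y2), best_pt))
    return matches
```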
In minimizing the reprojection error, reprojection refers to the second projection. The first projection occurs when the camera takes a picture: a three-dimensional space point is projected onto the image. Some feature points are then triangulated, using geometric information (epipolar geometry) to construct triangles and determine the positions of the three-dimensional space points. Finally, the second projection, i.e., the reprojection, is performed using the calculated (not exact) coordinates of the three-dimensional points and the calculated (likewise not exact) camera pose. The reprojection error is the difference between the projection of the real three-dimensional space point on the image plane (i.e., the pixel point on the image) and the reprojection (i.e., the virtual pixel point obtained from the calculated values). For various reasons the two do not coincide exactly, that is, the difference cannot be exactly 0, so the sum of the differences must be minimized to obtain the optimal camera pose parameters and coordinates of the three-dimensional space points (see 14 Lectures on Visual SLAM: From Theory to Practice, 2nd ed., ch. 7, Gao Xiang et al., Publishing House of Electronics Industry, 2019-08).
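The minimization itself can be posed as a nonlinear least-squares problem over the camera pose and the three-dimensional point coordinates. The sketch below, assuming SciPy and a pinhole intrinsic matrix K, shows one standard formulation; it is an illustrative implementation, not the specific one used by this application.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pts3d, rvec, t, K):
    """First/second projection: 3D points -> pixel coordinates under (rvec, t)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = pts3d @ R.T + t                 # world frame -> camera frame
    p_img = (p_cam / p_cam[:, 2:3]) @ K.T   # perspective division + intrinsics
    return p_img[:, :2]

def reprojection_residuals(params, n_pts, observed_px, K):
    """Differences between observed pixels and re-projected pixels."""
    rvec, t = params[:3], params[3:6]
    pts3d = params[6:].reshape(n_pts, 3)
    return (project(pts3d, rvec, t, K) - observed_px).ravel()

# x0 packs an initial pose (from IMU/odometry) and triangulated 3D points:
#   x0 = np.hstack([rvec0, t0, pts3d0.ravel()])
# result = least_squares(reprojection_residuals, x0,
#                        args=(len(pts3d0), observed_px, K))
# result.x[:6] is the refined camera pose; result.x[6:] the refined 3D points.
```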
As an embodiment, in step S2, acquiring the orientation information of the obstacle based on the environment image with line laser comprises the following steps. The robot acquires the intrinsic parameters of the camera through calibration, and then obtains the light plane of the line laser relative to the camera from those intrinsic parameters. The robot tracks the line laser in the acquired image and, when the line laser is found not to lie on the ground, selects a plurality of points along the line laser at a specific spacing as calculation points, taking the end point of one end of the line laser as the starting point; if the line laser is divided into two crossing line segments, the longest segment, or the segment with the smallest angle to the horizontal on the image, is taken for calculation. The equation of the straight line passing through the center point of the camera and each calculation point is computed, and the three-dimensional coordinates of the intersection of the straight line and the light plane are then acquired based on the line equation and the light plane; the orientation information of the obstacle is obtained from the three-dimensional coordinates of the intersections corresponding to the plurality of calculation points. As shown in FIG. 2, in the figure, 1 is the center point of the camera, 2 the obstacle, 3 a calculation point, 4 the line laser, 5 the intersection of the straight line and the light plane, 6 the light plane, and 7 a point where a calculation point coincides with the intersection of the straight line and the light plane. When the surface of the obstacle is planar, the actual line laser projected onto the obstacle surface is a straight line, the line laser on the image is also a straight line, and the line laser lies in the light plane; the calculation point on the line laser coincides with the intersection of the straight line and the light plane, i.e., point 7, so the three-dimensional coordinates of the calculation point on the line laser are the three-dimensional coordinates of a point on the obstacle, and the position of the obstacle relative to the robot can be known from these coordinates. When the surface of the obstacle is curved, the actual line laser projected onto the obstacle surface is an arc while the line laser on the image is a straight line; the calculation point 3 selected on the line laser is then not a point on the obstacle, whereas the intersection 5 of the straight line and the light plane is, so the three-dimensional coordinates of the actual obstacle point must be obtained by computing point 5 from point 3, from which the direction of the obstacle relative to the robot is known. The intersections corresponding to the plurality of calculation points are points on the obstacle, and the position of the obstacle relative to the robot can be obtained from their three-dimensional coordinates. After the three-dimensional coordinates of the intersections corresponding to the plurality of calculation points are obtained, the intersections can be screened to make the calculation result more accurate.
Because the calculation points are selected at a specific spacing along the same laser line segment, intersections whose three-dimensional coordinates change sharply relative to the others can be deleted before the calculation. Acquiring the three-dimensional coordinates of the intersection of a straight line and the light plane based on the line equation and the light plane comprises the following steps: the line equation is converted into a parametric equation, the parametric equation is substituted into the equation of the light plane to obtain the parameter, and the parameter is substituted back into the parametric equation to obtain the three-dimensional coordinates of the intersection. Given the equation of the straight line L: (x-a)/m = (y-b)/n = (z-c)/p, and the equation of the light plane π: Ax + By + Cz + D = 0, the coordinates of the intersection of L and π are obtained as follows. Rewrite the line equation in parametric form: let (x-a)/m = (y-b)/n = (z-c)/p = t; then x = mt + a, y = nt + b, z = pt + c. Substituting into the equation of the plane π gives A(mt + a) + B(nt + b) + C(pt + c) + D = 0, which solves to t = -(Aa + Bb + Cc + D)/(Am + Bn + Cp); substituting t back into the parametric equation yields the coordinates (x, y, z) of the intersection.

In step S2, acquiring the distance information of the obstacle based on the environment image with line laser comprises the following steps: acquiring in advance the ratio between the position of the line laser on the image and the distance between the robot and the obstacle; acquiring the line laser in the environment image with line laser; and obtaining the distance between the robot and the obstacle, i.e., the distance information of the obstacle, from the position of the line laser on the image and the ratio. The line laser projects a laser line segment obliquely downward onto the obstacle; in the acquired image, the farther the obstacle is from the robot, the lower the position of the line laser on the image. The ratio can be obtained in advance by calibration, and it changes with the installation angle of the line laser. The length of the line laser in the image can also be used for the calculation, although this works poorly when the obstacle is too small; nevertheless, the length or height of the obstacle can be estimated from the length of the line laser in the image, which helps the robot avoid the obstacle.
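The parametric substitution above translates directly into code. The following NumPy sketch intersects the ray through the camera center and a calculation point with the calibrated light plane, and also shows the pre-calibrated ratio lookup for the distance information; the linear distance model and all names are assumptions for illustration.

```python
import numpy as np

def line_plane_intersection(p0, d, plane):
    """Intersect the line x = p0 + t*d with the plane A*x + B*y + C*z + D = 0.
    p0 = (a, b, c) is a point on the line (e.g. the camera center) and
    d = (m, n, p) its direction; returns None if the line is parallel."""
    A, B, C, D = plane
    normal = np.array([A, B, C], dtype=float)
    denom = normal @ np.asarray(d, dtype=float)    # A*m + B*n + C*p
    if abs(denom) < 1e-9:
        return None
    t = -(normal @ np.asarray(p0, dtype=float) + D) / denom
    return np.asarray(p0, dtype=float) + t * np.asarray(d, dtype=float)

def obstacle_distance(laser_row, ratio, bias=0.0):
    """Assumed linear calibration: image row of the laser line -> distance."""
    return ratio * laser_row + bias

# For a pixel u on the laser line, d can be taken as inv(K) @ [u_x, u_y, 1]
# with p0 at the camera center (0, 0, 0) in the camera frame.
```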
An image acquisition device comprises a camera and a line laser module. The camera is arranged in front of the robot, either horizontally or tilted at a first preset angle; the line laser module is located above the camera, and its laser emission direction points downward at a second preset angle to the horizontal. The camera is a monochrome camera without a color filter or an infrared filter. As shown in FIG. 3, the line laser module comprises a line laser and N signal control switches; the signal control switches are connected in parallel with one another and in series with the line laser, and each switch turns on or off according to the received signal, thereby switching the power of the line laser; N is a natural number greater than or equal to 2. In the figure, one end of the line laser is connected to the power supply terminal VCC, and the other end is connected to the signal control switches I1 and I2, which may be triodes or MOS transistors. One end of the signal transmission terminal of each of I1 and I2 is connected to the line laser and the other end to ground; the signal receiving terminals of I1 and I2 receive an external signal to turn the signal transmission terminals on or off, and the on/off states of I1 and I2 change the power of the line laser, so that the brightness of the line laser emitted each time differs.
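To make the parallel-switch arrangement concrete, here is a toy circuit model assuming each closed switch adds one resistive path between the laser and ground; every component value is invented for illustration, and the module's real electrical parameters are not disclosed here.

```python
# Toy model of the N parallel signal control switches in series with the laser.
V_CC = 5.0               # supply voltage (assumed)
R_LASER = 10.0           # effective laser resistance (assumed)
R_PATHS = [20.0, 40.0]   # one resistor per switch path, N = 2 (assumed)

def laser_power(switch_states):
    """Electrical power dissipated in the laser for one on/off combination."""
    g = sum(1.0 / r for r, on in zip(R_PATHS, switch_states) if on)
    if g == 0.0:
        return 0.0                     # all switches open: circuit broken
    i = V_CC / (R_LASER + 1.0 / g)     # laser in series with the parallel paths
    return i * i * R_LASER

# Closing more switches lowers the ground-path resistance and raises the power:
# laser_power((True, True)) > laser_power((True, False)) > laser_power((False, False))
```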
As shown in FIG. 4, the robot is provided with the above image acquisition device. In the figure, 101 is the ground, 102 the robot main body, 103 the camera, 104 the line laser, 105 the horizontal line, 106 the central axis of the camera, 107 the upper line of sight of the camera, 108 the lower line of sight of the camera, and 109 the projection line of the line laser. The robot executes the above image acquisition and processing method through the image acquisition device.
Compared with the prior art, the technical solution of the application uses a single camera to perform the positioning and obstacle-avoidance functions of the robot simultaneously, which reduces the production cost of the robot and the complexity of the robot system; by controlling the line laser to alternate between bright and dark, the detection performance of the camera is improved and the interference of ambient light with the positioning and obstacle-avoidance detection functions is reduced.
Obviously, the above-mentioned embodiments are only some of the embodiments of the present invention, not all of them, and the technical solutions of the embodiments may be combined with each other. Furthermore, if terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" appear in the embodiments, the orientations or positional relationships they indicate are based on those shown in the drawings, and are used only for convenience of describing the present invention and simplifying the description; they do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. If the terms "first", "second", "third", etc. appear in the embodiments, they are used for convenience of distinguishing between related features and are not to be construed as indicating or implying relative importance, order, or number of features.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be performed by hardware associated with program instructions. These programs may be stored in a computer-readable storage medium (such as a ROM, a RAM, a magnetic disk, an optical disk, or various other media that can store program code) and, when executed, perform the steps of the above method embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.
Claims (11)
1. An image acquisition and processing method, characterized in that it comprises the following steps:
S1: causing the line laser to emit line laser light at different brightness levels, and acquiring environment images of the line laser at the different brightness levels through the camera;
S2: performing visual positioning and obstacle information identification, respectively, based on the environment images with line laser;
wherein the obstacle information includes distance information and orientation information.
2. The image acquisition and processing method according to claim 1, wherein in step S1, causing the line laser to emit line laser light at different brightness levels and acquiring environment images of the line laser at the different brightness levels through the camera comprises the following step:
the robot switches the power of the line laser during the blanking period of each frame of image acquired by the camera, so that the brightness of the line laser differs from one environment image acquired by the camera to the next.
3. The image acquisition and processing method according to claim 1, wherein in step S2, implementing visual positioning based on the environment image with line laser comprises the following steps:
acquiring two frames of images, extracting feature points from the second frame image, and acquiring the pose of the robot by integrating the IMU data and odometer readings between the two images;
acquiring the epipolar lines of the feature points of the second frame image on the first frame image, based on those feature points and the robot pose;
searching along the epipolar lines for points matching the feature values of the feature points of the second frame image, to obtain the corresponding feature points in the first frame image;
and computing the visual positioning pose between the two frames of images by minimizing the reprojection error, based on the matched feature points between the two frames and the robot pose.
4. The image acquisition and processing method according to claim 3, wherein the step of selecting feature points from the second frame image comprises the steps of:
setting up a container for storing feature points; selecting corner points at positions in the second frame image where the change in pixel gray value exceeds a set threshold; taking the selected corner points as the feature points identified in the second frame image; and storing the identified feature points in the container.
5. The image acquisition and processing method according to claim 3, wherein the minimization of the reprojection error comprises the following steps:
performing a projection calculation with the matched feature points between the two frames of images and the robot pose to obtain the pixel values of the projected feature points;
acquiring the differences between the pixel values of the feature points matched between the two frames of images and the pixel values of the projected feature points;
and minimizing the sum of these differences to obtain the camera pose parameters and the coordinates of the three-dimensional space points of the feature points, thereby determining the visual positioning pose and realizing the visual positioning.
6. The image acquisition and processing method according to claim 1, wherein the step of acquiring the orientation information of the obstacle based on the environment image with the line laser in step S2 comprises the steps of:
the robot acquires the intrinsic parameters of the camera through calibration, and then obtains the light plane of the line laser relative to the camera from those intrinsic parameters;
the robot tracks the line laser in the acquired image and, when the line laser is found not to lie on the ground, selects a plurality of points along the line laser at a specific spacing as calculation points, taking the end point of one end of the line laser as the starting point;
calculating the equation of the straight line passing through the center point of the camera and each calculation point, and then acquiring the three-dimensional coordinates of the intersection of the straight line and the light plane based on the line equation and the light plane;
and obtaining the orientation information of the obstacle from the three-dimensional coordinates of the intersections corresponding to the plurality of calculation points.
7. The image acquisition and processing method according to claim 6, wherein acquiring three-dimensional coordinates of an intersection of a straight line and a light plane based on a straight line equation and the light plane comprises the steps of:
converting the line equation into a parametric equation, substituting the parametric equation into the equation of the light plane to obtain the parameter of the parametric equation, and substituting the parameter back into the parametric equation to obtain the three-dimensional coordinates of the intersection.
8. The image acquisition and processing method according to claim 1, wherein in step S2, acquiring the distance information of the obstacle based on the environment image with line laser comprises the following steps:
acquiring in advance the ratio between the position of the line laser on the image and the distance between the robot and the obstacle;
acquiring the line laser in the environment image with line laser;
and obtaining the distance between the robot and the obstacle, i.e., the distance information of the obstacle, from the position of the line laser on the image and the ratio.
9. An image acquisition device, characterized in that the device comprises a camera and a line laser module, the camera being arranged in front of the robot either horizontally or tilted at a first preset angle, and the line laser module being located above the camera with its axis tilted downward at a second preset angle.
10. The image acquisition device according to claim 9, wherein the line laser module comprises a line laser and N signal control switches, the signal control switches being connected in parallel with one another and in series with the line laser, each signal control switch being turned on or off according to the received signal so as to switch the power of the line laser; wherein N is a natural number greater than or equal to 2.
11. A robot, characterized in that the robot is provided with the image acquisition device according to any one of claims 9 to 10, and the robot performs the image acquisition and processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210028663.9A CN114445494A (en) | 2022-01-11 | 2022-01-11 | Image acquisition and processing method, image acquisition device and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210028663.9A CN114445494A (en) | 2022-01-11 | 2022-01-11 | Image acquisition and processing method, image acquisition device and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114445494A true CN114445494A (en) | 2022-05-06 |
Family
ID=81366901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210028663.9A Pending CN114445494A (en) | 2022-01-11 | 2022-01-11 | Image acquisition and processing method, image acquisition device and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114445494A (en) |
- 2022-01-11: Application CN202210028663.9A filed in China; published as CN114445494A (status: Pending)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024131271A1 (en) * | 2022-12-23 | 2024-06-27 | 速感科技(北京)有限公司 | Autonomous mobile device, obstacle detection method thereof, and computer readable medium |
CN116852374A (en) * | 2023-08-08 | 2023-10-10 | 荆州双金再生资源有限公司 | Intelligent robot control system based on machine vision |
CN116852374B (en) * | 2023-08-08 | 2024-04-26 | 深圳创劲鑫科技有限公司 | Intelligent robot control system based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |