WO2024055788A1 - Laser positioning method and robot based on image information (基于图像信息的激光定位方法及机器人) - Google Patents

Laser positioning method and robot based on image information

Info

Publication number
WO2024055788A1
WO2024055788A1, PCT/CN2023/112380, CN2023112380W
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
pixel
column
line laser
current
Prior art date
Application number
PCT/CN2023/112380
Other languages
English (en)
French (fr)
Inventor
王悦林
赖钦伟
Original Assignee
珠海一微半导体股份有限公司
Application filed by 珠海一微半导体股份有限公司
Publication of WO2024055788A1 publication Critical patent/WO2024055788A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Definitions

  • the invention relates to the technical field of laser data processing, and in particular to a laser positioning method and robot based on image information.
  • Structured light modules generally refer to any laser module that includes a line laser emitter and a camera module.
  • the line laser emitter is used to emit line laser outwards.
  • the line laser emitted by the line laser emitter can be located in front of the robot; the camera module can collect environmental images, and can also receive the reflected light returned from the line laser hitting the object.
  • the line laser emitted by the line laser emitter is located within the field of view of the camera module; the line laser helps detect information such as the contour, height and/or width of objects in the robot's direction of travel, collectively referred to as the position information of the laser. The camera used to collect the line laser is generally provided with an infrared bandpass filter or an infrared high-pass filter. The reason is that after the line laser emitter projects infrared light onto the object surface, the camera sensor responds not only to the infrared band reflected by the line laser on the surface of the object but also to visible light and other bands; without filtering, the camera module cannot accurately detect the position of the line laser on the imaging plane.
  • the camera module uses the aforementioned infrared bandpass filter or infrared high-pass filter to filter the light reflected from the obstacle surface by the laser emitted by the line laser emitter: it transmits the infrared band carried in the laser while absorbing or reflecting ambient light in other wavelength bands, thereby separating the infrared light carried by the reflected light from interfering ambient light. However, while the non-infrared bands are filtered out, a large amount of environmental information is also lost.
  • moreover, the infrared bandpass filter or infrared high-pass filter used in the structured light module is relatively expensive, and when the ambient light intensity is high it easily introduces interference, resulting in more interference points.
  • therefore, the present invention discloses a laser positioning method and robot based on image information for the case where the camera is not equipped with an infrared filter, so that the camera can receive the infrared light emitted by the line laser emitter and reflected back from the obstacle surface, enabling the robot to track the reflected light of the laser on the obstacle surface.
  • the execution subject of the laser positioning method is a robot equipped with a structured light module.
  • the structured light module includes a line laser emitter and a camera without an infrared filter, so that both the imaging information of infrared light and the imaging information of visible light are retained in the images collected by the camera;
  • the laser positioning method includes: the robot controls the camera to collect images of the light reflected back by the line laser, emitted by the line laser emitter, on the surface of the object to be measured, and detects the bright/dark type of each image collected by the camera; when the robot detects that the current frame image collected by the camera is a bright frame image, the robot searches for the line laser position from the current frame image by executing an inter-frame tracking algorithm, and then sets the coordinates of the line laser position as the positioning coordinates, in the current frame image, of the line laser emitted by the line laser emitter; when the robot detects that the current frame image collected by the camera is a dark frame image, the robot extracts the line laser position from the current frame image by executing a brightness center-of-gravity algorithm.
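The bright/dark dispatch described above can be sketched in Python. All names here are illustrative rather than from the patent, and the two callables are stand-ins for the inter-frame tracking and brightness center-of-gravity steps detailed further down.

```python
def locate_line_laser(frame, is_bright, track_bright, extract_dark):
    """Return the line-laser positioning coordinates for one frame.

    frame        -- 2D list of pixel brightness values
    is_bright    -- True if the frame was classified as a bright frame
    track_bright -- callable implementing the inter-frame tracking search
    extract_dark -- callable implementing the brightness center-of-gravity step
    """
    if is_bright:
        # Bright frame: search for the laser by tracking against the
        # previous bright (reference) frame.
        return track_bright(frame)
    # Dark frame: extract the laser position directly via the
    # brightness center-of-gravity algorithm.
    return extract_dark(frame)
```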
  • the method for the robot to search for the line laser position from the current frame image by executing an inter-frame tracking algorithm includes the following steps.
  • Step 1. the robot traverses the current frame image column by column and obtains the initial pixel position in the corresponding column of the current frame image; at the same time, according to the pixels in the corresponding column that meet the preset brightness distribution characteristics, it excludes pixels at which no line laser position exists in the current frame image, where the line laser position is used to indicate the reflection position of the line laser on the surface of the object to be measured.
  • Step 2. the robot sets the initial pixel position of the current column as the search center, then searches upward along the current column from the search center for pixels within the search radius, and searches downward along the current column from the search center for pixels within the search radius; then, based on the difference between the brightness values of the pixels searched upward and the pixels searched downward in the search states corresponding to two adjacently determined search centers, and on the inter-frame matching relationship formed by the same type of values in the same column of pixels of the current frame image relative to the reference frame image, the robot filters out the convex hull center pixel in the current column so as to update the convex hull center pixel last determined in that column of the current frame image; here, the reference frame image is configured as the bright frame image, collected before the current frame image, in which the robot's latest line laser position is located; whenever the search center in the current column is updated, the convex hull center pixel set in the current column is updated as well.
  • Step 3. whenever a convex hull center pixel is screened out for a search center in step 2, the adjacent pixel searched upward or downward from that search center along the current column is updated to be the new search center, and step 2 is performed again to obtain a new convex hull center pixel, which replaces the previously determined one; each search center lies within the coverage of one search radius relative to the initial pixel position, where the search radius is set to the first preset pixel distance; the finally filtered convex hull center pixel is the convex hull center pixel in the current frame image.
  • the last updated convex hull center pixel is, among the convex hull center pixels in each column of the current frame image, the one that deviates least from the origin of the coordinate system of the current frame image, i.e. the convex hull center pixel nearest that origin; the robot sets the set of pixels conforming to the convex hull characteristics on the current column of the current frame image as the set composed of the convex hull center and the pixels whose brightness values decrease on both sides, upward and downward along the current column, starting from the convex hull center; this set of pixels forms a convex hull.
  • the convex hull center is the pixel with the largest brightness value in this set of pixels, and the convex hull center pixel is set to belong to the convex hull center; starting from the convex hull center in the upward direction along the same column, the brightness values of the pixels decrease upward along the current column, and a first gradient value is generated between the brightness values of two adjacent pixels; starting from the convex hull center in the downward direction along the same column, the brightness values of the pixels decrease downward along the current column, and a second gradient value is generated between the brightness values of two adjacent pixels; the convex hull center thereby belongs to the search centers.
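As a rough illustration of the convex hull characteristic, the following sketch checks whether a candidate pixel in a column behaves like a convex hull center: brightness must fall monotonically, by at least a minimal gradient per step, within the search radius on both sides. The function name, the single `min_drop` gradient (the patent distinguishes a first and a second gradient value), and the list-based column representation are simplifying assumptions.

```python
def convex_hull_center(column, center, radius, min_drop=1):
    """Return True if `column[center]` looks like a convex hull center:
    brightness decreases by at least `min_drop` per pixel within
    `radius` pixels both above and below the center."""
    prev = column[center]
    # Walk upward (towards smaller row indices).
    for i in range(center - 1, max(center - radius, 0) - 1, -1):
        if column[i] > prev - min_drop:
            return False  # brightness failed to decrease
        prev = column[i]
    prev = column[center]
    # Walk downward (towards larger row indices).
    for i in range(center + 1, min(center + radius, len(column) - 1) + 1):
        if column[i] > prev - min_drop:
            return False
        prev = column[i]
    return True
```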
  • based on the difference between the brightness values of the pixels searched upward and the pixels searched downward in the search states corresponding to two adjacently determined search centers, and on the inter-frame matching relationship formed by the same type of values in the same column of pixels of the current frame image relative to the reference frame image, the method of filtering out the convex hull center pixels includes: in the current column of the current frame image, the brightness value of the currently determined search center is compared with the brightness value of the convex hull center pixel found in the same column in the last search; the convex hull center pixel found in the same column in the last search is the convex hull center pixel filtered out for the last determined search center in the same column of the current frame image; the last determined search center is the pixel adjacent, downward or upward, to the currently determined search center in the current column of the current frame image; the column order of the pixels in said same column of the current frame image equals the column order of the current column of the current frame image. If the brightness value of the currently determined search center is greater than the brightness value of the convex hull center pixel found in the same column in the last search, then in the current column of the current frame image the robot searches upward for pixels from the search center, counts the pixels whose brightness values decrease according to the first gradient value, marks this number as the number of upward gradient descents, and stops the upward search for pixels; it then searches downward for pixels from the search center and counts the pixels whose brightness values decrease according to the second gradient value.
  • the number of decreasing pixels is marked as the number of downward gradient descents, and the downward search for pixels is stopped to wait for the next update of the search center. When the robot determines that the number of upward gradient descents counted in the current column of the current frame image is greater than or equal to the number of upward gradient descents counted when the convex hull center pixel was last found in the same column, and/or the number of downward gradient descents counted in the current column of the current frame image is greater than or equal to the number of downward gradient descents counted when the convex hull center pixel was last found in the same column; and the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched upward and the brightness value of the pixel at the currently determined search center is greater than the absolute value of the difference of the same type formed by upward search in the same column of pixels of the reference frame image; and the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched downward along the current column and the brightness value of the pixel at the currently determined search center is greater than the absolute value of the difference of the same type formed by downward search in the same column of pixels of the reference frame image, the currently determined search center is filtered out as the convex hull center pixel of the current column. Here, the absolute value of the difference of the same type formed by upward search in the same column of pixels of the reference frame image is, in the column of the reference frame image with the same column order as the current column, the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched upward from the finally determined search center and the brightness value of the pixel at that finally determined search center, where the distance between the pixel with the smallest brightness value searched upward and the finally determined search center in the same column is less than or equal to the search radius; the absolute value of the difference of the same type formed by downward search in the same column of pixels of the reference frame image is, in the column of the reference frame image with the same column order as the current column, the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched downward from the finally determined search center and the brightness value of the pixel at that finally determined search center, where the distance between the pixel with the smallest brightness value searched downward and the finally determined search center in the same column is less than or equal to the search radius.
  • step 2 also includes: if the brightness value of the pixel at the search center is greater than the brightness value of the convex hull center pixel found in the same column in the last search, then in the current column of the current frame image pixels are searched upward from the search center and downward from the search center. If, during the upward search, the robot detects that the brightness value of a pixel does not decrease according to the first gradient value, a preset upward gradient anomaly count is incremented once; the robot then determines whether it has searched all the pixels covered within the search radius along the current column of the current frame image; if so, the robot stops searching pixels upward along the current column and determines that the upward counting stop condition is met; otherwise, when the upward gradient anomaly count is greater than the first preset error number, the robot stops searching pixels upward along the current column and determines that the upward counting stop condition is met. Similarly, if during the downward search from the search center it is detected that the brightness value of a pixel does not decrease according to the second gradient value, a preset downward gradient anomaly count is incremented once; the robot then determines whether it has searched all the pixels covered within the search radius along the current column of the current frame image; if so, the robot stops searching pixels downward along the current column and determines that the downward counting stop condition is met.
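The gradient anomaly counting above can be sketched as follows: walking in one direction from the search center, brightness is expected to drop by at least the gradient value per pixel; anomalies are counted, and the walk stops when the search radius is exhausted or the error budget is exceeded. All identifiers are illustrative.

```python
def search_with_anomaly_limit(column, center, step, radius, gradient, max_errors):
    """Walk from `center` in direction `step` (+1 down, -1 up) for at most
    `radius` pixels; count gradient descents and gradient anomalies, and
    stop early once anomalies exceed `max_errors`."""
    descents = errors = steps = 0
    prev = column[center]
    i = center + step
    while 0 <= i < len(column) and steps < radius:
        if prev - column[i] >= gradient:
            descents += 1          # brightness decreased as expected
        else:
            errors += 1            # gradient anomaly
            if errors > max_errors:
                break              # error budget exceeded: stop condition met
        prev = column[i]
        i += step
        steps += 1
    return descents, errors
```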
  • during the upward search from the search center, the robot counts the pixels along the current column of the current frame image whose brightness value is 255 and whose positions are adjacent, and marks this number as the number of upward overexposures. When the robot detects that the number of upward overexposures is greater than the third preset error number, and/or it has counted upward all the pixels covered within the search radius along the current column of the current frame image, the robot stops searching pixels upward along the current column and determines that the upward counting stop condition is met. Likewise, during the downward search from the search center, the robot counts the pixels along the current column of the current frame image whose brightness value is 255 and whose positions are adjacent, and marks this number as the number of downward overexposures. When the robot detects that the number of downward overexposures is greater than the fourth preset error number, and/or it has counted downward all the pixels covered within the search radius along the current column of the current frame image, the robot stops searching pixels downward along the current column and determines that the downward counting stop condition is met.
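A minimal sketch of the overexposure counting: starting next to the search center, count the run of adjacent saturated (brightness 255) pixels in one direction, stopping at the first non-saturated pixel or at the search radius. Using `step` of -1 for upward (smaller indices) and +1 for downward is an assumed convention.

```python
def count_overexposed_run(column, start, step, radius, saturated=255):
    """Count the run of adjacent saturated pixels walking from `start`
    in direction `step` (+1 down, -1 up), within `radius` pixels."""
    count = steps = 0
    i = start + step
    while 0 <= i < len(column) and steps < radius:
        if column[i] != saturated:
            break          # run of adjacent saturated pixels is broken
        count += 1
        i += step
        steps += 1
    return count
```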
  • according to the magnitude relationship between the brightness values in the effective coverage area corresponding to the positioning coordinates of the line laser in the previous dark frame image and the brightness values of the convex hull center pixels in the current frame image, the method of eliminating interference points from the filtered convex hull center pixels includes: the robot traverses the pixels of all columns of the current frame image, obtains the convex hull center pixel of each column, and saves the positioning coordinates of the line laser emitted by the line laser emitter in the previous dark frame image; for each convex hull center pixel in the current frame image, a circle is defined whose center is the positioning coordinates of the line laser in the previous dark frame image and whose radius is the detection pixel distance; if the robot determines that there is at least one pixel within this circle whose brightness value exceeds, by more than a preset ambient light brightness threshold, the brightness value of the convex hull center pixel whose coordinates are the same as the circle center in the current frame image, the robot determines that this convex hull center pixel is an interference point; the robot cannot find the line laser position at this interference point and eliminates it from the current frame image.
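The interference-point test might be sketched as below. For simplicity the circle center is taken at the convex hull center pixel itself rather than at the previous dark frame's positioning coordinates, which is what the patent uses; row-major nested lists stand in for the image, and all names are illustrative.

```python
def is_interference(frame, center_rc, det_radius, ambient_thresh):
    """Return True if some pixel inside the circle of radius `det_radius`
    around `center_rc` (row, col) is brighter than the center pixel by
    more than `ambient_thresh`."""
    r0, c0 = center_rc
    base = frame[r0][c0]
    rows, cols = len(frame), len(frame[0])
    for r in range(max(0, r0 - det_radius), min(rows, r0 + det_radius + 1)):
        for c in range(max(0, c0 - det_radius), min(cols, c0 + det_radius + 1)):
            # Only consider pixels inside the circle.
            if (r - r0) ** 2 + (c - c0) ** 2 <= det_radius ** 2:
                if frame[r][c] - base > ambient_thresh:
                    return True  # convex hull center is an interference point
    return False
```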
  • the method of excluding pixels at which no line laser position exists in the current frame image, based on the pixels in the corresponding column that conform to the preset brightness distribution characteristics, includes: if the brightness value of the initial pixel position in the current column of the current frame image exceeds, by the first preset brightness threshold, the brightness value of the pixel located at the line laser position of the same column found in the previous round, or exceeds it by the second preset brightness threshold, then, starting from a point one reference pixel distance above the initial pixel position in the current column, the robot searches downward for pixels along the current column of the current frame image; whenever the brightness value of a currently searched pixel exceeds, by more than the corresponding preset brightness threshold, the brightness value of the pixel located at the line laser position of the same column found in the previous round, an error position counter is incremented and the currently searched pixel is regarded as a pixel conforming to the preset brightness distribution characteristics. When the robot detects that the error position count is greater than the reference pixel count threshold, it determines that no line laser position exists in the current column of the current frame image; the pixels in the current column of the current frame image are then set as pixels at which no line laser position exists and are excluded from the pixel search range of step 2, and it is simultaneously determined that the light intensity of the environment where the robot is located is greater than the first preset light intensity threshold. Here, the reference pixel distance is expressed as a number of pixels, so that the reference pixel count threshold is equal to the reference pixel distance; the line laser position of the same column found in the previous round is the position of the finally determined convex hull center pixel among the pixels of the same column in the reference frame image.
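A simplified version of the error-position counting used to discard a column under strong ambient light: pixels brighter than the previous round's laser-position brightness by more than a margin are counted, and the column is rejected when the count exceeds the reference pixel count threshold. Parameter names are assumptions.

```python
def column_has_no_laser(column, start, prev_laser_brightness, margin, count_thresh):
    """Return True if the column should be excluded from the search range:
    more than `count_thresh` pixels (from index `start` down) exceed the
    previous laser-position brightness by more than `margin`."""
    errors = 0
    for v in column[start:]:
        if v - prev_laser_brightness > margin:
            errors += 1  # error position: too bright to be background
    return errors > count_thresh
```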
  • another method of excluding pixels at which no line laser position exists in the current frame image, based on the pixels in the corresponding column that conform to the preset brightness distribution characteristics, includes: taking the initial pixel position in the current column of the current frame image as the center of a ring, the pixels covered by the ring area located below the ring center, whose inner diameter is the first positioning radius and whose outer diameter is the second positioning radius, are marked as the first pixels to be measured, and the average brightness value of the first pixels to be measured is calculated; when the first pixels to be measured are pixels that conform to the preset brightness distribution characteristics, it is determined that no line laser position exists in the current column of the current frame image; the pixels in the current column of the current frame image are then set as pixels at which no line laser position exists and are excluded from the pixel search range of step 2, and it is simultaneously determined that the light intensity of the environment where the robot is located is greater than the first preset light intensity threshold; here, the first positioning radius is smaller than the second positioning radius.
  • analogously, taking the line laser position of the same column found in the previous round, which belongs to the same column of the reference frame image, as the ring center, the pixels covered by the corresponding ring area are marked as the second pixels to be measured; when the second pixels to be measured are pixels that conform to the preset brightness distribution characteristics, it is determined that no line laser position exists in the current column of the current frame image; the pixels in the current column of the current frame image are then set as pixels at which no line laser position exists and are excluded from the pixel search range of step 2, and it is simultaneously determined that the light intensity of the environment where the robot is located is greater than the first preset light intensity threshold; here, the first positioning radius is smaller than the second positioning radius, and the line laser position of the same column found in the previous round is the position of the finally determined convex hull center pixel among the pixels of the same column in the reference frame image.
  • the initial pixel position is the position of the original pixel formed in the image collected by the camera after the line laser emitted by the line laser emitter is reflected back into the camera's field of view from the robot's traveling plane when there is no obstacle in front of the robot; each original pixel corresponds to a reflection position on the traveling plane of the robot and represents the starting point for searching for the line laser position in each column of the same frame image;
  • the reference frame image is configured as the bright frame image, collected before the current frame image, in which the latest line laser position found by the robot is located; the latest line laser position found by the robot is derived from the convex hull center pixel set in the corresponding column of the reference frame image.
  • in step 1, if the initial pixel position cannot be obtained in the current column of the current frame image, the line laser position of the same column found in the previous round is used as the initial pixel position, and the second preset pixel distance is used as the search radius; step 2 is then repeated to search for the convex hull center pixel in the corresponding column. Here, the line laser position of the same column found in the previous round is the position of the finally determined convex hull center pixel among the pixels of the same column in the reference frame image, or the initial pixel position among the pixels of the same column in the first bright frame image. If, while repeatedly executing step 2, the robot cannot find the convex hull center pixel in the same column, it is determined that the robot cannot find the line laser position in that column.
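The fallback described in this paragraph amounts to choosing a search start and radius per column; a sketch, with illustrative names:

```python
def column_search_start(initial_pos, prev_laser_pos, first_radius, second_radius):
    """Pick the search center and radius for one column: use the initial
    pixel position and the first preset pixel distance when available,
    otherwise fall back to the previous round's laser position and the
    (wider) second preset pixel distance."""
    if initial_pos is not None:
        return initial_pos, first_radius
    return prev_laser_pos, second_radius
```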
  • the method for the robot to extract the line laser position from the current frame image by executing the brightness center-of-gravity algorithm includes: the robot traverses the current frame image column by column; for each column, the robot searches each pixel in turn and, based on the relationship between the brightness value of the currently searched pixel in the current column of the current frame image and the brightness value of the pixel at the corresponding position of the previous bright frame image, filters out legal pixels from the current column of the current frame image; then at least two legal pixels with adjacent positions in the current column are connected to form a positioning line segment; after all adjacent legal pixels have been connected, the positioning line segment with the largest length is selected; if the length of the selected positioning line segment is greater than the preset continuous length threshold, the center of that segment is set as the line laser position.
  • the method of filtering out legal pixels from the current column of the current frame image includes: subtracting, from the brightness value of the currently searched pixel in the current frame image, the brightness value of the pixel at the same row and column position in the previous bright frame image, to obtain the relative difference of the dark frame image; when it is detected that the opposite number of the relative difference of the dark frame image is greater than the preset brightness difference threshold, and the brightness value of the pixel at the same row and column position in the previous bright frame image is greater than the reference bright frame brightness threshold, the currently searched pixel in the current frame image is set as a legal pixel.
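Putting the dark-frame steps together, here is a sketch of the brightness center-of-gravity extraction for one column: legal pixels are those whose previous-bright-frame brightness exceeds the dark-frame brightness by more than the difference threshold and also exceeds the bright-frame brightness threshold; adjacent legal pixels form segments, and the center of the longest sufficiently long segment is returned. Names and the list representation are assumptions.

```python
def line_laser_position(dark_col, bright_col, diff_thresh, bright_thresh, min_len):
    """Extract the line laser position (row index) from one column of a
    dark frame, given the same column of the previous bright frame."""
    # 1. Legal pixels: bright frame much brighter than dark frame at the
    #    same position, and bright enough in absolute terms.
    legal = [i for i, (d, b) in enumerate(zip(dark_col, bright_col))
             if (b - d) > diff_thresh and b > bright_thresh]
    # 2. Connect adjacent legal pixels into positioning line segments
    #    (a segment needs at least two adjacent legal pixels).
    segments, seg = [], []
    for i in legal:
        if seg and i == seg[-1] + 1:
            seg.append(i)
        else:
            if len(seg) >= 2:
                segments.append(seg)
            seg = [i]
    if len(seg) >= 2:
        segments.append(seg)
    if not segments:
        return None
    # 3. The center of the longest segment is the laser position, provided
    #    the segment exceeds the continuous length threshold.
    best = max(segments, key=len)
    return best[len(best) // 2] if len(best) > min_len else None
```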
  • the image sequence formed by the camera collecting the light reflected from the line laser, emitted by the line laser emitter, on the surface of the object to be measured is configured to alternately contain bright frame images and dark frame images, so that: when the current frame image collected by the camera is a bright frame image, the next frame image collected by the camera is a dark frame image; during the time interval between the camera collecting the current bright frame image and the camera collecting the next bright frame image, the camera collects the current dark frame image; after the camera collects the next bright frame image, the camera collects the next dark frame image; during the execution of the laser positioning method, the first frame image of the image sequence is a bright frame image.
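The alternation described above is simple parity on the frame index, with the first frame bright; a one-line sketch:

```python
def frame_types(n):
    """Return the bright/dark type of the first n frames: the sequence
    alternates, starting with a bright frame."""
    return ["bright" if i % 2 == 0 else "dark" for i in range(n)]
```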
  • the laser positioning method also includes: when the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, the robot reduces the gain of the camera so that the image, collected by the camera, of the light reflected by the line laser on the surface of the object to be measured does not appear overexposed; when the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, the robot also reduces the exposure time of the camera for the same purpose; when the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, the robot increases the gain of the camera so that the image of the light reflected by the line laser on the surface of the object to be measured does not appear underexposed; when the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, the robot also increases the exposure time of the camera so that the image of the light reflected by the line laser on the surface of the object to be measured does not appear underexposed.
  • when the robot detects that the current exposure value of the camera is greater than the first preset exposure threshold, the power level of the line laser emitter for emitting the line laser is raised, so that the intensity of the line laser emitted by the line laser emitter is configured to be equal to the product of the smoothing coefficient and the current exposure value; when the robot detects that the current exposure value of the camera is less than the second preset exposure threshold, the power level of the line laser emitter for emitting the line laser is lowered, so that the intensity of the line laser emitted by the line laser emitter is configured to be equal to the product of the smoothing coefficient and the current exposure value; here, the first preset exposure threshold is greater than the second preset exposure threshold, and the current exposure value of the camera is used to reflect the exposure amount of the camera in the current light environment; the smoothing coefficient is used to smooth the step size of the exposure value adjustment, so that the robot can search for the line laser position from the current frame image.
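The laser-power rule can be condensed to one function: outside the band between the two exposure thresholds, the intensity is re-set to the smoothing coefficient times the current exposure value (raised above the high threshold, lowered below the low one); inside the band it is left unchanged. This is a sketch under that reading of the claim; names are illustrative.

```python
def adjust_laser_intensity(exposure, hi_thresh, lo_thresh, smoothing, current):
    """Return the new line-laser intensity given the camera's current
    exposure value; hi_thresh must be greater than lo_thresh."""
    if exposure > hi_thresh or exposure < lo_thresh:
        # Re-set intensity as smoothing coefficient * current exposure value.
        return smoothing * exposure
    return current  # exposure within the band: keep the current intensity
```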
  • a robot whose body is equipped with a structured light module.
  • the structured light module includes a line laser emitter and a camera without an infrared filter, so that both the imaging information of infrared light and the imaging information of visible light are retained in the images collected by the camera;
  • a controller is provided inside the robot, and the controller is electrically connected to the structured light module.
  • the controller is configured to execute the laser positioning method to obtain the positioning coordinates, in the current frame image, of the line laser emitted by the line laser emitter; the line laser emitted by the line laser emitter is located within the field of view of the camera.
  • the horizontal viewing angle of the camera is configured so that the camera receives the light reflected back by the line laser within the width range of the robot's body in front of the robot; and/or the installation height of the structured light module on the robot's body is configured to be positively correlated with the height of the obstacle to be detected, so that the obstacle to be detected occupies the effective field of view of the camera and the camera receives the light reflected back by the line laser on the obstacle surface in front of the robot's body; and/or the heading angle formed by the deflection of the camera relative to the central axis of the robot is kept within the preset error angle range, so that the optical axis of the camera is parallel to the traveling direction of the robot and the camera, placed in front of the robot, receives the light reflected back by the line laser within the width range of the body; and/or the roll angle generated by the rotation of the camera about its optical axis is kept within the preset error angle range, so that the camera in front of the robot receives the light reflected back by the line laser within the width range of the body, wherein the camera is rotatably assembled on the body of the robot.
• the larger the installation distance between the camera and the line laser module, the larger the coordinate offset, relative to the center of the camera, of the pixels used to represent the reflection position of the line laser on the obstacle surface in the image collected by the camera.
• the emission angle of the line laser emitter and the receiving angle of the camera are set as follows: the line laser emitter emits line laser toward a preset detection position in front of the body, and the line laser is reflected back to the camera at the preset detection position, wherein the length of the laser line segment formed by the line laser at the preset detection position is greater than the width of the robot's body; whenever the robot walks a preset traveling distance in the direction from its current position toward the preset detection position, the horizontal distance between the preset detection position and the robot becomes smaller, and the coordinate offset, relative to the center of the camera, of the pixel in the image collected by the camera that represents the same reflection position of the line laser at the preset detection position becomes larger.
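The two monotonic trends above (offset grows with the camera/emitter baseline and grows as the target gets closer) are consistent with the standard structured-light triangulation relation. The patent states only the trends; the explicit formula and all names below are illustrative assumptions.

```python
def pixel_offset_from_center(focal_length_px, baseline_m, depth_m):
    """Similar-triangles sketch of the trends described in the text:
    the pixel offset of the laser reflection from the camera center is
    proportional to the camera/emitter baseline and inversely
    proportional to the distance to the reflection point.
    """
    return focal_length_px * baseline_m / depth_m
```

For example, halving the distance to the preset detection position doubles the offset, and doubling the baseline at a fixed distance also doubles it, matching both trends in the paragraph above.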
• the technical effect of the present invention is that, in the process of executing the laser positioning method to track the image of the reflected light of the line laser, there is no need to use an infrared filter to filter out ambient light; all details of the infrared and visible light bands are retained in the collected image, so that the robot can search the current frame image for the position information associated with the pixels formed by the line laser in the imaging plane of the camera, including applying appropriate algorithms (such as pixel-matching algorithms and pixel-search algorithms) in the bright frame images and the dark frame images respectively to extract the line laser position and achieve laser positioning, which can then be used for map navigation and for deep learning to identify obstacles.
• when the robot detects that the current frame image collected by the camera is a bright frame image, it inputs the current frame image into the processing rule model corresponding to the inter-frame tracking algorithm to output a valid laser position, which, in scenes where the camera is not too close to obstacles, effectively filters out various kinds of ambient light interference and reduces dependence on infrared filters; specifically, among the pixels that conform to the convex hull characteristics, the convex hull center pixel is screened out according to the numerical relationship between the brightness value gradient generated among the pixels searched upward and the brightness value gradient generated among the pixels searched downward, the difference between the search states corresponding to two adjacent search centers, and the brightness value of the currently searched pixel relative to the convexity value last determined in the same column of the same frame image.
• when the robot detects that the current frame image collected by the camera is a dark frame image, it inputs the current frame image into the processing rule model corresponding to the brightness center-of-gravity algorithm to output a positioning line segment of reasonable connection length.
• the present invention combines the inter-frame tracking algorithm and the brightness center-of-gravity algorithm so that they complement each other under various ambient light intensities, completing laser positioning in the alternately generated bright frame images and dark frame images that capture the reflected light of the line laser.
• the present invention also introduces a dynamic adjustment method for the exposure value of the camera, adjusting the gain and exposure time of the camera according to the current environmental conditions so that the image seen by the camera is neither overexposed nor underexposed; on this basis, the power level at which the line laser emitter emits the line laser is adjusted: a stronger line laser emission power level is used in a high-brightness environment, so that the line laser on obstacles remains visible under bright ambient light without overexposing the image (for example, under strong outdoor ambient light, after the line laser is reflected back to the camera by a white obstacle, the gain or exposure time of the camera is reduced to avoid overexposure of the collected image due to the overly bright environment, so that a more accurate line laser position can be found); a weaker line laser emission power level is used in a low-brightness environment, so that the obstacle image is not overexposed under darker ambient light.
  • FIG. 1 is a flow chart of a laser positioning method based on image information according to an embodiment of the present invention.
• Embodiments of the present invention disclose a laser positioning method based on image information, which targets the reflection position of the laser light on the surface to be measured and, based on changes in the brightness values of pixels in the relevant two frames of images collected by the camera (corresponding to changes in ambient light intensity), adaptively filters out representative laser line position information to overcome the interference of ambient light, thereby improving obstacle detection accuracy and the robot's obstacle avoidance efficiency.
  • the execution subject of the laser positioning method disclosed in the embodiment of the present invention is a robot that relies on structured light navigation and positioning.
  • the robot is equipped with a structured light module.
  • the structured light module includes a line laser emitter and a camera without an infrared filter.
• the image collected by the camera retains the imaging information of infrared light and the imaging information of visible light; the line laser emitted by the line laser emitter is located within the field of view of the camera, the line laser can be projected onto the surface of an obstacle, and the camera's field of view covers all or part of the outline of the obstacle.
• whether there is an obstacle in the robot's forward direction can be detected through the structured light module set on the robot; when the robot walks toward the obstacle, the laser positioning method is executed to improve the detection accuracy of the reflection position on the obstacle surface, thereby improving positioning and obstacle avoidance accuracy.
  • the structured light module used in the embodiments of this application generally refers to any sensor module including a line laser emitter and a camera.
  • the line laser emitter is used to emit line laser outwards.
• the line laser emitted by the line laser emitter can be located in the effective detection area in front of the robot, and the camera can sequentially collect multiple frames of images, including infrared light imaging information and visible light imaging information, under various ambient light conditions.
  • visible light imaging information can be directly used to construct maps and mark the locations of obstacles in the map.
• the main purpose of the camera is to receive the image of the reflected light returned by the line laser after hitting the object to be measured, which requires overcoming the interference of ambient light in different bands.
• the line laser emitted by the line laser emitter is located within the field of view of the camera and forms a laser line segment on the surface of the object to be measured or on the horizontal ground.
• the line laser can help detect information such as the contour, height and/or width of objects in the robot's direction of travel; this embodiment mainly extracts the height information of objects, adapting to the needs of the inter-frame tracking algorithm. Compared with perception solutions based only on image sensors, the line laser emitter can provide the camera with more accurate pixel height and direction information, which reduces the complexity of perception operations and improves real-time performance.
• the working principle of the structured light module is as follows: the line laser emitter emits line laser outward; after the emitted line laser reaches the obstacle surface, part of it is reflected back and forms pixels of an image through the optical imaging system in the camera. Since the distances from different points on the object surface differ, the flight times of the reflected light differ; by measuring the flight time of the reflected light, each pixel can obtain independent distance information and direction information, from which height information and width information are obtained through trigonometric conversion and marked as coordinate information of pixels on the image, collectively called position information. While the robot is traveling, on the one hand it can control the line laser emitter in the structured light module to emit line laser to the outside world.
• on the other hand, the camera in the structured light module is controlled to collect the environment image of the area in front.
  • a laser line segment will be formed on the surface of the object.
• this laser line segment can be collected by the camera, that is, the image collected by the camera will contain the laser line segment formed after the line laser emitted by the line laser emitter encounters an object; there is no limit on the angle between the laser line segment formed by the line laser on the surface of the object and the horizontal plane.
  • each laser line segment contains multiple pixels, and each pixel corresponds to a point on the obstacle surface.
• the points on the obstacle surface represented by the pixels on the laser line segments in a large number of environmental images can form obstacle point cloud data.
• the coordinate system used by the obstacle point cloud data can be the coordinate system where the robot is located; in this case, the robot can convert the pixel coordinates on the laser line segment into the coordinate system where the robot is located based on the conversion relationship between the image coordinate system where the camera is located and the coordinate system where the robot is located, thereby obtaining the obstacle point cloud data.
  • the coordinate system used by these obstacle point cloud data can also be the world coordinate system.
• in this case, the robot can convert the pixel coordinates on the laser line segment into the world coordinate system based on the conversion relationships among the coordinate system of the camera, the coordinate system of the robot, and the world coordinate system, thereby obtaining the obstacle point cloud data;
  • the obstacle point cloud data can include but is not limited to the three-dimensional coordinate information of the point, color information, reflection intensity information, etc.
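The conversion chain described above (image coordinates, through the camera frame, into the robot's coordinate system) can be sketched as follows. The patent does not give the camera model or the transform representation; a pinhole back-projection with calibrated intrinsics and a 4x4 homogeneous extrinsic transform is assumed here, and all names are illustrative.

```python
def pixel_to_robot_frame(u, v, depth, fx, fy, cx, cy, cam_to_robot):
    """Convert one laser pixel (u, v) with measured depth into the robot's
    coordinate system: back-project through an assumed pinhole model into
    the camera frame, then apply the camera-to-robot extrinsic transform.
    cam_to_robot is a 4x4 row-major nested list; fx, fy, cx, cy are
    intrinsics assumed known from calibration.
    """
    # Pinhole back-projection: image plane -> camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    p = [x, y, z, 1.0]
    # Homogeneous transform: camera frame -> robot frame.
    return [sum(cam_to_robot[r][c] * p[c] for c in range(4)) for r in range(3)]
```

Applying this to every pixel on the detected laser line segments yields the obstacle point cloud data; swapping in a robot-to-world transform produces the world-frame point cloud mentioned above.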
• the height information and the length and width information of the obstacle are thus obtained, and the type of obstacle can be identified based on the obstacle point cloud data. In this embodiment of the present application, the manner of identifying the type of obstacle and the area it occupies through the obstacle point cloud data is not limited.
  • obstacle point cloud data can be input into a deep learning model to identify the type of obstacle.
• obstacles can also be depicted based on the obstacle point cloud data to obtain obstacle points and obstacle outlines, and the type of obstacle can be determined from the obstacle outlines, for example through cluster analysis, threshold filtering, and confidence judgment.
  • the implementation form of the line laser emitter is not limited, and it can be any device/product form capable of emitting line laser.
  • the line laser emitter may be, but is not limited to, a laser tube.
  • the implementation form of the camera is not limited. Any visual device that can collect environmental images is applicable to the embodiment of the present application.
  • cameras may include but are not limited to monocular cameras, binocular cameras, etc.
  • the wavelength of the line laser emitted by the line laser emitter can be limited to the wavelength of infrared light, for example, it can be an infrared laser.
• the lens of the camera may be provided without an infrared filter.
• the installation position relationship between the line laser emitter and the camera module is not limited.
  • the number of line laser emitters is not limited. For example, it may be one, two or more.
  • the number of cameras is not limited, for example, it can be one, or two or more.
  • the field of view of the camera includes a vertical field of view and a horizontal field of view.
  • a camera with a suitable field of view can be selected according to application requirements, as long as the line laser emitted by the line laser emitter is within the field of view of the camera.
• the angle between the horizontal plane and the laser line segments formed by the line laser on the surface of the object is not limited; for example, the segments can be parallel or perpendicular to the horizontal plane, or at any other angle to it, as determined by application requirements.
• the installation height of the structured light module composed of a line laser emitter and a camera needs to be determined based on the size of the obstacles to be detected: the higher the structured light module is installed on the robot, the larger the vertical space covered in front of the robot and the worse the detection accuracy for smaller obstacles; the lower the structured light module is installed, the smaller the vertical space covered in front of the robot and the better the detection accuracy for smaller obstacles.
  • the line laser emitter is installed above the camera without an infrared filter, and the center line of the line laser emitter intersects the center line of the camera at one point.
• the laser positioning method based on image information disclosed by the present invention includes: the robot controls the camera to collect the image of the light reflected back, on the surface of the object to be measured, by the line laser emitted by the line laser emitter, and detects the light/dark type of the image collected by the camera, specifically distinguishing bright frame images from dark frame images; that is, the robot detects whether each frame of image collected by the camera is a bright frame image or a dark frame image. In some embodiments, while the robot is traveling toward the predetermined target position, the line laser emitter is controlled to emit line laser, and the camera is controlled to collect the image of the light reflected by the line laser on the surface of the object to be measured.
• while the structured light module is working, the line laser emitter emits line laser outward according to the preset modulation period and power level; the camera periodically collects images to obtain a set of image sequences.
• a set of image sequences includes the data of at least one frame of image; each frame of image contains the laser line segment formed by the line laser hitting the surface of an object or the ground, and each laser line segment contains multiple coordinate data. The coordinate data on the laser line segments in a large number of environmental images can form point cloud data.
• the method for detecting whether the image collected by the camera is a bright frame image or a dark frame image includes: controlling the line laser emitter to emit the line laser according to the preset modulation period.
• the line laser is an infrared laser modulation signal.
• the infrared laser modulation signal outputs the first level (corresponding to a logic high level) in the first modulation sub-period, and a bright frame image is formed after being collected by the camera; the infrared laser modulation signal outputs the second level in the second modulation sub-period, and a dark frame image is formed after being collected by the camera.
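The alternation implied by this modulation scheme can be sketched with a trivial parity rule: odd-numbered frames (laser driven high) are bright, even-numbered frames (laser driven low) are dark. This is an illustrative simplification of the sub-period scheme described above; the function name and 1-based indexing (matching the text's "first frame image") are assumptions.

```python
def frame_type(frame_index):
    """Sketch of the alternating bright/dark frame sequence: the
    modulation signal is high during the first sub-period of each
    modulation period (bright frame) and low during the second
    (dark frame), so frames alternate. Indices start at 1.
    """
    return "bright" if frame_index % 2 == 1 else "dark"
```

This parity rule reproduces the sequence described in the following paragraphs: first frame bright, second frame dark, third frame bright, and so on.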
• the robot sets an image structure (structural information of the image data) for each frame of image taken from the camera, caches it, and marks the light/dark attribute of the line laser accordingly, where each frame of image can be saved as the previous frame image for tracking and matching.
• the image sequence formed in the imaging plane of the camera is configured to alternately generate bright frame images and dark frame images, so that: when the current frame image collected by the camera is a bright frame image, the next frame image collected by the camera is a dark frame image; during the time interval between the camera collecting the current bright frame image and the camera collecting the next bright frame image, which equals one sampling period of the camera, the camera collects the current dark frame image; after the camera collects the next bright frame image, the camera collects the next dark frame image.
• the first frame image of the light reflected by the line laser on the surface of the object to be measured collected by the camera is a bright frame image, recorded as the first frame image after the line laser emitter emits the line laser, and can also be recorded as the first bright frame image; the second frame image collected by the camera is a dark frame image, recorded as the second frame image after the line laser emitter emits the line laser, and can also be recorded as the first dark frame image; the third frame image collected by the camera is a bright frame image, recorded as the third frame image after the line laser emitter emits the line laser, and can also be recorded as the second bright frame image; the next frame image collected by the camera is then a dark frame image. In this way, based on the above alternating generation manner, the robot distinguishes the bright frame images and dark frame images within a set of image sequences collected by the camera.
• distinguishing bright frame images from dark frame images can be achieved based on the average gray value of the image. Specifically, for each collected frame of image, all pixels of the frame are first traversed, the sum of the brightness values of all pixels is accumulated, and the quotient of that sum and the number of pixels is taken as the average brightness value of the frame image.
• when the robot detects that the brightness values of a preset threshold number of pixels in the frame image are greater than the average brightness value, it sets the frame image as a bright frame image, which can be used to search for pixels that meet the optimal convex hull condition in scenes with strong ambient light, allowing the camera's exposure to be increased; when the robot detects that the brightness values of a preset threshold number of pixels in the frame image are less than the average brightness value, it sets the frame image as a dark frame image, which can be used to locate pixels that meet the optimal convex hull condition in scenes with weak ambient light, allowing the camera's exposure to be reduced; this improves the adaptability of the laser positioning method to ambient light intensity.
  • the preset threshold number is greater than or equal to the number of all pixels in the frame image.
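The average-brightness classification above can be sketched as follows, taking the frame as a flat list of brightness values; the function name and the "undetermined" fallback for frames meeting neither condition are illustrative assumptions.

```python
def classify_frame(pixels, threshold_count):
    """Classify a frame as bright or dark per the rule described above.

    The frame's average brightness is the sum of all pixel brightness
    values divided by the pixel count; the frame is bright when at least
    `threshold_count` pixels exceed that average, and dark when at least
    `threshold_count` pixels fall below it.
    """
    avg = sum(pixels) / len(pixels)
    above = sum(1 for p in pixels if p > avg)
    below = sum(1 for p in pixels if p < avg)
    if above >= threshold_count:
        return "bright"
    if below >= threshold_count:
        return "dark"
    return "undetermined"
```

For instance, a frame in which most pixels sit above the mean (a few dark outliers pulling the average down) classifies as bright, while a frame dominated by sub-average pixels classifies as dark.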
• when the robot detects that the current frame image collected by the camera is a bright frame image, it searches for the line laser position in the current frame image by executing an inter-frame tracking algorithm, and then sets the coordinates of the line laser position as the positioning coordinates, within the current frame image, of the line laser emitted by the line laser emitter, so as to obtain the laser line segment connected by the line laser positions, which facilitates the positioning of the object to be measured;
• the images input into the processing rule model corresponding to the inter-frame tracking algorithm can be divided into the current frame image and the previous frame image, or the current frame image and the next frame image; the matching relationship between the previous bright frame image and the current bright frame image, and/or the matching relationship between the previous dark frame image and the current bright frame image, can be used to track the reflection position of the line laser, effectively filtering out various kinds of ambient light interference, especially strong ambient light, in scenes where the camera is not too close to the obstacle, and overcoming distance changes between the camera and the object to be measured.
• when the robot detects that the current frame image collected by the camera is a dark frame image, it extracts the line laser position from the current frame image by executing the brightness center-of-gravity algorithm, and then sets the coordinates of the line laser position as the positioning coordinates, within the current frame image, of the line laser emitted by the line laser emitter, thereby positioning the complete pixel positions of the line laser in the process of alternately executing the inter-frame tracking algorithm and the brightness center-of-gravity algorithm; the robot inputs each such frame image into the processing rule model corresponding to the brightness center-of-gravity algorithm and outputs the effective line laser position in the dark frame image of the corresponding frame.
• the effective line laser position is the line laser position of predictive significance determined at the moment the dark frame image is collected, used to assist the line laser positions within the corresponding bright frame image in connecting relatively accurate laser line segments. Since, when executing the inter-frame tracking algorithm, the robot focuses only on sensitivity to changes in ambient light intensity and ignores the distance between the robot and the obstacle, the robot can easily collide with obstacles while walking.
• the robot therefore turns to the brightness center-of-gravity algorithm so that, in a scene where the camera is too close to an obstacle, it can both notice the existence of the obstacle and promptly identify the reflection position of the line laser, preventing the robot from hitting the obstacle in front of it; as for misjudgment of the line laser position due to ambient light interference, after the collected current frame image switches from a dark frame image back to a bright frame image, the robot overcomes the corresponding misjudgment problem by executing the inter-frame tracking algorithm.
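The brightness center-of-gravity step applied to dark frames can be sketched per image column as a brightness-weighted mean of row indices. The patent does not give the algorithm's exact formula, so the standard centroid formulation is assumed here; the function name is illustrative.

```python
def column_brightness_centroid(column):
    """Brightness center of gravity for one image column: estimate the
    laser row as the brightness-weighted mean row index (a standard
    centroid formulation assumed for the algorithm named in the text).
    `column` is a top-to-bottom list of brightness values.
    """
    total = sum(column)
    if total == 0:
        return None  # no light in this column, no line laser position
    return sum(row * b for row, b in enumerate(column)) / total
```

Running this over every column of a dark frame yields one sub-pixel row estimate per column; connecting those estimates gives the positioning line segment the text describes.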
• the embodiment of the present invention responds to the interference of ambient light intensity by switching between the inter-frame tracking algorithm and the brightness center-of-gravity algorithm, enhances the stability of the robot's laser positioning in various walking environments, and stably completes laser positioning in the alternately generated bright frame images and dark frame images of the reflected light of the line laser.
• in the process of executing the laser positioning method to track the image of the reflected light of the line laser, there is no need to use an infrared filter to filter ambient light, so all details of the infrared and visible light bands are retained in the collected image. This makes it convenient for the robot to search the current frame image for the position information associated with the pixels formed by the line laser in the imaging plane of the camera, including adopting appropriate algorithms (such as pixel-matching algorithms and pixel-search algorithms) to extract each line laser position in a complementary manner, achieving laser positioning of obstacles at different distances under various ambient light intensities; the robot can thus overcome the interference of ambient light of different intensities in various walking environments, which can then be used for map navigation and for deep learning to identify obstacles.
  • the method for the robot to search for the line laser position from the current frame image by executing an inter-frame tracking algorithm includes:
• Step 1: The robot traverses the current frame image column by column and obtains the initial pixel position in the current column of the current frame image. Generally, one initial pixel position can be obtained in each column of the current frame image.
• the initial pixel position is used as the starting point for searching, within the column, for pixels that meet the optimal convex hull condition. It should be noted that the initial pixel position is formed, when there is no obstacle in front of the robot (or no obstacle within the field of view of the camera), at the position in the image collected by the camera where the line laser emitted by the line laser emitter is reflected back into the camera's field of view from the robot's traveling plane (generally the ground).
• in this case the robot's traveling plane serves as the surface of the object to be measured; each initial pixel position corresponds to a reflection position on the robot's traveling plane and serves as the search starting point for searching for the line laser position in each column of the same frame image. The initial pixel positions obtained in the same frame image are preferably located on the same row and may include adjacent pixels in the same row.
  • the object to be measured may be an obstacle protruding on the traveling plane of the robot.
• before starting to search the current frame image for pixels that conform to the convex hull characteristics, columns in which no line laser position exists in the current frame image are excluded according to whether the pixels in the corresponding column conform to the preset brightness distribution characteristics; the excluded pixels, in which no line laser position exists, are pixels subject to strong ambient light interference.
• Step 2: Excluding the columns determined in step 1 to contain no line laser position, the robot sequentially traverses the relevant columns of the current frame image, specifically traversing the pixels in the columns where an initial pixel position exists. In the current column of the current frame image, the robot sets the initial pixel position of the current column as the search center and then searches upward from the search center along the current column for pixels within one search radius. Optionally, starting from the search center of the current column, the robot sets the first preset pixel distance as the search radius and searches upward and downward along the column direction for pixels that conform to the convex hull characteristics, where the current column is the column currently traversed by the robot and the search radius applies to both directions simultaneously.
• based on the inter-frame matching relationship formed, relative to the reference frame image, by the same type of values of the pixels in the same column, the convex hull center pixel in the current column is screened out, and the screened-out convex hull center pixel is updated to be the convex hull center pixel last determined in the current column of the current frame image; whenever the search center in the current column is updated, the convex hull center pixel set in the current column is also updated once, and after every pixel within one search radius relative to the initial pixel position has been traversed within the current column and the convex hull center pixel updated accordingly, the final convex hull center pixel of the current column is determined.
• the convex hull center pixel finally determined is the convex hull center pixel closest to the ground in the current column of the current frame image.
• the reference frame image is configured as the bright frame image in which the robot's latest line laser position, found before the current frame image was collected, is located; the robot's latest line laser position is screened out from the convex hull center pixels of the corresponding columns.
• the comparison between the brightness values of the pixels searched upward and the brightness values of the pixels searched downward can be extended to the numerical relationship between the brightness value gradient generated among the pixels searched upward and the brightness value gradient generated among the pixels searched downward; this relationship can be compared across the search states corresponding to two adjacent search centers, which is conducive to screening the convex hull center pixel out of the searched pixels that conform to the convex hull characteristics, and the currently determined search center and the previously determined search center are in the same column.
• the currently determined search center and the previously determined search center can be two adjacent search centers determined by searching, in two rounds starting from the initial pixel position in the upward direction of the current column, for pixels within the search radius and by screening and updating the convex hull center pixel of the same column; they can also be two adjacent search centers determined by searching, in two rounds starting from the initial pixel position in the downward direction of the current column, for pixels within the search radius and by screening and updating the convex hull center pixel of the same column.
• one round of search corresponds to one search center and corresponds to one search state within the pixel area of the corresponding column.
• the update range of the search center, relative to the initial pixel position, is within a coverage area of one search radius that includes the initial pixel position, which makes it convenient to update the convex hull center pixel within the same column and continuously screen out a more accurate convex hull center to represent the line laser position; the inter-frame matching relationship includes matching between the two frames of images in the number of pixels and matching in brightness value.
• the two frames of images can be two adjacent frames of images, or can be separated by one or more frames of images.
• the specific matching of two bright frame images can be based on the change in the brightness value and the change in the ordinate of the pixels at the same reflection position of the line laser on the obstacle surface in the images collected in real time while the robot is walking.
  • After the robot, starting from the search center, has traversed upward along the current column every pixel within one search radius and filtered and updated the convex hull center pixel, and has likewise traversed downward along the current column every pixel within one search radius and filtered and updated the convex hull center pixel, the robot begins to traverse the pixels within one search radius starting from the initial pixel position of the next column of the current frame image.
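As a rough illustration, the column-by-column traversal described above (visit one search radius upward and one search radius downward from each column's initial pixel position, then move to the next column) might be sketched as follows; the row-major image layout, the function name, and the returned data structure are assumptions for illustration, not part of the patent:

```python
def traverse_columns(image, initial_rows, search_radius):
    """For each column, visit every pixel within `search_radius` above and
    below that column's initial pixel position (the assumed search center),
    then move on to the next column of the frame."""
    n_rows = len(image)
    visited = {}
    for col, row0 in enumerate(initial_rows):
        rows = []
        # upward direction first (decreasing row index), clamped to the frame
        for r in range(row0, max(row0 - search_radius, 0) - 1, -1):
            rows.append(r)
        # then the downward direction (increasing row index)
        for r in range(row0 + 1, min(row0 + search_radius, n_rows - 1) + 1):
            rows.append(r)
        visited[col] = rows
    return visited
```

For a 10-row frame with initial positions at rows 5 and 0 and a radius of 2, column 0 visits rows 5, 4, 3, 6, 7 and column 1 visits rows 0, 1, 2.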
  • In step 1, when the current frame image is a bright frame image and the previous frame image is a dark frame image, the robot saves the previous frame image. If, before executing the current step 1, the robot has already searched the corresponding line laser position from a previous frame image (including the line laser position on the corresponding column, or the line laser positions on all columns), then that previous frame image is marked as the reference frame image; preferably, the reference frame image is configured as the bright frame image, collected before the current frame image, in which the latest line laser position found by the robot is located.
  • The latest line laser position found by the robot is derived from the convex hull center pixel of the corresponding column; specifically, within that column, with the initial pixel position as the center, after all pixel points within one search-radius coverage area in the upward direction and one search-radius coverage area in the downward direction have been sequentially updated as the search center, step 2 is executed to set a new convex hull center pixel.
  • In step 3, for the convex hull center pixels that the robot has selected from each column of pixels, the robot uses the relationship between the brightness values within the effective coverage area corresponding to the positioning coordinates of the line laser emitted by the line laser emitter in the previous dark frame image and the brightness values of the convex hull center pixels in the current frame image, first to determine which convex hull center pixels are interference points, and then to eliminate them from the filtered convex hull center pixels.
  • For the interference points, column-by-column traversal may be selected to eliminate the interference points existing in the current frame image, so as to eliminate interference from ambient light.
  • After the robot has traversed the convex hull center pixels among all columns of pixels in the current frame image and eliminated all interference points, the coordinates of the remaining convex hull center pixels are set as the positioning coordinates of the line laser emitted by the line laser emitter in the current frame image. The robot then connects the line laser positions determined in each column of the current frame image into the laser line segment formed by the line laser emitted by the line laser emitter on the surface of the object to be measured, and determines that it has searched the line laser position from the current frame image by executing the inter-frame tracking algorithm; here, the line laser position determined in a column is the convex hull center pixel last updated in that column after the robot has traversed all the pixels of that column.
  • The coordinates of a line laser position are represented by the corresponding positioning coordinates. Since the robot, in executing steps 1 to 3, traverses the current frame image column by column to obtain the line laser position of each column, once the currently traversed column number is determined, the coordinates of each line laser position may be expressed using only the ordinate value, which identifies the height information of the reflection position of the line laser on the obstacle surface and can also be used for robot obstacle avoidance.
  • In step 2, except for the columns in which no pixel point of the line laser position exists, the robot sets the initial pixel position obtained in the current column in step 1 as the search center, that is, the initial pixel position from step 2 of the previous embodiment.
  • The adjacent pixel searched upward or downward from the search center along the current column is then updated as the search center, and step 2 is re-executed.
  • The first preset pixel distance is smaller than the maximum pixel distance covered by the current frame image and is within the detection range of the camera, so that the search for the convex hull center pixel is limited to the vicinity of the initial pixel position.
  • The convex hull center pixel is a pixel that conforms to the convex hull feature within the coverage range of the search radius.
  • The convex hull feature here is used to represent the characteristics of the pattern formed by the line laser hitting the surface of an obstacle.
  • The pixel characteristics of the coverage area of the pattern may be the brightness values of the pixels within the coverage area of the pattern itself, or within the inscribed-circle coverage area of the pattern.
  • The filtered convex hull center pixel is, in each column of the current frame image in which a convex hull center pixel can be searched, the last updated convex hull center pixel; it is also the convex hull center pixel in each column that is closest to the origin of the coordinate system of the current frame image.
  • A convex hull center pixel is not necessarily updated in every column, so the finally connected laser line segment is not continuous and can be used to represent raised obstacles on the ground where the robot is traveling.
  • The robot sets the set of pixel points conforming to the convex hull feature on the current column of the current frame image as the pixel points whose brightness values decrease from the convex hull center along the current column toward the upper and lower sides respectively, where the convex hull center is a pixel point.
  • The brightness values decreasing upward along the current column generate a first gradient value, and the brightness values decreasing downward along the current column generate a second gradient value.
  • The pixel distance that must be traversed while decreasing upward from the convex hull center may be less than or equal to the search radius, and the pixel distance that must be traversed while decreasing downward from the convex hull center may also be less than or equal to the search radius, forming a predetermined gradient change pattern of brightness values within the convex hull.
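A minimal sketch of the convex hull feature described above: brightness values that fall away from the center toward both sides within the search radius. A strictly monotonic decrease is assumed here in place of the adaptive first/second gradient values, and the function name is illustrative:

```python
def is_convex_hull_center(column, row, radius):
    """Return True if brightness decreases from `row` both upward and
    downward within `radius` pixels along `column` (a list of brightness
    values indexed by row, clamped at the column boundaries)."""
    up = column[max(row - radius, 0):row + 1]                   # top .. center
    down = column[row:min(row + radius, len(column) - 1) + 1]   # center .. bottom
    rises_to_center = all(a < b for a, b in zip(up, up[1:]))    # brighter toward center
    falls_from_center = all(a > b for a, b in zip(down, down[1:]))
    return rises_to_center and falls_from_center
```

For the column `[10, 50, 120, 255, 120, 60, 10]`, row 3 (the 255 peak) satisfies the feature with radius 3, while row 1 does not.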
  • The brightness value of the search center is 255, that is, the maximum gray value (maximum gray level) into which the brightness values of the image are divided according to the binarization method.
  • The brightness of a pixel is used to represent the brightness of the light shining on the surface of the object to be measured; the gray value is used to represent its brightness value, so the higher the gray value, the brighter the image, and the larger the brightness value.
  • The grayscale image formed by image binarization contains only brightness information and no color information; like a black-and-white picture, the brightness changes continuously from dark to light. Therefore, to represent the grayscale image, the brightness values need to be quantified, usually into a total of 256 levels from 0 to 255.
  • The value 255 is used in this embodiment to represent a brightness value; this embodiment thus uses the value range of the brightness value of a pixel.
  • Each frame of image collected by the camera can be regarded as converted into a grayscale image, where brightness is grayscale: the larger the grayscale value, the larger the brightness value.
  • The value 0 can represent the blackest pixel, and the value 255 can represent the whitest pixel.
  • The pixels mentioned in this embodiment are irreducible units in a frame of image. Each frame of image is composed of many pixels; a pixel exists as a small grid of a single color and can be mapped to a cell (grid) in a grid map. A grayscale image uses one byte of storage per pixel.
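The 256-level quantization described above can be illustrated with a conventional luminance conversion; the weighting coefficients below are a common convention and an assumption here, since the embodiment only specifies the 0 to 255 range:

```python
def to_gray(r, g, b):
    """Quantize an RGB pixel to one of 256 gray levels (0 = blackest,
    255 = whitest) using the common ITU-R BT.601 luminance weights,
    which are an assumed convention, not specified by the embodiment."""
    return min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
```

Pure black maps to 0, pure white to 255, and every input stays within one byte of storage per pixel.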
  • In step 2, for each column of each frame image that has an initial pixel position, except for the columns in which no pixel point of the line laser position exists, the convex hull center pixel is filtered out based on the differences between the brightness values of upward-searched pixels and the brightness values of downward-searched pixels in the search states corresponding to two adjacently determined search centers, and the inter-frame matching relationship formed by values of the same type in the same column of pixels in the current frame image relative to the reference frame image. The method of filtering out the convex hull center pixel includes:
  • The robot, starting from the search center, searches upward or downward along the column direction for pixel points conforming to the convex hull feature, and compares the brightness value of the search center with the brightness value of the convex hull center pixel in the same column that was searched last time; the difference in brightness values may be used, but is not limited to, to determine the size relationship between the two.
  • The convex hull center pixel in the same column that was searched last time is the convex hull center pixel filtered out in the same column of the current frame image for the last determined search center; the last determined search center is adjacent to the currently determined search center in the current frame image.
  • The column ordering of the pixels in the same column of the reference frame image is equal to the column ordering of the current column of the current frame image, but the convex hull center pixels do not necessarily have the same row ordering.
  • If the robot detects that the brightness value of the search center is greater than the brightness value of the convex hull center pixel in the same column that was searched last time, then in the current column it searches upward from the search center for pixels and counts the number of pixels whose brightness values decrease according to the first gradient value; that is, each time a pixel with decreasing brightness is searched upward, and the brightness value of the currently searched pixel is reduced by one first gradient value relative to the brightness value of the last searched pixel (the brightness value of the pixel below the currently searched pixel), the count of pixels whose brightness values decrease according to the first gradient value is incremented once. This can be understood as the robot searching upward along the current column of the current frame image for pixels conforming to the convex hull feature until the upward counting stop condition is met.
  • The first gradient value changes adaptively as the number of searches changes: for example, the closer the currently searched pixel is to the upper edge of the convex hull, the larger the first gradient value becomes, so the brightness value of a pixel closer to the upper edge of the convex hull decreases more sharply, satisfying an established brightness-value gradient change rule.
  • When the upward counting stop condition is met, the counting stops, the robot also stops searching for pixels along that column direction, and the number of pixels whose brightness values decrease according to the first gradient value is marked as the upward gradient descent number.
  • The robot also searches downward for pixels from the search center and counts the number of pixels whose brightness values decrease according to the second gradient value; that is, each time a pixel with decreasing brightness is searched downward, and the brightness value of the currently searched pixel is reduced by one second gradient value relative to the brightness value of the last searched pixel (the brightness value of the pixel above the currently searched pixel), the count of pixels whose brightness values decrease according to the second gradient value is incremented once. This can be understood as the robot searching downward along the current column of the current frame image for pixels conforming to the convex hull feature until the downward counting stop condition is met.
  • The second gradient value changes adaptively as the number of searches changes: for example, the closer the currently searched pixel is to the lower edge of the convex hull, the larger the second gradient value becomes, so the brightness value of a pixel closer to the lower edge of the convex hull decreases more drastically, satisfying an established brightness-value gradient change rule. When the downward counting stop condition is met, counting of the pixels whose brightness values decrease according to the second gradient value stops, and that number is marked as the downward gradient descent number.
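The upward and downward gradient descent counts might be sketched as follows. The adaptive first/second gradient value is modelled by a caller-supplied `gradient_of(step)` function; this parameterization, and the function names, are assumptions about how the counting rule could be expressed:

```python
def count_gradient_descent(column, center, radius, direction, gradient_of):
    """Count pixels whose brightness keeps decreasing step by step in one
    direction from the search center. `direction` is -1 (upward, decreasing
    row index) or +1 (downward); `gradient_of(step)` is the minimum required
    decrease at the given step. Counting stops at the first pixel that does
    not decrease sufficiently, or at the search-radius boundary."""
    count = 0
    prev = column[center]
    for step in range(1, radius + 1):
        r = center + direction * step
        if r < 0 or r >= len(column):
            break                                  # clamped at the frame edge
        if prev - column[r] >= gradient_of(step):  # brightness decreased enough
            count += 1
            prev = column[r]
        else:
            break                                  # gradient anomaly: stop counting
    return count
```

With a peak at row 3 of `[10, 60, 150, 255, 140, 70, 20]` and a constant unit gradient, both the upward and downward gradient descent numbers come out as 3.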
  • After determining to stop the upward search and counting from the search center along the current column of the current frame image, and determining to stop the downward search and counting from the search center along the current column of the current frame image, when the robot determines that the upward gradient descent number counted in the current column of the current frame image is greater than or equal to the upward gradient descent number counted in the same column of the reference frame image (the column participating in the comparison in the reference frame image has a column ordering equal to that of the current column of the current frame image, so this may also be counted as the upward gradient descent number of the current column of the reference frame image), and/or determines that the downward gradient descent number counted in the current column of the current frame image is greater than or equal to the downward gradient descent number counted in the same column of the reference frame image (likewise, the column participating in the comparison in the reference frame image has the same column ordering as the current column of the current frame image, and may be counted as the current column of the reference frame image), then after the current frame image switches to a dark frame image, or after the next frame image (a dark frame image) is collected, the robot switches to executing the brightness center of gravity algorithm to avoid obstacles.
  • On this basis, among the pixels traversed in the current column of the current frame image, if it is detected that neither the first gradient value nor the second gradient value is equal to the first preset gradient parameter, that the absolute value of the difference between the first gradient value and the second gradient value is less than the second preset gradient parameter, that the absolute value of the difference between the brightness value of the pixel with the smallest brightness value found by searching upward along the current column and the brightness value of the pixel at the currently determined search center is greater than the absolute value of the difference between brightness values of the same type formed by searching upward in the same column of pixels in the reference frame image, and that the absolute value of the difference between the brightness value of the pixel with the smallest brightness value found by searching downward along the current column and the brightness value of the pixel at the currently determined search center is greater than the corresponding absolute value formed by searching downward in the same column of pixels in the reference frame image, then the robot marks the currently determined search center as the convex hull center pixel and determines that the matching relationship formed by values of the same type in the same column of pixels of the current frame image relative to the reference frame image is consistent with the expected changes in the positions of the pixels within the convex hull during the robot's walking.
  • The absolute value of the difference between brightness values of the same type formed by searching upward in the same column of pixels in the reference frame image is the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched upward, starting from the finally determined search center of the column in the reference frame image whose column ordering equals that of the current column, and the brightness value of that finally determined search center; the distance between the pixel with the smallest brightness value searched upward and the finally determined search center on that column is less than or equal to the search radius.
  • The absolute value of the difference between brightness values of the same type formed by searching downward in the same column of pixels in the reference frame image is, likewise, the absolute value of the difference between the brightness value of the pixel with the smallest brightness value searched downward, starting from the finally determined search center of the column in the reference frame image whose column ordering equals that of the current column, and the brightness value of that finally determined search center; the distance between the pixel with the smallest brightness value searched downward and the finally determined search center on that column is less than or equal to the search radius.
  • The pixel offset (which can also be understood as the coordinate offset in the image coordinate system relative to the origin of the coordinate system) increases, further confirming that the robot is approaching the obstacle to be measured; in this process, the number of pixels used to characterize the same local area of the obstacle increases compared with before, so the number of pixels that can be searched in the current frame image increases and the robot can detect more detailed parts of the obstacle. This proves that the convex hull center pixel among the pixels searched in the current column of the current frame image is a point that relatively accurately represents the laser line segment of the line laser hitting the surface of the obstacle to be measured. Once all the pixels in the same frame image have been traversed and updated, the line laser position in each column is obtained and connected or fitted into a laser line segment representing the line laser, thereby realizing the positioning of the obstacle where the laser line segment is located and facilitating timely obstacle avoidance by the robot.
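Collecting the per-column line laser positions into the laser line segment, with gaps preserved where a column produced no convex hull center pixel (since the segment need not be continuous), might look like this; names are illustrative:

```python
def connect_laser_segment(line_positions):
    """Collect per-column line laser positions into the point list of the
    laser line segment. `line_positions[c]` is the row index (ordinate) of
    the line laser position in column c, or None when that column produced
    no convex hull center pixel; such columns leave a gap in the segment."""
    return [(col, row) for col, row in enumerate(line_positions) if row is not None]
```

For example, `connect_laser_segment([3, None, 5])` yields `[(0, 3), (2, 5)]`, a discontinuous segment that can represent a raised obstacle.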
  • The robot may first search for pixels upward from the search center along a column of the current frame image until all pixels within one search radius have been searched upward along that column, and then search for pixels downward from the same search center along that column until all pixels within one search radius have been searched downward.
  • Alternatively, the robot may first search for pixels downward from the search center along a column of the current frame image until all pixels within one search radius have been searched downward along that column, and then search for pixels upward from the same search center along that column until all pixels within one search radius have been searched upward.
  • When the camera collects the first frame image of the light reflected by the line laser on the surface of the object to be measured during execution of the laser positioning method, that frame serves as the reference frame image; the initial pixel position in each column of the reference frame image is the line laser position in that column of the reference frame image, which best represents a point of the laser line segment where the line laser hits the surface of the object to be measured.
  • The first frame image of the light reflected by the line laser on the surface of the object to be measured collected by the camera is a bright frame image, recorded as the first frame collected after the line laser emitter emits the line laser; it may also be recorded as the first bright frame image.
  • The first preset gradient parameter is smaller than the second preset gradient parameter. The first preset gradient parameter is preferably the value 0, to avoid selecting pixels conforming to the convex hull feature from pixel areas with constant brightness values (such as locally overexposed areas, even though their internal pixels might be a convex hull center); the second preset gradient parameter is preferably the value 25, to keep the coordinate jump of the pixels representing the same reflection position of the obstacle within a controllable range, avoid introducing pixels with drastic brightness changes, and focus only on the effective detection area of the obstacle to be measured. This not only reduces the amount of searching and calculation but also improves the detection accuracy.
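Under the preferred values quoted above (0 and 25), the gradient-parameter conditions could be checked as follows; the function name is illustrative, and the two constants are the embodiment's preferred values, not mandatory ones:

```python
FIRST_PRESET_GRADIENT = 0    # rejects flat (e.g. locally overexposed) pixel areas
SECOND_PRESET_GRADIENT = 25  # bounds the jump between the two side gradients

def gradients_acceptable(first_gradient, second_gradient):
    """Check the conditions described above: neither side gradient may equal
    the first preset parameter, and the two sides may not differ by more
    than the second preset parameter."""
    return (first_gradient != FIRST_PRESET_GRADIENT
            and second_gradient != FIRST_PRESET_GRADIENT
            and abs(first_gradient - second_gradient) < SECOND_PRESET_GRADIENT)
```

A pair like (10, 20) passes, while (0, 20) fails the flat-area test and (10, 40) fails the jump bound.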
  • Step 2 also includes stop conditions for searching pixels along the column direction (the upward counting stop condition and the downward counting stop condition), specifically including:
  • The brightness value of the search center of the current column of the current frame image (initially the brightness value of the pixel at the initial pixel position) is not equal to the brightness value of the reasonable convex hull center found last time, and as the robot approaches the obstacle, the difference between the two brightness values increases.
  • The convex hull center pixel searched last time in the same column is the convex hull center pixel filtered out in the same column of the current frame image based on the last determined search center; the last determined search center and the currently determined search center are pixel points adjacent downward or upward in the current column of the current frame image.
  • The robot starts to search pixels upward from the search center in the current column of the current frame image; the purpose of searching pixels upward from the search center is to filter out the convex hull center pixel in the current column by counting the pixels that conform to the convex hull feature.
  • In the current column of the current frame image, the robot counts upward from the search center the pixels whose brightness values decrease according to the first gradient value, preferably starting from the search center and, each time a pixel conforming to the convex hull feature is found upward along the current column, incrementing the count by one to obtain the number of pixels whose brightness values decrease according to the first gradient value; and in the current column of the current frame image, the robot counts downward from the search center the pixels whose brightness values decrease according to the second gradient value, so that pixels conforming to the convex hull feature are searched upward and downward along the column direction respectively starting from the corresponding initial pixel position, preferably starting from the search center and, each time a pixel conforming to the convex hull feature is found along the current column, incrementing the count by one to obtain the number of pixels whose brightness values decrease according to the second gradient value.
  • If the robot detects, during the upward search from the search center, that the brightness value of a pixel does not decrease according to the first gradient value, the robot increments the preset upward gradient anomaly count once, and then determines whether it has finished searching the pixels covered within the search radius along the current column of the current frame image. If so, the robot stops searching for pixels upward along the current column of the current frame image and determines that the upward counting stop condition is reached; it then performs step 2 to filter out the convex hull center pixel, updates the adjacent pixel searched upward from the search center along the current column as the search center, and repeats step 2 to update the convex hull center pixel, until all pixel points covered within the search radius upward relative to the initial pixel position have been searched.
  • Correspondingly, if the robot detects, during the downward search from the search center, that the brightness value of a pixel does not decrease according to the second gradient value, the preset downward gradient anomaly count is incremented once, and the robot then determines whether it has finished searching the pixels covered within the search radius along the current column of the current frame image. If so, the robot stops searching for pixels downward along the current column of the current frame image and determines that the downward counting stop condition is reached; it then filters out the convex hull center pixel according to the differences between the brightness values of the upward-searched pixels and the downward-searched pixels in the search states corresponding to the two adjacently determined search centers described in step 2, and the inter-frame matching relationship formed by values of the same type in the same column of pixels of the current frame image relative to the reference frame image, updates the adjacent pixel searched downward from the search center along the current column as the search center, and repeats step 2 to update the convex hull center pixel. Otherwise, when the downward gradient anomaly count is greater than the second preset error number, the robot stops searching for pixels downward along the current column of the current frame image and determines that the downward counting stop condition is met; it then likewise filters out the convex hull center pixel according to the differences between the brightness values of the upward-searched pixels and the downward-searched pixels in the search states corresponding to the two adjacently determined search centers, and the inter-frame matching relationship formed by values of the same type in the same column of pixels of the current frame image relative to the reference frame image, updates an adjacent pixel searched upward from the search center along the current column as the search center, and repeats step 2 to update the convex hull center pixel, until all pixels covered within the search radius upward relative to the initial pixel position have been searched.
  • The pixels searched within the search radius may fail to conform to the convex hull feature within a certain error allowance; since the search center is not necessarily the convex hull center, preset error numbers need to be set for judgment.
  • The source of the error is that the reflection position, on the surface of the same object to be measured, of the line laser emitted at the same angle and collected during the robot's walking will jump longitudinally; this is reflected in the fact that the pixel points representing the same reflection position in different frame images are shifted upward along the ordinate axis.
  • If the robot detects, while counting upward from the search center, that the brightness value of a pixel does not decrease according to the first gradient value, and/or detects, while counting downward from the search center, that the brightness value of a pixel does not decrease according to the second gradient value, it determines that the gradient value between the two adjacent pixel points searched along one of the column directions is abnormal, and increments the preset gradient anomaly count once. When the robot detects that the gradient anomaly count is greater than the preset error number, and/or has counted all the pixels covered within the search radius, it stops counting and stops searching for pixels conforming to the convex hull feature.
  • During the upward search from the search center, the robot counts the pixels whose brightness value is 255 and whose positions are adjacent along the current column of the current frame image, and marks the number of such pixels as the upward overexposure number, forming a count of the overexposure area in which the pixels above the search center continuously have a brightness value of 255.
  • The robot then determines whether it has finished searching the pixels covered within the search radius along the current column of the current frame image. If so, the robot stops searching for pixels upward along the current column of the current frame image, determines that the upward counting stop condition is reached, performs step 2 to filter out the convex hull center pixel, updates the adjacent pixel searched upward from the search center along the current column as the search center, and repeats step 2 to update the convex hull center pixel. Otherwise, when the robot detects that the upward overexposure number is greater than the third preset error number, it stops searching for pixels upward along the current column of the current frame image, determines that the upward counting stop condition is met, and then continues to perform step 2.
  • During the downward search from the search center, the robot counts the pixels whose brightness value is 255 and whose positions are adjacent along the current column of the current frame image, and marks the number of such pixels as the downward overexposure number, forming a count of the overexposure area in which the pixels below the search center continuously have a brightness value of 255.
  • The robot then determines whether it has finished searching the pixels covered within the search radius along the current column of the current frame image. If so, the robot stops searching for pixels downward along the current column of the current frame image, determines that the downward counting stop condition is reached, performs step 2 to filter out the convex hull center pixel, updates the adjacent pixel searched downward from the search center along the current column as the search center, and repeats step 2 to update the convex hull center pixel. Otherwise, when the robot detects that the downward overexposure number is greater than the fourth preset error number, it stops searching for pixels downward along the current column of the current frame image, determines that the downward counting stop condition is met, then continues to perform step 2 to filter out the convex hull center pixel, updates the adjacent pixel searched downward from the search center along the current column as the search center, and repeats step 2.
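The overexposure stop condition above (a run of brightness-255 pixels adjacent to the search center exceeding the third or fourth preset error number) could be sketched as follows; the function name and returned tuple are assumptions for illustration:

```python
def count_overexposure(column, center, radius, direction, max_errors):
    """Count consecutive brightness-255 pixels next to the search center in
    one direction (-1 upward, +1 downward). Searching stops early once the
    run exceeds `max_errors` (modelling the third/fourth preset error
    number). Returns (overexposure_count, stopped_early)."""
    count = 0
    for step in range(1, radius + 1):
        r = center + direction * step
        if r < 0 or r >= len(column) or column[r] != 255:
            break                  # run of overexposed pixels ended
        count += 1
        if count > max_errors:
            return count, True     # stop condition met before radius exhausted
    return count, False
```

In a fully overexposed column the search stops as soon as the run exceeds the error number; otherwise counting simply ends where the 255-run ends.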
•   Based on the relationship between the brightness values in the effective coverage area corresponding to the positioning coordinates of the line laser emitted by the line laser emitter in the previous dark frame image and the brightness values of the convex hull center pixels in the current frame image, the method of eliminating interference points from the filtered convex hull center pixels includes: the robot traverses the pixels in all columns of the current frame image and selects the convex hull center pixels from them, while the positioning coordinates of the line laser emitted by the line laser emitter in the previous dark frame image are also saved. For each convex hull center pixel in the current frame image, within the circular domain whose center is the positioning coordinate of the line laser in the previous dark frame image and whose radius is the detection pixel distance, if the robot determines that at least one pixel in the circular domain has a brightness value greater than the brightness value of the convex hull center pixel with the same coordinates, then that convex hull center pixel is an interference point: ambient light interference exists in the area of the current frame image near that circle center, which would otherwise prevent the robot from recognizing the interference. The interference point therefore needs to be eliminated from the current frame image to overcome ambient light interference and reduce positioning misjudgment.
•   The circular domain is the effective coverage area corresponding to the positioning coordinates; preferably, the radius of the circular domain (the detection pixel distance) is not equal to the search radius. In the process of executing the inter-frame tracking algorithm, the current frame image is a bright frame image and the previous frame image is a dark frame image; at this time, the line laser position in the previous dark frame image has already been output by the brightness center of gravity algorithm, and the positioning coordinates in the previous dark frame image (the coordinates of the line laser position determined in one column) are used. In this embodiment, if the coordinates of the pixel selected by the robot as the circle center in the previous dark frame image are equal to the coordinates of a convex hull center pixel obtained in the current frame image, the brightness values of the convex hull center pixels in the current frame image can be compared within the circular domain.
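The interference-point elimination described above can be sketched as follows, assuming the per-column dark-frame positioning coordinates are available as a mapping from column to row; all names and the data layout are illustrative assumptions:

```python
import math

def remove_interference_points(bright_frame, hull_centers, prev_dark_coords,
                               detection_radius):
    """Drop convex-hull-center pixels that sit near an ambient-light hot spot.

    bright_frame: 2D list of brightness values (current bright frame).
    hull_centers: list of (row, col) convex hull center pixels.
    prev_dark_coords: line laser positioning row per column from the
                      previous dark frame, as {col: row}.
    A center is an interference point if any pixel inside the circle of
    radius `detection_radius` around the previous dark-frame coordinate
    is brighter than the center pixel itself."""
    kept = []
    h, w = len(bright_frame), len(bright_frame[0])
    for (r0, c0) in hull_centers:
        if c0 not in prev_dark_coords:
            kept.append((r0, c0))          # no reference coordinate: keep
            continue
        cr = prev_dark_coords[c0]          # circle center row
        center_brightness = bright_frame[r0][c0]
        interference = False
        for r in range(max(0, cr - detection_radius),
                       min(h, cr + detection_radius + 1)):
            for c in range(max(0, c0 - detection_radius),
                           min(w, c0 + detection_radius + 1)):
                if (math.hypot(r - cr, c - c0) <= detection_radius
                        and bright_frame[r][c] > center_brightness):
                    interference = True
                    break
            if interference:
                break
        if not interference:
            kept.append((r0, c0))
    return kept
```

The coordinates of the centers that survive this filter would then be used as the positioning coordinates of the line laser in the current frame image.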
•   The larger the robot's traveling speed or rotation speed, the more severe the position jump of the pixels representing the same reflection position in the images collected by the robot in real time, and the larger the gradient difference between the brightness values of those pixels becomes; the preset ambient light brightness threshold is then set larger to adapt the denoising accuracy.
•   The method of excluding pixels at which no line laser position exists in the current frame image, based on the pixels in the corresponding column that conform to the preset brightness distribution characteristics, includes: if the brightness value of the initial pixel position in the current column of the current frame image exceeds, by more than a first preset brightness threshold, the brightness value of the pixel at the line laser position of the same column found in the previous round, or exceeds it by more than a second preset brightness threshold, then pixels are searched downward along the current column of the current frame image, starting from a position one reference pixel distance above the initial pixel position in the current column; the first preset brightness threshold is smaller than the second preset brightness threshold.
•   The first preset brightness threshold is preferably a value of 10, and the second preset brightness threshold is preferably a value of 235.
•   When the brightness value of the initial pixel in the current column of the current frame image changes little relative to the brightness value produced by the line laser position in the same column, or when the brightness value produced by the line laser position found in the previous round in the same column is large enough to be close to the value 255 (the highest gray level), ambient light may be affecting the current column, and a reference position is needed from which to start searching pixels along the current column of the current frame image, so as to exclude pixels with abnormal brightness values, or their entire columns. If the sum of the first preset brightness threshold and the second preset brightness threshold is less than 255 (the highest gray level), the two thresholds serve as the brightness value judgment condition for the columns to be excluded in coarse screening.
•   the error position count is incremented once, and the currently searched pixel is determined to be a pixel that conforms to the preset brightness distribution characteristics.
•   The area near the initial pixel in the current column of the current frame image may contain pixels whose brightness value exceeds, by more than the first preset brightness threshold, the brightness value of the pixel at the line laser position of the same column found in the previous round, or pixels whose brightness value equals 255 (the highest gray level); such an area is easily affected by ambient light. The reference pixel distance is expressed as a number of pixels, so that the reference pixel count threshold is equal to the reference pixel distance.
•   If the robot detects that the error position count is greater than the reference pixel count threshold, it determines that there is no line laser position in the current column of the current frame image; the pixels in the current column are then set as pixels at which no line laser position exists, excluded from the pixel search range in step 2, and it is simultaneously determined that the light intensity of the environment where the robot is located is greater than the first preset light intensity threshold. The counting covers the span starting from the position one reference pixel distance above the initial pixel position in the current column of the current frame image and extending to the bottom pixel of that column.
•   The reference pixel count threshold is preferably a value of 25 and is set equal to the reference pixel distance; in this embodiment the error position count is compared against this threshold to judge the number of pixels conforming to the preset brightness distribution characteristics. When the reference pixel distance equals a distance of 25 pixels, the search starts from a position 25 pixels above the initial pixel position in the current column of the current frame image and proceeds downward along the current column until the bottom of the column, forming a reference test area within the current column: the area that starts one reference pixel distance above the initial pixel position and extends downward along the current column to the bottom of the column. While traversing each pixel in the reference test area, if the brightness value of a currently traversed pixel is detected to be more than 10 greater than the brightness value of the pixel at the line laser position of the same column found in the previous round, or the brightness value of a currently traversed pixel equals 255 (the highest gray level), the count is incremented once and the currently searched pixel is determined to conform to the preset brightness distribution characteristics, until the number of such pixels is greater than the value 25.
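The coarse column screening described above (reference test area, error position count, threshold of 25) can be sketched as follows; the function and parameter names are illustrative assumptions, and the defaults mirror the preferred values in the text:

```python
def column_has_no_laser(column, prev_laser_brightness, initial_row,
                        ref_pixel_distance=25, first_threshold=10,
                        count_threshold=25):
    """Coarse screening of one column of the current bright frame.

    Starting one reference pixel distance above the initial pixel
    position and scanning to the bottom of the column, count pixels
    that are either more than `first_threshold` brighter than the
    previous round's laser pixel, or fully overexposed (255).  If the
    count exceeds `count_threshold`, the column is judged to contain
    no line laser position (strong ambient light) and is excluded
    from the step-2 search range."""
    start = max(0, initial_row - ref_pixel_distance)
    error_count = 0
    for r in range(start, len(column)):     # reference test area
        if (column[r] - prev_laser_brightness > first_threshold
                or column[r] == 255):
            error_count += 1                # error position count
            if error_count > count_threshold:
                return True                 # exclude this column
    return False
```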
•   The sum of the reference pixel distance (or reference pixel count threshold) and the second preset brightness threshold is greater than the value 255 (the highest gray level), while the sum of the first preset brightness threshold and the second preset brightness threshold is less than 255. The change of the brightness value at the initial pixel position in the current column of the current frame image relative to the line laser position found in the previous round in the same column, together with the changes in the brightness values of the pixels searched in the current column, reflects the ambient light intensity of the area corresponding to the current column.
•   The line laser position found in the previous round in the same column is the position of the convex hull center pixel finally determined in the same column of pixels belonging to the reference frame image, that is, the position of the line laser determined in that column of the reference frame image (the position of the convex hull center pixel retained after eliminating interference points in the previous embodiment); the same column of pixels is within the reference frame image, and the search images corresponding to the different rounds are different frame images.
•   In step 1, the method of excluding pixels at which no line laser position exists in the current frame image, based on the pixels in the corresponding column that conform to the preset brightness distribution characteristics, includes: marking the pixels covered by the annular area below the ring center, whose inner diameter is the first positioning radius and whose outer diameter is the second positioning radius, as the first pixels to be measured. This is equivalent to: taking the initial pixel position in the current column of the current frame image as the circle center, constructing a first circle whose radius is the first positioning radius, and a second circle whose radius is the second positioning radius, where the first positioning radius is smaller than the second positioning radius; then, below the initial pixel position (in the downward direction of the current column), the pixels of the current frame image covered by the annular area between the second circle and the first circle are marked as the first pixels to be measured.
•   The ratio of the sum of the brightness values of the first pixels to be measured to the total number of first pixels to be measured is used as the average brightness value of the first pixels to be measured. The annular coverage area formed between the first circle and the second circle serves as a transition area for judging changes in light intensity, and depends on the settings of the first positioning radius and the second positioning radius. The first positioning radius is preferably 3 and the second positioning radius is preferably 12, both expressed as pixel distances whose unit is the number of pixels, forming a transition area large enough for judging light intensity changes.
•   When the first pixels to be measured are judged to be pixels conforming to the preset brightness distribution characteristics, it is determined that there is no line laser position in the current column of the current frame image and that strong ambient light interference exists in the reflection area corresponding to the current column; the pixels in the current column of the current frame image are then set as pixels at which no line laser position exists, and are excluded from the pixel search range in step 2.
•   The line laser position found in the previous round in the same column is the position of the convex hull center pixel finally determined in the same column of pixels belonging to the reference frame image, preferably the initial pixel position in that column of the reference frame image; the same column of pixels in the reference frame image is a column whose column ordering is equal to that of the current column of the current frame image.
•   Similarly, the pixels covered by the annular area above the ring center, whose inner diameter is the first positioning radius and whose outer diameter is the second positioning radius, are marked as the second pixels to be measured. This is equivalent to: taking the initial pixel position in the current column of the current frame image as the circle center, constructing a first circle whose radius is the first positioning radius and a second circle whose radius is the second positioning radius, where the first positioning radius is smaller than the second positioning radius; then, above the initial pixel position (in the upward direction of the current column), the pixels of the current frame image covered by the annular area between the second circle and the first circle are marked as the second pixels to be measured, which are distinct from the first pixels to be measured.
•   The ratio of the sum of the brightness values of the second pixels to be measured to the total number of second pixels to be measured is used as the average brightness value of the second pixels to be measured. The annular coverage area formed between the first circle and the second circle serves as a transition area for judging changes in light intensity, and depends on the settings of the first positioning radius and the second positioning radius. The first positioning radius is preferably 3 and the second positioning radius is preferably 12, both expressed as pixel distances whose unit is the number of pixels, forming a transition area large enough for judging light intensity changes.
•   When the second pixels to be measured are judged to be pixels conforming to the preset brightness distribution characteristics, it is determined that there is no line laser position in the current column of the current frame image; the pixels in the current column of the current frame image are then set as pixels at which no line laser position exists, and are excluded from the pixel search range in step 2.
•   The line laser position is the position of the convex hull center pixel finally determined in the same column of pixels belonging to the reference frame image, preferably the initial pixel position in that column of the reference frame image; the same column of pixels in the reference frame image is a column whose column ordering is equal to that of the current column of the current frame image.
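The annular-region averaging used for both the first and second pixels to be measured can be sketched with a direction parameter (below or above the initial pixel); the function and parameter names and the 2D-array data layout are illustrative assumptions, and the default radii mirror the preferred values of 3 and 12:

```python
import math

def annulus_average_brightness(frame, center_row, center_col,
                               inner_radius=3, outer_radius=12,
                               direction="down"):
    """Average brightness of the half-annulus around the initial pixel.

    Pixels whose distance from (center_row, center_col) lies between
    inner_radius and outer_radius, and that sit below ("down") or
    above ("up") the center, are the pixels to be measured.  Their
    mean brightness is the transition-area statistic used to judge
    ambient light intensity changes."""
    total, count = 0, 0
    h, w = len(frame), len(frame[0])
    for r in range(max(0, center_row - outer_radius),
                   min(h, center_row + outer_radius + 1)):
        if direction == "down" and r <= center_row:
            continue                       # keep only rows below the center
        if direction == "up" and r >= center_row:
            continue                       # keep only rows above the center
        for c in range(max(0, center_col - outer_radius),
                       min(w, center_col + outer_radius + 1)):
            d = math.hypot(r - center_row, c - center_col)
            if inner_radius <= d <= outer_radius:
                total += frame[r][c]
                count += 1
    return total / count if count else 0.0
```

A high average here would mark the column's pixels as conforming to the preset brightness distribution characteristics, excluding the column from the step-2 search.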
•   In step 1, if the initial pixel position cannot be obtained in the current column of the current frame image, the line laser position in the same column found in the previous round is updated to the initial pixel position, the second preset pixel distance is updated to the search radius, and step 2 is then repeated. Specifically, the line laser position in the same column found in the previous round is updated to the search center corresponding to the current column of the current frame image; pixels are searched upward or downward along the current column, the searched adjacent pixel is updated to the search center, and step 2 is performed again to obtain a new convex hull center pixel, which is then updated as the convex hull center pixel. Each search center remains within a coverage area of one search radius relative to the initial pixel position, where the search radius is set to the second preset pixel distance, and the second preset pixel distance is not equal to the first preset pixel distance. Whenever the search center in the current column is updated, the convex hull center pixel set in the current column is also updated; the line laser position in the same column found in the previous round is the position of the convex hull center pixel finally determined in the same column of pixels belonging to the reference frame image.
•   If the robot repeatedly executes step 2 but can never find a convex hull center pixel in the same column (in the current column of the current frame image), it is determined that the robot cannot find the line laser position in that column; the pixels in the current column of the current frame image are then excluded from the pixel search range in step 2, and it is simultaneously determined that the light intensity of the environment corresponding to the current column is so large that the reflection position of the line laser hitting the object to be measured cannot be identified.
•   When the robot detects that the current frame image collected by the camera is a bright frame image, it inputs the current frame image into the processing rule model corresponding to the inter-frame tracking algorithm to output the effective laser position.
•   When the camera is not too close to the obstacle, this effectively filters out various kinds of ambient light interference in the scene and reduces the dependence on infrared filters. Specifically, among the pixels that conform to the convex hull characteristics, the convex hull center of the current column is filtered out based on: the numerical relationship between the brightness value gradients generated among the upward-searched pixels and those generated among the downward-searched pixels, and their differences across the search states corresponding to two adjacently determined search centers; the relationship between the brightness value of the currently searched pixel and the brightness value of the convex hull center pixel last determined in the same column of the same frame image; and the inter-frame matching relationship formed by the same type of values in the same column of pixels of the current frame image relative to the reference frame image. Filtering the final convex hull pixels in this way reduces the misjudgment of interference points when the ambient light is strong, so that more accurate pixels are found by tracking the number and brightness values of pixels in the reference frame image that conform to the convex hull characteristics. The coordinates of the remaining convex hull center pixels are set as the positioning coordinates of the line laser emitted by the line laser emitter in the current frame image, so as to more accurately realize the robot's tracking of the light reflected from obstacle surfaces; this is suitable for robot navigation and walking scenarios to achieve the effect of the robot positioning obstacles.
•   In order to find the position of the line laser in the dark frame image, to overcome the scenes that the inter-frame tracking algorithm cannot cope with, and to ensure the obstacle avoidance effect, it is necessary to switch to executing the brightness center of gravity algorithm after collecting a dark frame image, while saving the previous frame image (a bright frame image, corresponding to the light reflected back by the line laser emitted by the line laser emitter on the surface of the object to be measured) for use by the brightness center of gravity algorithm. The two algorithms thereby achieve complementary advantages: ambient light interference is overcome, and the reflection position of the line laser is tracked continuously.
•   The method by which the robot extracts the line laser position from the current frame image by executing the brightness center of gravity algorithm includes: the robot traverses the current frame image column by column, where the current frame image is a dark frame image and is configured to be divided by columns, so that the brightness values of the pixels in each column can be taken out in sequence and the number of pixels in a certain search area can be counted, in order to filter out the line laser position that predicts the reflection position of the line laser. This may differ from the aforementioned attributes of the convex hull center pixel, including the brightness value and ordinate position: since this is a dark frame image, the displayed brightness values of the pixels are not large, so the initial pixel position disclosed in the previous embodiment cannot be collected.
•   The robot searches each pixel in the current column in sequence, specifically starting from the pixel in the top row of the current column and traversing to the pixel in the bottom row, or starting from the pixel in the bottom row and traversing in sequence to the pixel in the top row, so as to complete the search of each pixel in the column, the detection of brightness values, and the counting of the number of pixels. During the search of the current column, legal pixels are filtered out based on the current frame image, and each positioning line segment is a line segment formed by connecting legal pixels with consecutive pixel positions in the same column; the lengths of the positioning line segments are then compared to obtain the positioning line segment with the largest length.
•   The reason is that the difference between the brightness value of a legal pixel and the brightness value of the pixel at the same position in the previous bright frame image is controlled within a reasonable threshold range, and the brightness value of the pixel at the same position in the previous bright frame image is controlled within a certain brightness value range to prevent strong light interference. Then, based on the brightness values of the continuously arranged legal pixels, the general range of the reflection positions of the line laser on the surface of the object to be measured can be predicted, which also enhances the anti-interference ability against ambient light.
•   The positioning line segment with the largest length is preferably a straight segment parallel to the ordinate axis of the image coordinate system. Preferably, the selected positioning line segment with the largest length is set as the predicted laser line segment, and the coordinates of the pixel at the center of the predicted laser line segment are set as the positioning coordinates of the line laser emitted by the line laser emitter in the current frame image. If the length of the selected positioning line segment with the largest length is greater than a preset continuous length threshold, the center of that segment is set as the line laser position, equivalent to the convex hull center mentioned in the aforementioned embodiment of executing the inter-frame tracking algorithm, so as to improve the accuracy of detecting obstacles. The coordinates of a line laser position are represented by the corresponding positioning coordinates. Since a column-by-column traversal is used to obtain the line laser position belonging to each column, after determining the currently traversed column number, only the ordinate value needs to be selected to represent the height information of the reflection position of the line laser on the obstacle surface, which can also be used for robot obstacle avoidance.
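The per-column selection of the longest positioning line segment can be sketched as follows; the mask of legal pixels is assumed to come from the legal-pixel test described in the surrounding paragraphs, and the names are illustrative:

```python
def locate_laser_in_column(legal_mask, min_length):
    """Given a boolean mask of legal pixels for one column of the dark
    frame, connect consecutive legal pixels into positioning line
    segments, pick the longest, and return its center row as the line
    laser position (or None if it is not longer than min_length)."""
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for row, legal in enumerate(legal_mask):
        if legal:
            if run_len == 0:
                run_start = row            # a new segment begins here
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0                    # segment broken
    if best_len > min_length:              # preset continuous length threshold
        return best_start + best_len // 2  # center of the predicted laser segment
    return None
```

Only the returned ordinate is needed per column: together with the column number it encodes the height of the reflection position on the obstacle surface.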
•   Based on the relationship between the brightness value of the currently searched pixel in the current frame image and the brightness value of the pixel at the corresponding position of the previous bright frame image, the method of filtering out legal pixels from the current column of the current frame image includes: subtracting, from the brightness value of the currently searched pixel in the current frame image, the brightness value of the pixel at the same row and column position of the previous bright frame image, to obtain the relative difference of the dark frame image. When it is detected that the opposite number of the relative difference of the dark frame image is greater than a preset brightness difference threshold, and the brightness value of the corresponding pixel in the previous bright frame image lies within the required brightness value range, the currently searched pixel in the current frame image is set as a legal pixel filtered out from the current column, indicating that the pixel in the current frame image is not interfered with by strong ambient light; the preset brightness difference threshold constrains the brightness values of pixels at the same position in the two adjacent frame images.
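The legal-pixel test can be sketched as follows; the concrete threshold and range values are illustrative assumptions, since the patent only states that they are preset:

```python
def is_legal_pixel(dark_value, bright_value, diff_threshold,
                   bright_min, bright_max):
    """Legal-pixel test for one dark-frame pixel.

    relative_diff = dark_value - bright_value; the pixel is legal when
    the opposite number of relative_diff exceeds the preset brightness
    difference threshold (the bright frame is notably brighter at this
    position, i.e. the laser dominates there) and the bright-frame
    value stays inside a range that rules out strong-light saturation."""
    relative_diff = dark_value - bright_value
    return (-relative_diff > diff_threshold
            and bright_min <= bright_value <= bright_max)
```

Applying this test down a column yields the boolean mask from which the positioning line segments are connected.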
•   When the robot detects that the current frame image collected by the camera is a dark frame image, it inputs the current frame image into the processing rule model corresponding to the brightness center of gravity algorithm to output positioning line segments with reasonable connection lengths.
•   The present invention combines the inter-frame tracking algorithm and the brightness center of gravity algorithm so that each compensates for the other's weaknesses across various ambient light intensity scenes, and realizes the extraction of the laser position among the alternately generated bright frame images and dark frame images that record the reflected light of the line laser.
•   The method of filtering out legal pixels from the current column of the current frame image, based on the relationship between the brightness value of the currently searched pixel in the current frame image and the brightness value of the pixel at the corresponding position of the previous bright frame image, can also be expressed as: subtracting, from the brightness value of the pixel currently traversed in the previous bright frame image, the brightness value of the pixel with the same row and column position in the current frame image, to obtain the relative difference of the dark frame image. When it is detected that this relative difference is greater than the preset brightness difference threshold, and the brightness value of the pixel currently traversed in the previous bright frame image lies within the required brightness value range, the pixel with the same row and column position in the current frame image is set as a legal pixel filtered out from the current frame image, indicating that the dark frame image of the current frame is not interfered with by strong ambient light.
•   The image sequence formed in the imaging plane of the camera by the line laser emitted by the line laser emitter and reflected back from the surface of the object to be measured is configured to alternately generate bright frame images and dark frame images, so that: when the current frame image collected by the camera is a bright frame image, the next frame image collected by the camera is a dark frame image; during the time interval between the camera collecting the current bright frame image and the camera collecting the next bright frame image, the camera collects the current dark frame image; and after the camera collects the next bright frame image, the camera collects the next dark frame image.
  • the laser positioning method also includes adjusting the exposure information of the camera, specifically including:
•   When the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, meaning that the intensity of visible light in the current environment is relatively large and the exposure of the camera becomes relatively large, the robot lowers the camera's gain (the image signal amplification parameter) to obtain the first gain, so that the image of the light reflected back by the line laser on the surface of the object to be measured, as collected by the camera, is not overexposed; in particular, the image information in the visible light part is not easily overexposed. The first preset light intensity threshold is mainly a strong light threshold set based on the degree of overexposure of the image collected by the camera caused by strong visible light in the environment.
•   Likewise, when the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, meaning that the intensity of visible light in the current environment is relatively large and the exposure of the camera becomes relatively large, the robot shortens the camera's exposure time to obtain the first exposure time, so that the image of the light reflected back by the line laser on the surface of the object to be measured, as collected by the camera, is not overexposed; in particular, the image information in the visible light part is not easily overexposed, which improves the accuracy of extracting the aforementioned line laser position in scenes with strong ambient light. The first preset light intensity threshold is mainly a strong light threshold set based on the degree of overexposure of the image collected by the camera caused by strong visible light in the environment.
•   When the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, meaning that the intensity of visible light in the current environment is small and the exposure of the camera becomes relatively small, the robot raises the camera's gain (the image signal amplification parameter) to obtain the second gain, where the first gain is less than the second gain; however, if the gain before the first gain was adjusted in the foregoing embodiment was itself large enough to cope with the light intensity of the environment, the first gain is not necessarily smaller than the second gain. This improves performance in scenes with weak ambient light. The second preset light intensity threshold is mainly set based on the exposure of the image collected by the camera under darker visible light in the environment, and the second preset light intensity threshold is much smaller than the first preset light intensity threshold.
•   When the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, meaning that the intensity of visible light in the current environment is small and the exposure of the camera becomes relatively small, the robot lengthens the camera's exposure time to obtain the second exposure time, so that the image of the light reflected back by the line laser on the surface of the object to be measured, as collected by the camera, is not underexposed. The first exposure time is shorter than the second exposure time; however, if, in the aforementioned embodiment, the exposure time before adjusting to the first exposure time was itself very long to cope with the light intensity of the environment, the first exposure time is not necessarily shorter than the second exposure time.
  • This embodiment thus adjusts the camera's gain and exposure time according to the current environmental conditions so that the image captured by the camera is neither overexposed nor underexposed, realizing dynamic exposure adjustment of the camera.
  • The camera gain should be adjusted within a reasonable range to avoid noise; noise generally occurs when the gain is raised too high in a low-light environment.
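The dynamic exposure adjustment described above can be sketched as follows. The threshold values, step factor, and gain limits are illustrative assumptions for the sketch, not values taken from the patent:

```python
def adjust_camera(light_intensity, gain, exposure_time,
                  strong_thresh=800, weak_thresh=50,
                  step=0.8, min_gain=1.0, max_gain=16.0):
    """Lower gain/exposure time in strong ambient light to avoid overexposure;
    raise them in weak light to avoid underexposure (hypothetical units)."""
    if light_intensity > strong_thresh:
        # strong visible light: obtain the "first" gain / exposure time
        gain = max(min_gain, gain * step)
        exposure_time = exposure_time * step
    elif light_intensity < weak_thresh:
        # weak visible light: obtain the "second" gain / exposure time
        gain = min(max_gain, gain / step)
        exposure_time = exposure_time / step
    return gain, exposure_time
```

Between the two thresholds the settings are left untouched, mirroring the hysteresis implied by the two distinct thresholds in the text.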
  • When the camera's exposure information is used to adjust the power level of the line laser emitter, the following situations exist:
  • When the camera's current exposure value is greater than the first preset exposure threshold, the power level of the line laser emitter for emitting the line laser is raised, so that the intensity of the emitted line laser is configured to equal the product of the smoothing coefficient and the current exposure value.
  • Here the camera's current exposure value includes the third gain and/or the third exposure time; when the ambient light intensity increases, the third gain and/or the third exposure time are adjusted accordingly.
  • The smoothing coefficient is set to a reasonable value used to smooth the step size of exposure-value adjustment, thereby suppressing overexposure.
  • In some embodiments the camera's current exposure value includes the first gain and/or the first exposure time adjusted according to the previous embodiment. When the first gain and/or the first exposure time become smaller, the smoothing coefficient — again set to a reasonable value — smooths the adjustment step so that the intensity of the line laser emitted by the line laser emitter also becomes smaller, adapting to the exposure required by the current ambient light intensity.
  • In this way the power level of the line laser emitter for emitting the line laser is adjusted automatically until the intensity of the emitted line laser (the emission power of the line laser emitter) equals the product of the smoothing coefficient and the current exposure value. This enables stronger line-laser power gears to be used in high-brightness environments while avoiding drastic changes in the current exposure value, and keeps the image collected by the camera of the line laser reflected from the surface of the object under measurement from becoming overexposed, so that the robot can accurately search for the line laser position in the current frame image according to the inter-frame tracking algorithm disclosed in the previous embodiments, at least preserving the brightness values of the relevant pixels.
  • As a result, the reflected-light image of the line laser on an obstacle's surface can still be captured by the camera even under bright ambient light.
  • When the camera's current exposure value is less than the second preset exposure threshold, the power level of the line laser emitter for emitting the line laser is lowered, so that the intensity of the emitted line laser is configured to equal the product of the smoothing coefficient and the current exposure value.
  • The first preset exposure threshold is greater than the second preset exposure threshold, the latter reflecting that the current ambient light is dark.
  • Here the camera's current exposure value includes the fourth gain and/or the fourth exposure time; the lower the ambient light intensity, the smaller the previously adjusted third gain and/or third exposure time become, adapting to the exposure required by the current ambient light intensity.
  • The smoothing coefficient is set to a reasonable value used to smooth the step size of exposure-value adjustment, thereby suppressing underexposure.
  • In some embodiments the camera's current exposure value includes the second gain and/or the second exposure time, and the intensity of the line laser emitted by the line laser emitter becomes correspondingly larger to adapt to the exposure required by the current ambient light intensity.
  • The power level of the line laser emitter for emitting the line laser is thus adjusted automatically until the intensity of the emitted line laser (the emission power of the line laser emitter) equals the product of the smoothing coefficient and the current exposure value, preventing the current exposure value from changing drastically and keeping the collected image of the line laser reflected from the surface of the object under measurement from becoming underexposed.
  • The camera's current exposure value reflects the camera's exposure in an environment with the current light intensity, and therefore also reflects the light intensity of the current environment.
  • The smaller the current exposure value is adjusted to be, the stronger the current ambient light is proven to be, which guides the line laser emitter to raise its power level for emitting the line laser, so that the light reflected by the line laser from an obstacle's surface is still collected even under bright ambient light.
  • The intensity of the line laser emitted by the line laser emitter thus describes a mapping relationship between the emitter's power level for emitting the line laser and the current exposure value. Combined with the adjusting effect of the smoothing coefficient, this applies to all the image data of the reflected line laser collected by the structured light module under a series of different exposures, so that the power level of the line laser emitter for emitting the line laser is adjusted according to the adjusted camera gain and exposure time.
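The power-level mapping described above — laser intensity driven toward the product of a smoothing coefficient and the current exposure value whenever either preset exposure threshold is crossed — might be sketched like this; the threshold and coefficient values are illustrative assumptions:

```python
def adjust_laser_power(current_exposure, power,
                       high_thresh=0.8, low_thresh=0.2, smooth=0.5):
    """Set the line-laser emission intensity to smooth * current_exposure
    whenever the camera's current exposure value exceeds the first preset
    exposure threshold or falls below the second (hypothetical units)."""
    if current_exposure > high_thresh or current_exposure < low_thresh:
        power = smooth * current_exposure
    return power
```

Between the two thresholds the power gear is left unchanged, so the smoothing coefficient only steps the intensity when the exposure value moves outside the comfortable band.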
  • The present invention also discloses a robot whose body is equipped with a structured light module.
  • The structured light module includes a line laser emitter and a camera without an infrared filter.
  • Because the camera's lens is not equipped with a filter (such as an infrared filter), it receives light of various wavelengths from the line laser emitted by the line laser emitter, so that the images collected by the camera retain both the imaging information of infrared light and the imaging information of visible light.
  • A controller is provided inside the robot and is electrically connected to the structured light module.
  • The controller is configured to execute the laser positioning method to obtain the positioning coordinates of the line laser emitted by the line laser emitter in the current frame image, that is, the line laser position in the bright frame image and the line laser position in the dark frame image; the line laser emitted by the line laser emitter lies within the field of view of the camera.
  • The controller can control the operation of the line laser emitter and the camera.
  • On the one hand, the controller controls the exposure of the camera; on the other hand, it can control the line laser emitter to emit the line laser outward during the camera's exposure period, so that the camera collects an image of the environment probed by the line laser.
  • The controller can control the camera and the line laser emitter to work simultaneously or alternately, without limitation. It should be noted that the laser light reflected by the photographed object (the surface of the object under measurement) is projected through the camera's lens onto the photosensitive element, causing it to change and produce an image; this process is called exposure.
  • The controller may adopt image-processing hardware with an FPGA and a DSP.
  • The FPGA has obvious advantages in streaming parallel computation,
  • so operations involving morphological processing of images are performed in the FPGA, and the remaining operations are performed in the DSP.
  • Even when the image resolution increases, this does not consume more processing time.
  • The image processing system can process up to 6 frames/s, which simultaneously meets the accuracy and real-time requirements of robot obstacle avoidance.
  • The horizontal viewing angle of the camera is configured to receive, in front of the robot, the light reflected by the line laser within the width of the robot's body, obtaining an image of the environment probed by the line laser.
  • A wide-angle or a non-wide-angle lens can be used to obtain the corresponding horizontal viewing angle; the specific choice depends on the body width, as long as the line laser can be collected across the entire body width.
  • The installation height of the structured light module on the robot's body is configured to be positively correlated with the height of the obstacle to be measured, so that the obstacle occupies the camera's effective field-of-view space.
  • The installation heights of the line laser emitter and the camera therefore need to be determined according to the size of the obstacle to be measured.
  • The higher the structured light module is installed on the robot's body, the larger the vertical space it can cover; but the farther it then deviates from a small obstacle to be measured, the fewer local details are collected, which reduces the detection accuracy for small obstacles.
  • Conversely, the lower the structured light module is installed on the robot's body, the smaller the vertical space it can cover, but the more local details of small obstacles are collected.
  • At the installation height of the structured light module, the line laser emitter and the camera can be located at different heights:
  • the line laser emitter can be higher than the camera, or the camera higher than the line laser emitter; of course, the two can also be located at the same height.
  • The structured light module is intended to be installed on an autonomous mobile robot (such as a sweeping robot, a patrol robot, or other autonomous mobile equipment).
  • In that case the distances from the line laser emitter and the camera to the robot's working surface (such as the ground) differ.
  • For example, the distance between the camera and the working surface is 32 mm,
  • and the distance between the line laser emitter and the working surface is 47 mm.
  • The coverage of the camera's upward viewing angle is configured to cover the bottom of the plane formed by the line laser emitted by the line laser emitter — specifically a laser surface composed of multiple line laser beams — which extends along the emission direction of the line laser down to the ground on which the robot moves.
  • For example, the angle between the laser surface and the robot's working surface is 15 degrees. The coverage of the camera's downward viewing angle is configured to cover the light reflected by the emitted line laser from the surfaces of obstacles in front of the robot's body; the pitch angle of the camera can therefore be adjusted appropriately according to the requirements of the map images needed for navigation.
  • The camera's downward viewing angle is the angle formed by detecting from top to bottom;
  • the camera's upward viewing angle is the angle formed by detecting from bottom to top. The pitch angle of the camera is divided into the camera's downward viewing angle and its upward viewing angle.
  • The upward viewing angle of the camera is preferably set to 24 degrees, and the downward viewing angle to 18 degrees.
  • The heading angle formed by the deflection of the camera (the optical axis of the lens) relative to the robot's central axis is kept within the preset error angle range, so that the optical axis of the camera is parallel to the robot's direction of travel and the camera receives, in front of the robot, the light reflected by the line laser within the body width, detecting obstacles directly ahead in real time as the robot walks.
  • The roll angle produced by rotation of the camera about its optical axis is likewise kept within the preset error angle range, so that the camera in front of the robot receives the light reflected by the line laser within the body width.
  • The camera is rotatably mounted on the robot's body, and the preset error angle range is set to -0.01 to 0.01 degrees, so that both the heading angle formed by deflection of the camera (the optical axis of the lens) relative to the robot's central axis and the roll angle produced by rotation of the camera about its optical axis are kept at about 0 degrees.
  • The angle between the center line along which the line laser emitter emits the line laser and the emitter's installation baseline is equivalent to the angle between the laser surface and the robot's working surface, preferably 15 degrees. The installation baseline is the straight line on which the line laser emitter lies when it is installed at a height above the camera on the robot, or, when the line laser emitter and the camera are at the same installation height, the straight line between the emitter and the camera.
  • The emission angle of the line laser emitter is not limited.
  • The emission angle is related to the detection distance that the robot carrying the structured light module needs to satisfy, the width of the robot's body, and the mechanical distance between the line laser emitter and the camera.
  • When these quantities are fixed, the emission angle of the line laser emitter can be obtained directly through trigonometric relationships; that is, the emission angle is a fixed value.
  • Alternatively, the emission angle of the line laser emitter can vary within a certain angular range.
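Since the patent does not spell out the triangle it uses, the following is only one plausible reading of "obtained directly through trigonometric relationships": treat the baseline between emitter and camera and the required detection distance as the two legs of a right triangle and take the arctangent. All names and the geometry itself are assumptions for illustration:

```python
import math

def emission_angle_deg(detection_distance_mm, baseline_mm):
    """Hypothetical geometry: an emitter offset from the camera by
    baseline_mm aims at a point detection_distance_mm ahead, so the angle
    from the installation baseline is atan(opposite / adjacent)."""
    return math.degrees(math.atan2(detection_distance_mm, baseline_mm))
```

With a 41 mm baseline (the preferred mechanical distance mentioned below), a longer required detection distance yields a larger angle from the baseline, which matches the stated dependence on detection distance and mechanical distance.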
  • Certain pixel points are used to represent the reflection position of the line laser on the surface of an obstacle.
  • The larger the installation distance between the camera and the line laser emitter, the larger the coordinate offset, relative to the camera's center, of the pixels representing the reflection position of the line laser on the obstacle's surface; these pixels include, but are not limited to, the convex-hull-center pixels obtained in the previous embodiments, the pixels conforming to the convex hull feature, and the line laser positions.
  • The installation distance refers to the mechanical distance (or baseline distance) between the line laser emitter and the camera.
  • The mechanical distance between the line laser emitter and the camera can be set flexibly according to the application requirements of the structured light module. Information such as this mechanical distance, the detection distance required of the robot carrying the structured light module, and the width of the robot's body determines, to a certain extent, the size of the measurement blind zone. For a given robot the body width is fixed, while the measurement range and the mechanical distance between the line laser emitter and the camera can be set flexibly as needed, which means the mechanical distance and the blind-zone range are not fixed values.
  • The blind-zone range should be minimized.
  • The greater the mechanical distance between the line laser emitter and the camera, the greater the distance range that can be controlled, which helps control the size of the blind zone and improves the accuracy of obstacle detection.
  • Structured light modules are used on sweeping robots.
  • They can be installed on the bumper plate of the sweeping robot or on the robot body.
  • The mechanical distance between the line laser emitter and the camera may be greater than 20 mm; further optionally, it may be greater than 30 mm.
  • Preferably, the mechanical distance between the line laser emitter and the camera is greater than 41 mm. It should be noted that the range of mechanical distances given here applies not only to structured light modules used in sweeping robots, but also to structured light modules on other devices whose specifications are close or similar to those of sweeping robots.
  • The emission angle of the line laser emitter and the receiving angle of the camera are set as follows: the line laser emitter emits the line laser to a preset detection position in front of the body, and the line laser is reflected back to the camera at the preset detection position,
  • forming, in the image collected by the camera, the pixels conforming to the convex hull feature or the convex-hull-center pixels, where the length of the laser line segment formed by the line laser at the preset detection position is greater than the width of the robot's body.
  • The reflection position of the line laser after it hits the ground depends on the lateral emission angle of the line laser (i.e., the emission angle of the line laser emitter) and the lateral pixel viewing angle of the camera (i.e., the receiving angle of the camera, corresponding to the horizontal viewing angle).
  • The line laser strikes ahead of the robot so that the horizontal length of the laser line extracted by the camera is slightly wider than the robot's body width.
  • Whenever the robot walks a preset travel distance in the direction from its current position toward the preset detection position, the horizontal distance between the preset detection position and the robot decreases; in some embodiments the robot thereby approaches an obstacle at the preset detection position. In the image collected by the camera, the coordinate offset, relative to the camera's center, of the pixels representing the same reflection position of the line laser at the preset detection position increases. That is, the closer to the machine the line laser strikes the ground ahead, the greater the vertical jump produced, for the same robot travel distance, by the pixels representing the reflection position of the line laser in the collected image, so more local information is captured and the accuracy of obstacle detection is higher.
  • The pixels in the collected image used to reflect the same reflection position of the line laser include the pixels conforming to the convex hull feature.
  • The positions of the pixels conforming to the convex hull feature jump longitudinally (their ordinates change), so the obstacles in the images collected by the robot's camera change from the original overall outline to a local outline. Provided the outline height of the obstacle covered in the longitudinal direction changes, the number of pixels devoted to the local contour increases relative to what was collected before approaching the obstacle, improving the accuracy of obstacle detection.
  • Furthermore, the greater the installation distance between the camera and the line laser emitter in the robot (for example, the greater the installation height of the line laser emitter relative to the camera), the greater the change in the ordinates of the pixels representing the reflection position of the line laser on the obstacle's surface, and the obstacles in the same frame image collected by the camera change from the original overall outline to a local outline. The number of pixels representing the same local area of an obstacle then increases relative to before the installation height was increased — equivalently, each time the robot walks a test distance, the number of pixels representing the same local area of an obstacle in the current frame image collected in real time increases relative to before the installation height was increased — improving the accuracy of obstacle detection and allowing the robot to detect smaller obstacles than the existing technology.
  • The pixels conforming to the convex hull feature are configured as part or all of the point information of the laser line segment formed by projecting the line laser onto the surface of the object under measurement. The robot sets the set of pixels of an image that conform to the convex hull feature as the pixels whose brightness values decrease from the convex hull center along the current column toward the upper and lower sides respectively, together with the convex hull center itself.
  • This set of pixels forms a convex hull, whose center is the pixel with the largest brightness value in the set, and the convex-hull-center pixel is the pixel located at the convex hull center.
  • Within the set of pixels conforming to the convex hull feature, starting from the convex hull center, the brightness values decrease upward along the current column, producing the first gradient value, and decrease downward along the current column, producing the second gradient value.
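A simplified reading of this convex hull feature — a per-column brightness peak that decays monotonically on both sides — can be sketched as follows; the fixed radius and the strict-monotonicity test are illustrative simplifications of the gradient conditions in the patent:

```python
def convex_hull_center(column, radius=3):
    """Return the row index of a pixel whose brightness is the local maximum
    and decreases monotonically for `radius` pixels both upward and downward
    along the column, or None if no such pixel exists."""
    n = len(column)
    for i in range(radius, n - radius):
        # brightness falls moving away from i toward the top of the column
        up = all(column[i - k] > column[i - k - 1] for k in range(radius))
        # brightness falls moving away from i toward the bottom of the column
        down = all(column[i + k] > column[i + k + 1] for k in range(radius))
        if up and down and column[i] == max(column[i - radius:i + radius + 1]):
            return i
    return None
```

A flat or noisy column with no clear peak yields None, which corresponds to a column in which no convex-hull-center pixel can be set.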
  • A "computer-readable medium" may be any device that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A non-exhaustive list of computer-readable media includes the following: an electrical connection with one or more wires (an electronic device), a portable computer disk cartridge (a magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the paper or other medium can, for example, be optically scanned and then edited, interpreted, or otherwise processed as necessary to obtain the program electronically, which is then stored in computer memory.


Abstract

A laser positioning method based on image information, and a robot. The method is executed by a robot equipped with a structured light module, the structured light module comprising a line laser emitter and a camera without an infrared filter, so that the images collected by the camera retain both the imaging information of infrared light and the imaging information of visible light. The laser positioning method comprises: the robot controls the camera to collect images of the light reflected from the surface of the object under measurement by the line laser emitted by the line laser emitter, and detects the bright/dark type of each collected image; when the robot detects that the current frame image collected by the camera is a bright frame image, the robot searches for the line laser position in the current frame image by executing an inter-frame tracking algorithm; when the robot detects that the current frame image collected by the camera is a dark frame image, the robot extracts the line laser position from the current frame image by executing a brightness-centroid algorithm.

Description

Laser positioning method based on image information, and robot. Technical Field
The present invention relates to the technical field of laser data processing, and in particular to a laser positioning method based on image information and a robot.
Background Art
A structured light module generally refers to any laser module that includes a line laser emitter and a camera module. In a structured light module, the line laser emitter is used to emit a line laser outward. The line laser emitted by the line laser emitter may be located in front of the robot; the camera module can collect environmental images and can also receive the reflected light returned when the line laser strikes an object, the emitted line laser lying within the camera module's field of view. The line laser helps detect information such as the contour, height and/or width of objects in the robot's direction of travel, collectively called the position information of the laser. The camera used to collect the line laser is generally fitted with an infrared band-pass filter or an infrared high-pass filter, because after the line laser emitter emits infrared light onto an object's surface, if the camera module is to detect the position of the line laser in the imaging plane accurately, the camera itself must be sensitive not only to the infrared band reflected by the line laser from the object's surface but also to visible light and other bands.
The camera module uses the aforementioned infrared band-pass or high-pass filter to filter the light reflected from obstacle surfaces by the laser emitted by the line laser emitter: it transmits the infrared band carried by the laser and absorbs or reflects ambient light of other bands, so as to separate the infrared light carried by the reflected light from other interfering ambient light. However, while filtering out the non-infrared bands, a large amount of environmental information is also lost; the infrared band-pass or high-pass filters used with structured light modules are expensive, and under strong ambient light they easily introduce interference, producing many interference points.
Summary of the Invention
To solve the above technical problems, the present invention discloses a laser positioning method based on image information and a robot, under the condition that the camera is not fitted with an infrared filter, so as to receive the infrared light emitted by the line laser emitter and reflected back from obstacle surfaces, enabling the robot to track the laser light reflected from obstacle surfaces. The specific technical solutions are as follows:
A laser positioning method based on image information, executed by a robot equipped with a structured light module; the structured light module includes a line laser emitter and a camera without an infrared filter, so that the images collected by the camera retain both the imaging information of infrared light and the imaging information of visible light. The laser positioning method comprises: the robot controls the camera to collect images of the light reflected from the surface of the object under measurement by the line laser emitted by the line laser emitter, and detects the bright/dark type of the collected images; when the robot detects that the current frame image collected by the camera is a bright frame image, the robot searches for the line laser position in the current frame image by executing an inter-frame tracking algorithm, and then sets the coordinates of the line laser position as the positioning coordinates of the emitted line laser in the current frame image; when the robot detects that the current frame image collected by the camera is a dark frame image, the robot extracts the line laser position from the current frame image by executing a brightness-centroid algorithm, and then sets the coordinates of the line laser position as the positioning coordinates of the emitted line laser in the current frame image.
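The top-level flow of this claim — pick the algorithm by the frame's bright/dark type — can be sketched minimally as follows; all function names are placeholders, not interfaces defined by the patent:

```python
def locate_line_laser(frame, is_bright_frame,
                      track_between_frames, brightness_centroid):
    """Dispatch: bright frames go to the inter-frame tracking algorithm,
    dark frames to the brightness-centroid algorithm; either callable is
    expected to return the line-laser positioning coordinates for `frame`."""
    if is_bright_frame:
        return track_between_frames(frame)
    return brightness_centroid(frame)
```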
Further, the method by which the robot searches for the line laser position in the current frame image by executing the inter-frame tracking algorithm comprises: Step 1: the robot traverses the current frame image column by column, obtains an initial pixel position in each corresponding column, and meanwhile excludes, based on the pixels in the corresponding column that conform to a preset brightness distribution feature, those pixels of the current frame image where no line laser position exists, the line laser position representing the reflection position of the line laser on the surface of the object under measurement. Step 2: apart from the columns whose pixels contain no line laser position, in the current column of the current frame image the robot sets the initial pixel position existing in the current column as a search center, then searches upward along the current column from the search center for pixels within one search radius, and downward along the current column from the search center for pixels within one search radius; then, according to the difference between the brightness values of the upward-searched pixels and those of the downward-searched pixels under the search states corresponding to the two most recently determined search centers, and the inter-frame matching relationship formed by values of the same type in the same column of pixels between the current frame image and a reference frame image, the robot selects the convex-hull-center pixel of the current column so as to update the convex-hull-center pixel last determined in the current column of the current frame image. The reference frame image is configured as the bright frame image containing the most recent line laser position found by the robot before the current frame image was collected; each time the search center in the current column is updated, the convex-hull-center pixel set for the current column is updated as well. Step 3: according to the magnitude relationship between the brightness values within the effective coverage area corresponding to the positioning coordinates of the line laser in the previous dark frame image and the brightness values of the convex-hull-center pixels in the current frame image, interference points are removed from the already selected convex-hull-center pixels. After the robot has traversed the convex-hull-center pixels of all pixel columns in the current frame image and removed all interference points, the coordinates of the remaining convex-hull-center pixels are set as the positioning coordinates of the emitted line laser in the current frame image; the robot has then searched out the line laser position determined in each column of the current frame image, so as to connect the laser line segment formed by the emitted line laser on the surface of the object under measurement, and it is determined that the robot has searched out the line laser position from the current frame image by executing the inter-frame tracking algorithm. The line laser position determined in a given column is the position of the convex-hull-center pixel last updated in that column after the robot has traversed all pixels in that column, and the coordinates of a line laser position are expressed by the corresponding positioning coordinates.
Further, in Step 2, each time a convex-hull-center pixel is selected for a search center, the adjacent pixel found upward or downward along the current column from that search center is updated to be the search center, and Step 2 is executed again to obtain a new convex-hull-center pixel, which replaces the previous convex-hull-center pixel. Every search center lies within the coverage area of one search radius relative to the initial pixel position, the search radius being set to a first preset pixel distance. The selected convex-hull-center pixel is, in each column of the current frame image that contains convex-hull-center pixels, the convex-hull-center pixel updated last; it is the convex-hull-center pixel in that column nearest to the origin of the coordinate system of the current frame image. The robot sets the set of pixels in the current column of the current frame image that conform to the convex hull feature as the pixels whose brightness values decrease from the convex hull center along the current column toward the upper and lower sides respectively, together with the convex hull center, forming a convex hull; the convex hull center is the pixel with the largest brightness value in this set, and the convex-hull-center pixel is set as the pixel located at the convex hull center. Within the set of pixels conforming to the convex hull feature, in the upward direction along the same column from the convex hull center, the brightness values decrease and produce a first gradient value between adjacent pixels; in the downward direction along the same column from the convex hull center, the brightness values decrease and produce a second gradient value between adjacent pixels, so that the convex hull center belongs to the search center.
Further, in Step 2, the method of selecting the convex-hull-center pixel according to the difference between the brightness values of the upward- and downward-searched pixels under the search states corresponding to the two most recently determined search centers, and the inter-frame matching relationship formed with the reference frame image by values of the same type in the same pixel column, comprises: in the current column of the current frame image, comparing the brightness value of the search center with that of the convex-hull-center pixel previously found in the same column. The previously found convex-hull-center pixel in the same column is the one selected in the same column of the current frame image for the previously determined search center; the previously determined search center is the pixel adjacent, downward or upward in the current column, to the currently determined search center, and the column index of the same column equals the column index of the current column. If the brightness value of the currently determined search center is greater than that of the previously found convex-hull-center pixel in the same column, then, in the current column, pixels are searched upward from the search center and those whose brightness values decrease by the first gradient value are counted until an upward counting stop condition is met; the number of such pixels is recorded as the upward gradient-descent count, and the upward search stops pending the next update of the search center. Likewise, pixels are searched downward from the search center and those whose brightness values decrease by the second gradient value are counted until a downward counting stop condition is met; the number is recorded as the downward gradient-descent count, and the downward search stops pending the next update of the search center. When the robot judges that the upward gradient-descent count in the current column is greater than or equal to the upward gradient-descent count obtained when the convex-hull-center pixel was previously found in the same column, and/or that the downward gradient-descent count in the current column is greater than or equal to the corresponding previous downward gradient-descent count, then, among the pixels traversed in the current column, if it is detected that neither the first gradient value nor the second gradient value equals a first preset gradient parameter; that the absolute value of the difference between the first and second gradient values is smaller than a second preset gradient parameter; that the absolute difference between the brightness value of the dimmest pixel found searching upward along the current column and that of the currently determined search center is greater than the absolute difference of the same type of brightness values formed by upward searching in the same pixel column of the reference frame image; and that the absolute difference between the brightness value of the dimmest pixel found searching downward along the current column and that of the currently determined search center is greater than the absolute difference of the same type formed by downward searching in the same pixel column of the reference frame image — then the robot marks the currently determined search center as the convex-hull-center pixel. The first preset gradient parameter is smaller than the second preset gradient parameter.
Further, the absolute difference of the same type of brightness values formed by upward searching in the same pixel column of the reference frame image is, in the reference frame image, the absolute difference between the brightness value of the dimmest pixel found searching upward — starting from the finally determined search center in the column with the same column index as the current column — and the brightness value of the pixel at that finally determined search center, where the distance from the upward-found dimmest pixel to the finally determined search center in that column is less than or equal to the search radius. The absolute difference of the same type formed by downward searching in the same pixel column of the reference frame image is defined analogously: the absolute difference between the brightness value of the dimmest pixel found searching downward from the finally determined search center in the column with the same column index, and the brightness value of that search center, the distance between them being less than or equal to the search radius.
Further, for a currently determined search center, Step 2 also comprises: if the brightness value of the pixel at the search center is greater than that of the convex-hull-center pixel previously found in the same column, pixels are searched upward and downward from the search center in the current column. If, while searching upward from the search center, the robot detects that a pixel's brightness value does not decrease by the first gradient value, it increments a preset upward gradient-anomaly count once, then judges whether it has finished searching upward through the pixels covered by the search radius along the current column; if so, the robot stops searching upward and determines that the upward counting stop condition is reached; otherwise, when the upward gradient-anomaly count exceeds a first preset error count, it stops searching upward along the current column and determines that the upward counting stop condition is met. Likewise, if while searching downward from the search center a pixel's brightness value is detected not to decrease by the second gradient value, a preset downward gradient-anomaly count is incremented once; the robot then judges whether it has finished searching downward through the pixels covered by the search radius along the current column; if so, it stops searching downward and determines that the downward counting stop condition is reached; otherwise, when the gradient-anomaly count exceeds a second preset error count, it stops searching downward along the current column and determines that the downward counting stop condition is met. Alternatively, while searching upward from the search center, the robot counts, upward along the current column, adjacently located pixels whose brightness value equals 255, recording their number as the upward overexposure count; when the robot detects that the upward overexposure count exceeds a third preset error count and/or that the pixels covered by the search radius have been counted upward along the current column, it stops searching upward and determines that the upward counting stop condition is met. Similarly, while searching downward from the search center, it counts, downward along the current column, adjacently located pixels whose brightness value equals 255, recording their number as the downward overexposure count; when the robot detects that this count exceeds a fourth preset error count and/or that the pixels covered by the search radius have been counted downward along the current column, it stops searching downward and determines that the downward counting stop condition is met.
Further, in Step 3, the method of removing interference points from the already selected convex-hull-center pixels, according to the magnitude relationship between the brightness values within the effective coverage area corresponding to the positioning coordinates of the line laser in the previous dark frame image and the brightness values of the convex-hull-center pixels in the current frame image, comprises: once the robot has traversed all pixel columns of the current frame image, obtained the latest convex-hull-center pixel of each column, and saved the positioning coordinates of the line laser emitted by the line laser emitter in the previous dark frame image, then, for each convex-hull-center pixel of the current frame image, within the circular region centered at the position of those positioning coordinates in the previous dark frame image and with a radius equal to the detection pixel distance: if the robot judges that at least one pixel in the circular region has a brightness value greater, by a preset ambient-light brightness threshold, than the brightness value of the convex-hull-center pixel of the current frame image having the same coordinates as the circle center, the robot determines that this convex-hull-center pixel of the current frame image is an interference point, no line laser position can be found at the interference point, and the interference point is removed from the current frame image.
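The interference-point check just described might be sketched as follows; the disc radius and the ambient-light brightness threshold are illustrative values, and the image is taken as a plain row-major list of brightness rows:

```python
def is_interference(image, center_xy, candidate_brightness,
                    radius=2, ambient_thresh=30):
    """Return True when any pixel inside the disc around the previous
    dark-frame laser coordinate is brighter than the candidate
    convex-hull-center pixel by more than the ambient-light threshold."""
    cx, cy = center_xy
    h, w = len(image), len(image[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                if image[y][x] > candidate_brightness + ambient_thresh:
                    return True  # interference point: drop this candidate
    return False
```

A candidate flagged by this test would be removed from the current frame before the remaining convex-hull-center coordinates are accepted as line-laser positioning coordinates.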
Further, in Step 1, the method of excluding, based on the pixels in the corresponding column that conform to the preset brightness distribution feature, those pixels of the current frame image where no line laser position exists comprises: if the brightness value of the initial pixel position in the current column of the current frame image is greater by a first preset brightness threshold, or by a second preset brightness threshold, than the brightness value of the pixel at the line laser position found in the same column in the previous round, then, starting from the position one reference pixel distance above the initial pixel position along the current column of the current frame image, pixels are searched downward along the current column. If the brightness value of a currently searched pixel is detected to be greater by the first preset brightness threshold than the brightness value of the pixel at the line laser position found in the same column in the previous round, or is detected to equal 255, an error-position count is incremented once and the currently searched pixel is determined to conform to the preset brightness distribution feature. When the robot detects that the error-position count exceeds a reference pixel count threshold, it determines that no line laser position exists in the current column of the current frame image; the pixels of the current column are then set as pixels without a line laser position and excluded from the pixel search range of Step 2, and it is simultaneously determined that the light intensity of the robot's environment is greater than the first preset light intensity threshold. The reference pixel distance is expressed as a number of pixels, so that the reference pixel count threshold equals the reference pixel distance; the line laser position found in the same column in the previous round is the position of the convex-hull-center pixel finally determined in the same pixel column of the reference frame image.
Further, in Step 1, the method of excluding pixels without a line laser position may alternatively comprise: taking the initial pixel position in the current column of the current frame image as a ring center, marking the pixels in the current column covered by an annular region below the ring center — with inner radius equal to a first positioning radius and outer radius equal to a second positioning radius — as first pixels under test, and then computing the average of the brightness values of the first pixels under test. If this average is greater than the brightness value of the pixel at the line laser position found in the same column in the previous round, the first pixels under test are determined to conform to the preset brightness distribution feature and no line laser position exists in the current column; the pixels of the current column are set as pixels without a line laser position and excluded from the pixel search range of Step 2, and it is simultaneously determined that the light intensity of the robot's environment is greater than the first preset light intensity threshold. Here the first positioning radius is smaller than the second positioning radius, and the line laser position found in the same column in the previous round is the position of the convex-hull-center pixel finally determined in the same pixel column of the reference frame image. Alternatively, the same procedure is applied, with the initial pixel position in the current column as ring center, to the annular region above the ring center with the same inner and outer radii: its covered pixels are marked as second pixels under test, the average of their brightness values is computed, and if it exceeds the brightness value of the pixel at the line laser position found in the same column in the previous round, the second pixels under test are determined to conform to the preset brightness distribution feature, no line laser position exists in the current column, the pixels of the current column are excluded from the pixel search range of Step 2, and the ambient light intensity is determined to be greater than the first preset light intensity threshold.
Further, the initial pixel position is the position of the original pixels formed in the image collected by the camera after the line laser emitted by the line laser emitter is reflected from the robot's travelling plane back into the camera's field of view when there is no obstacle in front of the robot. Each original pixel corresponds to one reflection position on the robot's travelling plane and represents, in each column of the same frame image, the search starting point for searching the line laser position. The reference frame image is configured as the bright frame image containing the most recent line laser position found by the robot before the current frame image was collected, where the most recently found line laser position derives from the convex-hull-center pixel set in the corresponding column of the reference frame image.
Further, in Step 1, if the initial pixel position cannot be obtained in the current column of the current frame image, the line laser position found in the same column in the previous round is updated to be the initial pixel position, a second preset pixel distance is updated to be the search radius, and Step 2 is executed again to search out the convex-hull-center pixel of the corresponding column. Here the line laser position found in the same column in the previous round is the position of the convex-hull-center pixel finally determined in the same pixel column of the reference frame image, or the initial pixel position in the same pixel column of the first bright frame image. If, while repeating Step 2, the robot can never search out a convex-hull-center pixel in the same column, it determines that no line laser position can be found in that column.
Further, the method by which the robot extracts the line laser position from the current frame image by executing the brightness-centroid algorithm comprises: the robot traverses the current frame image column by column; the robot searches the pixels of the current column in turn and selects legal pixels from the current column of the current frame image according to the magnitude relationship between the brightness value of the currently searched pixel in the current column and the brightness value of the pixel at the corresponding position in the previous bright frame image, together with the brightness value of the pixel at the corresponding position in the previous bright frame image. Then, in the current column of the current frame image, at least two adjacently located legal pixels are connected to form positioning line segments; after all adjacently located legal pixels have been connected, the longest positioning line segment is selected; if the length of the selected longest positioning line segment is greater than a preset continuous length threshold, the center of the selected longest positioning line segment is set as the line laser position.
Further, the method of selecting legal pixels from the current column, according to the magnitude relationship between the brightness value of the currently searched pixel in the current frame image and the brightness value of the pixel at the corresponding position in the previous bright frame image together with the latter brightness value, comprises: subtracting the brightness value of the pixel at the same row-and-column position in the previous bright frame image from the brightness value of the currently searched pixel in the current frame image to obtain a dark-frame relative difference; when it is detected that the negation of the dark-frame relative difference is greater than a preset brightness difference threshold, and that the brightness value of the pixel at the same row-and-column position in the previous bright frame image is greater than a reference bright-frame brightness threshold, the currently searched pixel in the current frame image is set as a legal pixel.
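For one column, the legal-pixel test and longest-segment selection described above might be sketched as follows; the three threshold values are illustrative assumptions, not values fixed by the patent:

```python
def column_laser_position(dark_col, bright_col, diff_thresh=40,
                          bright_thresh=60, min_len=3):
    """Brightness-centroid sketch for a single column: a pixel is 'legal'
    when the previous bright frame is brighter than the current dark frame
    by more than diff_thresh and itself exceeds bright_thresh; the centre
    row of the longest run of adjacent legal pixels, if long enough,
    is returned as the line laser position."""
    legal = [b - d > diff_thresh and b > bright_thresh
             for d, b in zip(dark_col, bright_col)]
    best_start, best_len, start = 0, 0, None
    # append a sentinel False so the final run is closed off
    for i, ok in enumerate(legal + [False]):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len > min_len:
        return best_start + best_len // 2
    return None
```

Repeating this over every column and connecting the returned row indices reconstructs the laser line segment extracted from a dark frame.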
Further, the image sequence formed by the camera collecting the light reflected from the surface of the object under measurement by the line laser emitted by the line laser emitter is configured so that bright frame images and dark frame images are produced alternately in turn: when the current frame image collected by the camera is a bright frame image, the next frame image collected is a dark frame image; within the time interval between the camera collecting the current bright frame image and the next bright frame image, the camera collects the current dark frame image; after collecting the next bright frame image, the camera collects the next dark frame image. During execution of the laser positioning method, the first frame image of the image sequence is a bright frame image.
Further, the laser positioning method also comprises: when the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, the robot lowers the camera's gain so that the image collected by the camera of the light reflected by the line laser from the surface of the object under measurement is not overexposed; when the light intensity is greater than the first preset light intensity threshold, the robot likewise lowers the camera's exposure time so that the collected image is not overexposed; when the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, the robot raises the camera's gain so that the collected image is not underexposed; and when the light intensity is less than the second preset light intensity threshold, the robot likewise raises the camera's exposure time so that the collected image is not underexposed.
进一步地,当机器人检测到摄像头的当前曝光值大于第一预设曝光阈值时,调高线激光发射器的用于发射线激光的功率档位,以使线激光发射器发射的线激光的强度配置为等于平滑系数与当前曝光值的乘积;当机器人检测到摄像头的当前曝光值小于第二预设曝光阈值时,调低线激光发射器的用于发射线激光的功率档位,以使线激光发射器发射的线激光的强度配置为等于平滑系数与当前曝光值的乘积;其中,第一预设曝光阈值大于第二预设曝光阈值,摄像头的当前曝光值用于反映摄像头在当前光照亮度的环境内的曝光量;平滑系数用于平滑曝光值调整的步长,以便于机器人从所述当前帧图像中搜索出线激光位置。
一种机器人,该机器人的机体装配有结构光模组,结构光模组包括线激光发射器和不设置红外滤光片的摄像头,以使摄像头采集的图像中保留有红外光的成像信息和可见光的成像信息;机器人内部设置控制器,控制器与结构光模组电性连接,控制器被配置为执行所述激光定位方法,以获得所述线激光发射器发射的线激光在当前帧图像中的定位坐标;其中,线激光发射器发射出去的线激光位于摄像头的视场范围内。
进一步地,所述摄像头的水平视角被配置为在机器人的前方接收所述线激光在机体宽度范围内反射回的光线;和/或结构光模组在机器人的机体上的安装高度被配置为与待测的障碍物的高度成正相关关系,以使得待测的障碍物占据所述摄像头的有效视场空间。
进一步地,所述摄像头的上视角的覆盖范围被配置为覆盖到线激光发射器发射的线激光形成的平面的底部;所述摄像头的下视角的覆盖范围被配置为覆盖到线激光发射器发射的线激光在机器人的机体前方的障碍物表面反射回的光线;和/或所述摄像头相对于机器人的中轴线偏转形成的航向角保持在预设误差角度范围内,以使得摄像头的光轴与机器人的行进方向平行,且让摄像头在机器人的前方接收所述线激光在机体宽度范围内反射回的光线;和/或所述摄像头沿着其光轴转动产生的翻滚角保持在预设误差角度范围内,以使摄像头在机器人的前方接收所述线激光在机体宽度范围内反射回的光线,其中,所述摄像头是可转动装配地在机器人的机体上。
进一步地,若所述摄像头与所述线激光模块之间的安装距离越大,则在所述摄像头采集的图像中,用于表示所述线激光在障碍物的表面的反射位置的像素点相对于摄像头的中心的坐标偏移量增大。
进一步地,线激光发射器的发射角度和摄像头的接收角度被设置为:线激光发射器发射线激光至机体的前方的预设探测位置处,线激光在预设探测位置处反射回所述摄像头,其中,线激光在预设探测位置处形成的激光线段的长度大于机器人的机体宽度;每当机器人沿着由当前位置指向所述预设探测位置的方向行走预设行进距离时,预设探测位置与机器人之间的水平距离变小,摄像头采集的图像中的用于表示所述线激光在所述预设探测位置中的同一反射位置的像素点相对于摄像头的中心的坐标偏移量增大。
本发明的技术效果在于:在执行所述激光定位方法以对线激光的反射光线的图像进行跟踪的过程中,无需使用红外滤光片过滤环境光,为采集的图像中保留下红外和可见光波段的全部细节,便于机器人从当前帧图像中搜索出线激光在摄像头的成像平面内形成的像素点关联的位置信息,包括分别在亮帧图像和暗帧图像中采取相适应的算法(比如像素点匹配类算法、像素点搜索类算法)提取出线激光位置,以实现激光定位,进而可用于地图导航定位和用于识别障碍物的深度学习。
当机器人检测到摄像头采集的当前帧图像是亮帧图像时,选择将当前帧图像输入帧间追踪算法对应的处理规则模型中以输出有效的激光位置,在摄像头距离障碍物不过近的场景中有效过滤掉各种环境光干扰,减少对红外滤光片的依赖;具体会在符合凸包特征的像素点中,基于向上搜索的像素点当中产生的亮度值梯度与向下搜索的像素点当中产生的亮度值梯度之间的数值关系及其在相邻两次确定的搜索中心对应的搜索状态下的差异、当前搜索的像素点的亮度值与上一次在同一帧图像的同一列确定出的凸包中心像素点的亮度值之间的关系、及当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出当前列的凸包中心像素点,并在当前列内遍历完相对于初始像素位置的搜索半径内的每个像素点并更新凸包中心像素点,并排除不存在线激光位置的像素点的干扰后,确定出当前列最终的凸包中心像素点,能够在环境光较强的情况下减少干扰点误判现象,从而以跟踪参考帧图像的符合凸包特征的像素点的数量及亮度值的方式来搜索出更加准确的凸包中心像素点,进而在剔除所有干扰点后,将剩余的凸包中心像素点的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标,较为准确地实现机器人对激光在障碍物表面的反射光线的跟踪,适用于机器人导航行走场景中,达到机器人定位障碍物的效果。另一方面,机器人检测到摄像头采集的当前帧图像是暗帧图像时,选择将当前帧图像输入亮度重心算法对应的处理规则模型中以输出连接长度合理的定位线段,在摄像头距离障碍物过近的场景中克服对于环境光干扰较为敏感的问题,对应地,防止机器人因为误判而撞上前方反射激光光线的障碍物。因此,本发明通过结合帧间追踪算法和亮度重心算法来在各种环境光强场景内取长补短,实现在先后交替产生的用于反映线激光的反射光线的亮帧图像和暗帧图像当中完成激光定位。
并且,本发明还引入对摄像头的曝光值的动态调节方式,根据当前环境的状况来调节摄像头的增益和曝光时间,使得摄像头中看到的图像不出现过曝或者欠曝的情况;在此基础上,根据调整后的摄像头增益和曝光时间,对线激光发射器的用于发射线激光的功率档位进行调节,实现在高亮度环境下使用较强的线激光发射功率挡位,使得障碍物在环境光较亮的情况下也能够看的到线激光,且图像不会过曝(比如,室外强环境光下,线激光在白色障碍物反射回摄像头后,降低摄像头的增益或曝光时间),避免由于环境过亮而导致摄像头采集的图像出现过曝,从而找到更加准确的线激光位置;在低亮度环境下使用较弱的线激光发射功率挡位,使得障碍物图像在环境光较暗的情况下过曝没那么强烈,从而不产生更多的反射干扰,便于更准确的线激光位置(对应为凸包中心)。
附图说明
图1是本发明的一实施例公开基于图像信息的激光定位方法的流程图。
实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行详细描述。为进一步说明各实施例,本发明提供有附图。这些附图为本发明揭露内容的一部分,其主要用以说明实施例,并可配合说明书的相关描述来解释实施例的运作原理。配合参考这些内容,本领域普通技术人员应能理解其他可能的实施方式以及本发明的优点。
本发明实施例公开基于图像信息的激光定位方法,具体针对激光光线在待测表面的反射位置进行定位,而且是基于摄像头采集的相关两帧图像内的像素点的亮度值的变化(对应为环境光强度的变化)自适应地筛选出具有代表性的激光线位置信息,克服环境光的干扰,以提高障碍物检测精度和机器人的避障效率。本发明实施例公开的激光定位方法的执行主体是依靠结构光导航定位的机器人,该机器人装配有结构光模组,结构光模组包括线激光发射器和不设置红外滤光片的摄像头,以使摄像头采集的图像中保留有红外光的成像信息和可见光的成像信息;线激光发射器发射出去的线激光位于摄像头的视场范围内,该线激光传感器发射出的线激光可以投射到障碍物的表面,摄像头的视场范围覆盖障碍物的全部或部分轮廓,一般导航定位场景下,机器人在室内外移动过程中,可以通过设置在该机器人上的结构光模组检测机器人的行进方向的前方是否存在障碍物,当机器人行走向障碍物的过程中,通过执行所述激光定位方法来对障碍物表面的反射位置的检测精度施加影响,从而提高定位和避障精度。
需要说明的是,本申请实施例使用的结构光模组泛指任何包含线激光发射器和摄像头的传感器模组。在结构光模组中,线激光发射器用于向外发射线激光。其中,线激光发射器发射出去的线激光可以位于机器人的前方的有效探测区域内,摄像头可以依次采集各种环境光条件下的多帧图像,包括红外光的成像信息和可见光的成像信息,其中,可见光的成像信息可以直接用于构建地图和对地图中的障碍物的位置进行标记。在本实施例中主要是以接收线激光打到待测物体上返回来的反射光的图像为主,需要克服不同波段的环境光的干扰,线激光发射器发射出去的线激光位于摄像头的视场范围内并在待测物体的表面或水平地面上形成激光线段,线激光可帮助探测机器人的行进方向上的物体的轮廓、高度和/或宽度等信息,本实施例主要是提取物体的高度信息为主,适应帧间追踪算法的需求。相对于基于图像传感器的感知方案,线激光发射器能够为摄像头提供更为准确的像素点高度和方向信息,可降低感知运算的复杂度,提高实时性。
具体地,所述结构光模组的工作原理是:线激光发射器向外发射线激光,发射出的线激光在到达障碍物表面后,一部分反射回来并经摄像头中的光学成像系统形成图像上的像素点。而由于物体表面到返回点的距离不同,其反射光飞行时间不同,通过对反射光飞行时间的测量,每个像素点就可获得独立的距离信息和方向信息,进而使用三角换算关系获得高度信息和宽度信息,并标记为图像上的像素点的坐标信息,总称为位置信息。在机器人行进过程中,一方面可控制结构光模组中的线激光发射器对外发射线激光,线激光遇到行进路径上的障碍物后会被反射回来,至少覆盖地面介质、以及地面上的低矮障碍物;另一方面控制结构光模组中的摄像头采集前方区域内的环境图像。在此期间,若线激光探测到的行进路径上的障碍物,会在物体表面形成激光线段,该激光线段可被摄像头采集,即摄像头采集到的图像中会包含由线激光发射器发射出去的线激光遇到物体后形成的激光线段。至于线激光在物体表面形成的激光线段与水平面之间的角度不做限定,例如可以平行或垂直于水平面,也可以与水平面之间成任意角度,具体可根据应用需求而定。其中,每条激光线段包含多个像素点,每个像素点对应障碍物表面上的一个点,大量环境图像中的激光线段上的像素点所代表的障碍物表面上的点可形成障碍物点云数据。这些障碍物点云数据所使用的坐标系可以是机器人所在坐标系,则机器人可根据摄像头所在图像坐标系与机器人所在坐标系之间的转换关系,将激光线段上的像素点坐标转换到机器人所在坐标系下,得到障碍物点云数据。或者,这些障碍物点云数据所使用的坐标系也可以是世界坐标系,则机器人可以根据摄像头所在坐标系、机器人所在坐标系以及世界坐标系之间的转换关系,将激光线段上的像素点坐标转换到机器人所在坐标系下,得到障碍物点云数据;障碍物点云数据可以包含但不限于点的三维坐标信息、颜色信息、反射强度信息等等。在得到障碍物点云数据之后,获得障碍物的高度信息和长宽信息,则基于障碍物点云数据可以识别障碍物的类型。在本申请实施例中,并不限定通过障碍物点云数据识别障碍物的类型及其占据的区域。例如,可以将障碍物点云数据输入到深度学习模型中,识别障碍物的类型。或者,也可以根据障碍物点云数据对障碍物进行描绘,得到障碍物点以及障碍物轮廓,根据障碍物轮廓确定障碍物的类型,也可以根据障碍物点的聚类分析、阈值过滤、以及置信度判断。
具体地,并不限定线激光发射器的实现形态,可以是任何能够发射线激光的设备/产品形态。例如,线激光发射器可以是但不限于激光管。同理,也不限定摄像头的实现形态,凡是可以采集环境图像的视觉类设备均适用于本申请实施例。例如,摄像头可以包括但不限于单目摄像头、双目摄像头等。在本申请实施例中,可以限定线激光发射器发射线激光的波长是红外光线的波长,例如可以是红外激光,在实施所述激光定位方法的过程中,摄像头可以在镜头不装配滤光片(比如红外滤光片)的前提下接收线激光发射器发射的线激光的各种波长的光线;当然,在一些实施例中,对线激光发射器的安装位置、安装角度等,以及线激光发射器与摄像头模组之间的安装位置关系等均不做限定。在本申请实施例中,也不限定线激光发射器的数量,例如可以是一个,也可以是两个或者两个以上。同理,也不限定摄像头的数量,例如可以是一个,也可以是两个或两个以上。
在一些实施例中,所述摄像头的视场角包括垂直视场角和水平视场角。在本实施例中,可以根据应用需求来选择具有合适视场角的摄像头,只要线激光发射器发射出去的线激光位于摄像头的视场范围内即可,至于线激光在物体表面形成的激光线段与水平面之间的角度不做限定,例如可以平行或垂直于水平面,也可以与水平面之间成任意角度,具体可根据应用需求而定。
在一些实施例中,线激光发射器和摄像头组成的结构光模组的安装高度需要针对待检测障碍物的大小确定,若结构光模组在机器人中的安装高度越高,则在机器人的前方覆盖到的纵向空间越大,会让体型较小的障碍物的检测精度变差;若结构光模组在机器人中的安装高度越低,则在机器人的前方覆盖到的纵向空间越小,会让体型较小的障碍物的检测精度得到提高。优选地,线激光发射器安装在不设置红外滤光片的摄像头的上方,线激光发射器的中心线与摄像头的中心线相交于一点。
参阅图1可知,本发明公开基于图像信息的激光定位方法包括:机器人控制摄像头采集线激光发射器发射的线激光在待测物体表面反射回的光线的图像,并检测摄像头采集的图像的亮暗类型,具体是区分出亮帧图像还是暗帧图像,即机器人检测摄像头依次采集的每帧图像是亮帧图像还是暗帧图像;在一些实施例中,机器人在往既定目标位置行进的过程中,控制线激光发射器发射线激光,并控制摄像头采集线激光在待测物体表面反射回的光线的图像,所述结构光模组按照一定的方式工作,线激光发射器按照预设调制周期和发射功率档位对外发射线激光;摄像头周期性进行图像采集,得到一组图像序列,一组图像序列包括至少一帧图像的数据,每帧图像包含线激光打到物体表面或地面上形成的激光线段,一条激光线段包含多个坐标数据,大量环境图像中的激光线段上的坐标数据可形成点云数据。
具体地,检测摄像头采集的图像是亮帧图像还是暗帧图像的方法包括:控制线激光发射器按照预设调制周期射出线激光,当线激光是属于红外激光调制信号时,红外激光调制信号在第一调制子周期输出第一电平(对应于逻辑高电平),经过待测物体反射后,被摄像头采集后形成亮帧图像;红外激光调制信号在第二调制子周期输出第二电平(对应于逻辑低电平),经过待测物体反射后,被摄像头采集后形成暗帧图像;则一个采样周期内反映到摄像头的成像平面依次为一帧亮帧图像和一帧暗帧图像,机器人会在从摄像头取到的图像中,针对每帧图像设置一个图像结构体(图像数据的结构体信息),再缓存起来并对应标记上线激光的亮暗属性,其中,每一帧图像都可以保存为上一帧图像以备跟踪匹配使用。在机器人的配置作用下,线激光发射器发射的线激光在待测物体表面反射回的光线在摄像头的成像平面内形成的图像序列是配置为亮帧图像与暗帧图像依次交替产生,以使:摄像头采集的当前帧图像是亮帧图像时,摄像头采集的下一帧图像是暗帧图像;在摄像头采集当前帧亮帧图像与摄像头采集下一帧亮帧图像的时间间隔内,摄像头采集当前帧暗帧图像,该时间间隔等于摄像头的一个采样周期;在摄像头采集下一帧亮帧图像之后,摄像头采集下一帧暗帧图像。其中,执行所述激光定位方法的过程中,摄像头采集的由所述线激光在待测物体表面反射回的光线的第一帧图像是亮帧图像,记为线激光发射器发出线激光后被采集的第一帧图像,也可以记为第一帧亮帧图像;然后摄像头采集的由所述线激光在待测物体表面反射回的光线的第二帧图像是暗帧图像,记为线激光发射器发出线激光后的第二帧图像,也可以记为第一帧暗帧图像;接着,摄像头采集的由所述线激光在待测物体表面反射回的光线的第三帧图像是亮帧图像,记为线激光发射器发出线激光后的第三帧图像,也可以记为第二帧亮帧图像,然后摄像头采集的下一帧图像是暗帧图像,则机器人基于上述交替产生的方式从摄像头采集的一组图像序列中,按照摄像头的采样周期依次对亮帧图像和暗帧图像进行区分和标记。
优选地,区分亮帧图像和暗帧图像,可以依据图像的平均灰度值来实现。具体地,每采集一帧图像,则先遍历完该帧图像的所有像素点,累加求取所有像素点的亮度值总和,再求取亮度值总和与像素点个数的商值,作为该帧图像的平均亮度值。当机器人检测到该帧图像内存在预设阈值数量的像素点的亮度值都大于平均亮度值时,将该帧图像设置为亮帧图像,可以用于环境光较强的定位场景中搜索所述符合最佳凸包条件的像素点,允许摄像头的曝光量提高;当机器人检测到该帧图像内存在预设阈值数量的像素点的亮度值都小于平均亮度值时,将该帧图像设置为暗帧图像,可以用于环境光较弱的定位场景中搜索符合最佳凸包条件的像素点,允许摄像头的曝光量降低;提高所述激光定位方法对环境光强的适应性。其中,预设阈值数量小于该帧图像内所有像素点的个数。
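上述按平均亮度区分亮帧与暗帧的步骤,可以用如下Python代码给出一个简化示意(其中函数名与阈值比例均为示意性假设,并非本方法的确定实现;实际的预设阈值数量按实施例要求配置):

```python
def classify_frame(pixels, ratio=0.5):
    """pixels: 一帧图像全部像素的灰度值列表(0~255)。
    先求平均亮度值,再统计亮度大于平均值的像素个数,
    超过给定比例(示意性的预设阈值数量)则判为亮帧,否则判为暗帧。"""
    avg = sum(pixels) / len(pixels)          # 亮度值总和与像素点个数的商值
    brighter = sum(1 for p in pixels if p > avg)
    return "bright" if brighter >= ratio * len(pixels) else "dark"
```

例如,多数像素高于平均亮度的帧会被判为亮帧,仅个别高亮像素的帧则被判为暗帧。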
当机器人检测到摄像头采集的当前帧图像是亮帧图像时,机器人通过执行帧间追踪算法来从当前帧图像中搜索出线激光位置,再将线激光位置的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标,以实现对所述线激光的像素位置的定位;其中,机器人将摄像头采集的每帧图像依次输入帧间追踪算法对应的处理规则模型中,输出对应帧亮帧图像内的线激光位置,以获得各个线激光位置连接成的激光线段,便于对待测物体的定位;输入帧间追踪算法对应的处理规则模型中的图像可划分为当前帧图像和上一帧图像、或当前帧图像和下一帧图像,可以使用上一帧亮帧图像与当前帧亮帧图像之间的匹配关系,和/或上一帧暗帧图像与当前帧亮帧图像之间的匹配关系,跟踪线激光的反射位置,在摄像头距离障碍物不过近的场景中有效过滤掉各种环境光干扰,尤其是强环境光的干扰,克服由摄像头与待测物体之间的距离变化引起的像素点纵向跳变的影响,趋于获得精度更高的线激光位置,减少对红外滤光片的依赖。
当机器人检测到摄像头采集的当前帧图像是暗帧图像时,机器人通过执行亮度重心算法来从当前帧图像中提取出线激光位置,再将线激光位置的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标,从而在交替执行帧间追踪算法和亮度重心算法的过程中实现对所述线激光的完整像素位置的定位;其中,机器人选择将每帧图像输入亮度重心算法对应的处理规则模型中,输出对应帧暗帧图像内有效的线激光位置,该有效的线激光位置是在采集到暗帧图像的时刻确定出的具有预测意义的线激光位置,以辅助对应帧亮帧图像内的线激光位置连接出相对准确的激光线段。由于机器人在执行帧间追踪算法的过程中只注重环境光强度变化的敏感性而忽略机器人相对于障碍物的距离,使得机器人在行走过程中容易碰撞上障碍物,所以在采集的当前帧图像由亮帧图像切换为暗帧图像后,机器人转而执行亮度重心算法,实现在摄像头距离障碍物过近的场景中,既可以注意到障碍物的存在,又及时识别出线激光的反射位置,防止机器人撞上前方障碍物,至于由于环境光干扰而产生的线激光位置的误判问题,则在采集的当前帧图像由暗帧图像切换为亮帧图像后,机器人通过执行帧间追踪算法来克服相应的误判问题。因此,本发明实施例通过切换执行帧间追踪算法和亮度重心算法来应对环境光强的干扰,增强机器人在各种行走环境内激光定位的稳定性,实现在先后交替产生的用于反映线激光的反射光线的亮帧图像和暗帧图像当中稳定地完成激光定位。
综上,在执行所述激光定位方法以对线激光的反射光线的图像进行跟踪的过程中,无需使用红外滤光片过滤环境光,在采集的图像中保留下红外和可见光波段的全部细节,便于机器人从当前帧图像中搜索出线激光在摄像头的成像平面内形成的像素点关联的位置信息,包括分别在亮帧图像和暗帧图像中采取相适应的算法(比如像素点匹配类算法、像素点搜索类算法),互补性地提取出各个线激光位置,以实现在各种环境光强场景下对远近不同的障碍物进行激光定位,机器人能够在各种行走环境内克服不同强度的环境光的干扰,进而可用于地图导航定位和用于识别障碍物的深度学习。
作为一种实施例,所述机器人通过执行帧间追踪算法来从当前帧图像中搜索出线激光位置的方法包括:
步骤1、机器人逐列遍历所述当前帧图像,并在所述当前帧图像的当前列中获取初始像素位置,一般地可以在所述当前帧图像的每列中分别获取一个初始像素位置,一个初始像素位置作为其所在列中用于搜索符合最佳凸包条件的像素点的搜索起点;需要说明的是,所述初始像素位置是在机器人的前方无障碍物(或摄像头的视场范围内障碍物)的情况下,线激光发射器发射的线激光在机器人的行进平面(一般为地面)反射回摄像头的视场范围后,形成于摄像头采集的图像中的原始像素点的位置,此时的线激光发射器或摄像头都已经经过校准处理;优选地,机器人的行进平面可以使用所述待测物体的表面来表示;每个原始像素点是对应机器人的行进平面上的一个反射位置,用于表示同一帧图像的各列中用于搜索所述线激光位置的搜索起点,同一帧图像内获得各个原始像素点都优选为位于同一行上,可以包括同一行的位置相邻的像素点。其中,所述待测物体可以是凸起于机器人的行进平面上的障碍物。
同时,根据对应列中符合预设亮度分布特征的像素点来排除掉当前帧图像中不存在线激光位置的像素点,以在当前帧图像中开始搜索符合凸包特征的像素点之前,排除强环境光干扰的像素点的干扰,其中,线激光位置用于表示机器人在当前帧图像中搜索到的所述线激光在待测物体表面的反射位置,在本实施例中,当前帧图像中不存在线激光位置的像素点是存在强环境光干扰的像素点。
步骤2、除了步骤1确定不存在线激光位置的像素点所在列之外,机器人依次遍历所述当前帧图像的相关列,具体是遍历存在初始像素位置的列内的像素点;在所述当前帧图像的当前列中,机器人将当前列存在的初始像素位置设置为搜索中心,再从搜索中心开始沿着当前列向上搜索一个搜索半径内的像素点,可选地,机器人从当前列存在的初始像素位置开始,将第一预设像素距离设置为搜索半径,分别沿着列方向向上和向下搜索符合凸包特征的像素点;其中,当前列是机器人当前遍历的一列,搜索半径同时适用于划定同一列的两个相反的列方向上的覆盖区域;同一列的两个相反的列方向包括从所述初始像素位置开始,沿着同一列向上搜索的方向和沿着同一列向下搜索的方向。然后针对当前确定的一个搜索中心,根据向上搜索的像素点的亮度值与向下搜索的像素点的亮度值在相邻两次确定的搜索中心所对应的搜索状态下的差异,及其当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出当前列中的凸包中心像素点,再将筛选出当前列中的凸包中心像素点更新掉上一次在当前帧图像的当前列中确定的凸包中心像素点;每当当前列中的搜索中心被更新一次,则当前列中设置出的凸包中心像素点也被更新一次,并在当前列内遍历完相对于初始像素位置的搜索半径内的每个像素点并更新凸包中心像素点,确定出当前列最终的凸包中心像素点。在一些实施例中,由于线激光多次反射的原因,会在当前帧图像的同一列上产生多个符合凸包特征的像素点,则会同时更新出两个或两个以上的凸包中心像素点,则最终在当前列内确定出两个或两个以上的凸包中心像素点,然后比较这些凸包中心像素点相对于当前帧图像的坐标系的原点的偏移量,选择纵坐标偏移量的绝对值最小的一个凸包中心像素点更新为最终在当前列内确定出的一个凸包中心像素点,若当前帧图像的坐标系的原点的纵坐标表示机器人的行进平面的纵坐标,则最终在当前列内确定出的一个凸包中心像素点是当前帧图像的当前列内偏离地面最近的一个凸包中心像素点。
其中,参考帧图像是配置为在采集到当前帧图像之前,机器人最新找到的线激光位置所在的一帧亮帧图像;机器人最新找到的线激光位置是从对应列的所述凸包中心像素点当中筛选出来。向上搜索的像素点的亮度值与向下搜索的像素点的亮度值的差异可以延伸为向上搜索的像素点当中产生的亮度值梯度与向下搜索的像素点当中产生的亮度值梯度之间的数值关系,能够在相邻两次确定的搜索中心所对应的搜索状态下进行对比,有利于从搜索到的符合凸包特征的像素点中筛选出凸包中心像素点,而且针对于当前确定的一个搜索中心分别向上和向下搜索的亮度值、与同一列上一次确定的一个搜索中心分别向上和向下搜索的亮度值对比的结果;相邻两次确定的搜索中心分别是同一列内,当前确定的一个搜索中心和上一次确定的一个搜索中心,可以是从所述初始像素位置开始沿着当前列向上的方向上,先后两轮搜索所述搜索半径内的像素点、筛选并更新同一列的凸包中心像素点确定的相邻两个搜索中心;也可以是从所述初始像素位置开始沿着当前列向下的方向上,先后两轮搜索所述搜索半径内的像素点、筛选并更新同一列的凸包中心像素点确定的相邻两个搜索中心,其中一轮搜索对应一个搜索中心,更对应不同列的像素点区域内的一种搜索状态,搜索中心的更新范围处于相对于所述初始像素位置都在一个搜索半径的覆盖区域内,包括所述初始像素位置,以便于对同一列的凸包中心像素点进行更新,不断筛选出更加准确的凸包中心来表示线激光位置;所述帧间匹配关系包括两帧图像之间的像素点数量上的匹配、以及亮度值上的匹配,这两帧图像可以相邻两帧图像,也可以相隔一帧或多帧图像的两帧亮帧图像,具体涉及到的匹配可以是基于机器人行走过程中实时采集的图像当中用于表征所述线激光在障碍物的表面的同一反射位置的像素点的亮度值的变化和纵坐标的变化。
在此基础上,每当机器人按照步骤2从搜索中心开始沿着当前列向上遍历完一个搜索半径内的每个像素点并筛选和更新出所述凸包中心像素点,也从搜索中心开始沿着当前列向下遍历完一个搜索半径内的每个像素点并筛选和更新出所述凸包中心像素点,则机器人开始从当前帧图像的下一列的初始像素位置开始遍历所述搜索半径内的像素点。
在步骤1中,所述当前帧图像是亮帧图像时,上一帧图像是暗帧图像,机器人将上一帧图像保存起来,若机器人已经在执行当前步骤1之前从上一帧图像中搜索出相应的线激光位置(包括对应列上的线激光位置、或所有列上的线激光位置),则将上一帧图像标记为参考帧图像;优选地,参考帧图像是配置为在采集到当前帧图像之前,机器人最新找到的线激光位置所在的一帧亮帧图像,其中,机器人最新找到的线激光位置是来源于对应列的凸包中心像素点,具体在对应列中内,以所述初始像素位置为中心,对应列向上方向的一个搜索半径覆盖区域内以及对应列向下方向的一个搜索半径覆盖区域内,所有像素点依次被更新为所述搜索中心之后,由所述步骤2设置出新的凸包中心像素点。
步骤3、对于机器人已经筛选出的属于每列像素点当中的凸包中心像素点,机器人根据线激光发射器发射的线激光在上一帧暗帧图像中的定位坐标对应的有效覆盖区域内的亮度值与所述凸包中心像素点在所述当前帧图像当中的亮度值的大小关系,先确定出属于干扰点的凸包中心像素点,再从已筛选出的凸包中心像素点当中剔除干扰点,可以是选择逐列遍历的方式剔除当前帧图像内存在的干扰点,以排除环境光的干扰。在机器人遍历完所述当前帧图像内所有列像素点当中的凸包中心像素点以剔除所有干扰点后,将剩余的凸包中心像素点的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标,则机器人在所述当前帧图像内搜索出每一列中确定出的线激光位置,以连接出线激光发射器发射的线激光在待测物体的表面形成的激光线段,并确定机器人已经通过执行帧间追踪算法来从当前帧图像中搜索出线激光位置;其中,同一列中确定出的线激光位置是在机器人遍历完同一列内所有像素点后,由同一列内最后更新出的凸包中心像素点所在的位置,一个线激光位置的坐标使用对应的定位坐标表示。由于机器人在执行步骤1至3的过程中,对于当前帧图像都是采取逐列遍历的方式获取属于相应列上的线激光位置,所以在确定当前遍历的列序号后,对于每个线激光位置的坐标可以只选择纵坐标值表示,以识别出线激光在障碍物表面的反射位置的高度信息,也可用于机器人避障。
具体地,在所述步骤2中,除了不存在线激光位置的像素点所在列之外,机器人将所述步骤1在当前列中获取的初始像素位置设置为搜索中心,即前述实施例中的步骤2中的初始像素位置。每当针对一个搜索中心筛选出一个凸包中心像素点,则将从所述搜索中心开始沿着当前列向上或向下搜索到的相邻一个像素点更新为所述搜索中心,再重新执行步骤2,获得一个新的凸包中心像素点并将新的凸包中心像素点更新为凸包中心像素点;其中,每个所述搜索中心相对于所述初始像素位置都在一个搜索半径的覆盖区域内,搜索半径是设置为第一预设像素距离,优选地,第一预设像素距离小于所述当前帧图像所覆盖的最大像素距离,并处于摄像头的探测范围内,以局限在所述初始像素位置的附近搜索凸包中心像素点,凸包中心像素点在本实施例中是属于所述搜索半径的覆盖范围内的符合凸包特征的像素点,优选地,这里的凸包特征是用于表示线激光打在障碍物的表面形成的图形的特征,该图形的覆盖区域的像素点特征可以是该图形本身覆盖区域内的像素点的亮度值、或该图形的内切圆覆盖区域内的像素点的亮度值、或该图形的外接圆覆盖区域内的像素点的亮度值。值得注意的是,所述已筛选出的凸包中心像素点是所述当前帧图像内存在(能够搜索到)凸包中心像素点的每一列当中,最后更新出的凸包中心像素点;所述已筛选出的凸包中心像素点是所述当前帧图像内存在凸包中心像素点的每一列当中偏离所述当前帧图像的坐标系的原点最近的一个凸包中心像素点,优选地,所述当前帧图像内不一定在每一列内都更新出凸包中心像素点,则最后连成的激光线段不是连续,可以用于表示机器人行进地面的凸起障碍物。
在本实施例中,机器人将当前帧图像的当前列上的符合凸包特征的像素点的集合设置为亮度值从凸包中心开始沿着当前列分别向上下两侧递减的像素点、以及凸包中心组成的像素点集合以形成一个凸包,可以视为形成包围一条激光线段的图形,用于探测线激光的反射位置所在的障碍物表面的局部有效探测区域,凸包中心是该像素点集合内亮度值最大的像素点,并将凸包中心像素点设置为属于凸包中心处的像素点;在符合凸包特征的像素点的集合内,从凸包中心开始沿着同一列向上的方向上,像素点的亮度值沿着当前列向上递减并在相邻两个像素点的亮度值之间产生第一梯度值,并且,从凸包中心开始沿着同一列向下的方向上,像素点的亮度值沿着当前列向下递减并在相邻两个像素点的亮度值之间产生第二梯度值,以使得凸包中心属于所述搜索中心。其中,所述凸包中心的邻域内,存在至少一个像素点的亮度值是等于所述搜索中心的亮度值,因而,在符合凸包特征的像素点的集合内,从凸包中心开始,亮度值沿着当前列分别向上递减产生第一梯度值,亮度值沿着当前列分别向下递减产生第二梯度值,其中,从凸包中心开始向上递减的所需遍历过的像素距离可以小于或等于所述搜索半径,从凸包中心开始向下递减的所需遍历过的像素距离也可以小于或等于所述搜索半径,以在凸包内形成既定的亮度值梯度变化规律;若在同一凸包内遍历到的多个像素点的亮度值的变化规律不符合该既定的亮度值梯度变化规律,则判定相应的像素点不属于符合凸包特征的像素点的集合。
优选地,所述搜索中心的亮度值为数值255,即图像的亮度值按照二值化方式划分出的最大灰度值(最大灰度等级)。需要说明的是,像素点的亮度用于表示照射在待测物体表面的光线的明暗程度,在使用灰度值表示其亮度值的情况下,若灰度值越高则图像越亮,则亮度值越大。图像二值化形成的灰度图只含亮度信息,不含色彩信息,就像黑白图片,亮度由暗到明,变化是连续的,因此要表示灰度图,就需要把亮度值量化,通常划分为0至255共256个级别,其中数值255在本实施例运用于表示一种亮度值,当灰度值的范围为0至255时,本实施例将像素点的亮度值的取值范围也表示为数值0至255,则摄像头采集的每帧图像可以视为转换为灰度图像,其中,亮度即灰度,灰度值越大,亮度值越大,其中数值0可以表示像素点最黑,数值255可以表示像素点最白。本实施例提及的像素点是一帧图像中不可再分割的单元,每帧图像都是由许许多多的像素点构成的,它是以一个单一颜色的小格存在,可以映射到栅格地图内的一个单元格(栅格),灰度图用一个字节的容量来存储一个像素点。
作为一种实施例,在所述步骤2中,对于每一帧图像的存在初始像素位置的每列,不存在线激光位置的像素点所在列除外,所述根据向上搜索的像素点的亮度值与向下搜索的像素点的亮度值在相邻两次确定的搜索中心所对应的搜索状态下的差异,及其当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出凸包中心像素点的方法包括:
在所述当前帧图像的当前列中,机器人从所述搜索中心开始,沿着列方向向上或向下搜索所述符合凸包特征的像素点的过程中,控制所述搜索中心的亮度值与上一次搜索到的位于同一列的凸包中心像素点的亮度值进行比较,可以但不限于使用亮度值的差值判断二者的大小关系;所述上一次搜索到的位于同一列的凸包中心像素点是针对上一次确定的搜索中心来在当前帧图像的同一列中筛选出的凸包中心像素点,上一次确定的搜索中心是与当前确定的搜索中心在所述当前帧图像的当前列向下或向上相邻的一个像素点,进一步地,参考帧图像的同一列像素点的列排序与所述当前帧图像的当前列的列排序相等,参考帧图像的相同排序的一列当中的不一定具有相同的行排序的凸包中心像素点。
若机器人检测到所述搜索中心的亮度值比上一次搜索到的位于同一列的凸包中心像素点的亮度值大,则在当前列中,自所述搜索中心向上搜索像素点,并计数所述亮度值按照所述第一梯度值递减的像素点的数量,即每当向上搜索到一个亮度值减小的像素点、且当前搜索到的像素点的亮度值相对于上一次搜索的像素点的亮度值(当前搜索到的像素点的下方的一个像素点的亮度值)减小一个所述第一梯度值,则对所述亮度值按照所述第一梯度值递减的像素点的数量加一计数一次,可以理解为在机器人沿着所述当前帧图像的当前列向上搜索符合凸包特征的像素点,直至满足向上计数停止条件。
优选地,所述第一梯度值随着搜索次数的变化而作适应性的变化,例如,当前搜索的像素点越靠近所述凸包的上边缘,则所述第一梯度值变得越大,在同一个凸包中,越靠近所述凸包的上边缘的像素点的亮度值减小的越剧烈,满足一种既定的亮度值梯度变化规律;满足向上计数停止条件时,停止计数所述亮度值按照所述第一梯度值递减的像素点的数量,此时机器人也停止沿着列方向继续向搜索像素点,再将亮度值按照所述第一梯度值递减的像素点的数量标记为向上梯度下降数量。
机器人还自所述搜索中心向下搜索像素点,并计数所述亮度值按照所述第二梯度值递减的像素点的数量,即每当向下搜索到一个亮度值减小的像素点、且当前搜索到的像素点的亮度值相对于上一次搜索的像素点的亮度值(当前搜索到的像素点的上方的一个像素点的亮度值)减小一个所述第二梯度值,则对所述亮度值按照所述第二梯度值递减的像素点的数量加一计数一次,可以理解为在机器人沿着所述当前帧图像的当前列向下搜索符合凸包特征的像素点,直至满足向下计数停止条件。
优选地,所述第二梯度值随着搜索次数的变化而作适应性的变化,例如,当前搜索的像素点越靠近所述凸包的下边缘,则所述第二梯度值变得越大,在同一个凸包中,越靠近所述凸包的下边缘的像素点的亮度值减小的越剧烈,满足一种既定的亮度值梯度变化规律;当满足向下计数停止条件时,停止计数所述亮度值按照所述第二梯度值递减的像素点的数量,再将亮度值按照所述第二梯度值递减的像素点的数量标记为向下梯度下降数量。
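为便于理解上述自搜索中心分别向上、向下按梯度递减计数的过程,以下给出一个Python简化示意(假设梯度值固定、不随搜索次数自适应变化;函数名与参数均为示意性假设,并非本方法的确定实现):

```python
def count_gradient_descent(col, center, radius, grad):
    """col: 当前列各行像素的亮度值列表; center: 搜索中心行号;
    radius: 搜索半径(像素距离); grad: 示意性的固定梯度值。
    分别向上(行号减小)和向下(行号增大)统计亮度值按 grad 递减的
    像素点数量,遇到不按梯度递减的像素即停止(对应计数停止条件的简化)。"""
    def count(step):
        n = 0
        prev = col[center]
        i = center + step
        while 0 <= i < len(col) and abs(i - center) <= radius:
            if prev - col[i] != grad:   # 梯度异常,简化为立即停止
                break
            n += 1
            prev = col[i]
            i += step
        return n
    # 返回 (向上梯度下降数量, 向下梯度下降数量)
    return count(-1), count(1)
```

实际实施例中梯度值随靠近凸包上下边缘而增大,并允许若干次梯度异常后才停止计数,此处仅保留最核心的双向计数结构。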
在确定停止沿着所述当前帧图像的当前列自所述搜索中心向上搜索和计数,且确定停止沿着所述当前帧图像的当前列自所述搜索中心向下搜索和计数后,当机器人判断到所述当前帧图像的当前列中计数出的向上梯度下降数量大于或等于参考帧图像的同一列(参考帧图像中参与比较的一列的列排序与所述当前帧图像的当前列的列排序相等,也可计为参考帧图像的当前列)中计数出的向上梯度下降数量、和/或判断到所述当前帧图像的当前列中计数出的向下梯度下降数量大于或等于参考帧图像的同一列(参考帧图像中参与比较的一列的列排序与所述当前帧图像的当前列的列排序相等,也可计为参考帧图像的当前列)中计数出的向下梯度下降数量时,表明机器人在靠近待测障碍物,且在这一过程中,用于表征障碍物的同一局部区域的符合凸包特征的像素点的数量相对于安装高度变大之前有所增加,则当前帧图像内所能搜索到的符合凸包特征的像素点的数量增多,机器人可以检测到障碍物的更多细节部分,虽然靠近障碍物后存在碰撞的风险,但当前帧图像切换为暗帧图像后或采集到下一帧图像(暗帧图像)后会切换为执行亮度重心算法以起到避障作用;在此基础上,在所述当前帧图像的当前列所遍历的像素点当中,若检测到第一梯度值与第二梯度值都不等于第一预设梯度参数,且第一梯度值与第二梯度值的差值的绝对值小于第二预设梯度参数,且沿着当前列向上搜索到的亮度值最小的像素点的亮度值与当前确定的搜索中心处的像素点的亮度值的差值的绝对值大于参考帧图像的同一列像素点中向上搜索形成的同一类型的亮度值的差值的绝对值,且沿着当前列向下搜索到的亮度值最小的像素点的亮度值与当前确定的搜索中心处的像素点的亮度值的差值的绝对值大于参考帧图像的同一列像素点中向下搜索形成的同一类型的亮度值的差值的绝对值,则机器人将当前确定的搜索中心标记为凸包中心像素点,并确定当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的匹配关系符合凸包内的像素点的位置在机器人行走过程中的变化预期。
具体地,所述参考帧图像的同一列像素点中向上搜索形成的同一类型的亮度值的差值的绝对值是在参考帧图像中,从与所述当前列的列排序相同的一列中最终确定的搜索中心开始,沿着与所述当前列的列排序相同的一列中,向上搜索到的亮度值最小的像素点的亮度值与同一列上最终确定的搜索中心的亮度值的差值的绝对值,其中,向上搜索到的亮度值最小的像素点相对于同一列上最终确定的搜索中心之间的距离小于或等于所述搜索半径。并且,所述参考帧图像的同一列像素点中向下搜索形成的同一类型的亮度值的差值的绝对值是在参考帧图像中,从与所述当前列的列排序相同的一列中最终确定的搜索中心开始,沿着与所述当前列的列排序相同的一列,向下搜索到的亮度值最小的像素点的亮度值与同一列上最终确定的搜索中心的亮度值的差值的绝对值,其中,向下搜索到的亮度值最小的像素点相对于同一列上最终确定的搜索中心之间的距离小于或等于所述搜索半径。
由于相反的两个方向上的亮度值最小的像素点的差值的绝对值在增大,所以确定用于表征所述线激光的同一反射位置的像素点在同一帧图像内相对于图像中心的像素偏移量(也可理解为图像坐标系内相对于坐标系原点的坐标偏移量)增加,进一步地确定机器人在靠近待测障碍物,且在这一过程中,用于表征障碍物的同一局部区域的像素点的数量相对于安装高度变大之前有所增加,则当前帧图像内所能搜索到的像素点的数量增多,机器人可以检测到障碍物的更多细节部分,则证明在当前帧图像的当前列搜索到的像素点当中的凸包中心像素点是相对准确表示所述线激光打在待测障碍物的表面的激光线段的一个点,直至遍历并更新同一帧图像的所有列的凸包中心像素点后,获得各列当中的线激光位置并连接或拟合为代表所述线激光的激光线段,以实现对激光线段所在的障碍物的定位,便于机器人及时避障。
需要说明的是,在执行步骤2的过程中,机器人先从搜索中心开始沿着当前帧图像的一列向上依次搜索像素点,直至沿着当前帧图像的一列向上搜索完一个搜索半径内的所有像素点,再从同一个搜索中心开始沿着当前帧图像的一列向下依次搜索像素点,直至沿着当前帧图像的一列向下搜索完一个搜索半径内的所有像素点。或者,机器人先从搜索中心开始沿着当前帧图像的一列向下依次搜索像素点,直至沿着当前帧图像的一列向下搜索完一个搜索半径内的所有像素点,再从同一个搜索中心开始沿着当前帧图像的一列向上依次搜索像素点,直至沿着当前帧图像的一列向上搜索完一个搜索半径内的所有像素点。
优选地,所述参考帧是执行所述激光定位方法的过程中,摄像头采集的由所述线激光在待测物体表面反射回的光线的第一帧图像时,所述参考帧图像的同一列像素点中的凸包中心像素点是位于参考帧图像的同一列中的初始像素位置时,参考帧图像的同一列中的初始像素位置是参考帧图像的同一列中的线激光位置,最能代表线激光打在待测物体表面上的激光线段的一点。其中,执行所述激光定位方法的过程中,摄像头采集的由所述线激光在待测物体表面反射回的光线的第一帧图像是亮帧图像,记为线激光发射器发出线激光后的第一帧图像,也可以记为第一帧亮帧图像。
需要说明的是,第一预设梯度参数小于第二预设梯度参数;第一预设梯度参数优选为数值0以避免在亮度值恒定不变的像素点区域(比如局部过曝区域,虽然其内部的像素点可能是凸包中心)选择出符合凸包特征的像素点;第二预设梯度参数优选为数值25以将用于表征障碍物的同一反射位置的像素点的坐标跳变控制在可控范围,避免引入亮度值剧烈变化的像素点,只注重于待测障碍物的有效探测区域;既可以减少搜索量和计算量,也能够提高检测精度。
针对当前确定的一个搜索中心,在上述实施例对应的步骤2中还包括,沿着列方向搜索像素点的停止条件(所述向上计数停止条件和所述向下计数停止条件),具体包括:
若所述搜索中心处的像素点的亮度值比上一次搜索到的位于同一列的凸包中心像素点的亮度值大,则表明当前帧图像的当前列的所述搜索中心(一开始是所述初始像素位置)处的像素点的亮度值不等于上一次搜索到的合理的凸包中心处的亮度值,且随着机器人靠近障碍物,二者之间的亮度值的差值增大,其中,上一次搜索到的位于同一列的凸包中心像素点是针对上一次确定的搜索中心来在当前帧图像的同一列中筛选出的凸包中心像素点,上一次确定的搜索中心是与当前确定的搜索中心在所述当前帧图像的当前列向下或向上相邻的一个像素点。则机器人开始在当前帧图像的当前列中,自所述搜索中心向上搜索像素点,目的是自所述搜索中心向上搜索像素点,以期通过计数所述符合凸包特征的像素点来筛选出当前列中的凸包中心像素点;并且自所述搜索中心向下搜索像素点,目的是自所述搜索中心向下搜索像素点,以期通过计数所述符合凸包特征的像素点来筛选出当前列中的凸包中心像素点。具体地,在当前帧图像的当前列中,自所述搜索中心向上对亮度值按照所述第一梯度值递减的像素点进行计数,优选为从所述搜索中心开始,每沿着当前列向上搜索到一个符合凸包特征的像素点,则加一计数一次,获得亮度值按照所述第一梯度值递减的像素点的数量;并且在当前帧图像的当前列中,自所述搜索中心向下计数亮度值按照所述第二梯度值递减的像素点的数量,以实现从对应的初始像素位置开始分别沿着列方向向上和向下搜索符合凸包特征的像素点,优选为从所述搜索中心开始,每沿着当前列向下搜索到一个符合凸包特征的像素点,则加一计数一次,获得亮度值按照所述第二梯度值递减的像素点的数量。
在一些实施例中,若机器人在自所述搜索中心向上搜索的过程中检测到像素点的亮度值不是按照所述第一梯度值递减,则对预先设置的向上梯度异常计数量计数一次,然后机器人判断其沿着当前帧图像的当前列向上是否搜索完所述搜索半径内所覆盖的像素点,是则机器人停止沿着当前帧图像的当前列向上搜索像素点并确定达到向上计数停止条件,再通过执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向上搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;否则在所述向上梯度异常计数量大于第一预设误差次数时,停止沿着当前帧图像的当前列向上搜索像素点并确定满足向上计数停止条件,再继续执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向上搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;直至相对所述初始像素位置向上搜索完所述搜索半径内所覆盖的所有像素点。
在一些实施例中,在自所述搜索中心向下搜索的过程中检测到像素点的亮度值不是按照所述第二梯度值递减,则对预先设置的向下梯度异常计数量计数一次,然后机器人判断其沿着当前帧图像的当前列向下是否搜索完所述搜索半径内所覆盖的像素点,是则机器人停止沿着当前帧图像的当前列向下搜索像素点并确定达到向下计数停止条件,再按照步骤2中的所述根据向上搜索的像素点的亮度值与向下搜索的像素点的亮度值在相邻两次确定的搜索中心所对应的搜索状态下的差异,及其当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向下搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;否则在所述向下梯度异常计数量大于第二预设误差次数时,停止沿着当前帧图像的当前列向下搜索像素点并确定满足向下计数停止条件,再按照步骤2中的所述根据向上搜索的像素点的亮度值与向下搜索的像素点的亮度值在相邻两次确定的搜索中心所对应的搜索状态下的差异,及其当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向下搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点,直至相对所述初始像素位置向下搜索完所述搜索半径内所覆盖的所有像素点。
对于第一预设误差次数以及第二预设误差次数,需要补充的是,在所述搜索半径内搜索到的像素点在一定误差允许范围内不符合凸包特征,毕竟所述搜索中心不一定是凸包中心,则需要设置预设误差次数进行判断,其中,误差的来源在于机器人行走过程中采集到的同一发射角度的线激光在同一待测物体表面形成的反射位置会发生纵向跳变,体现为不同帧图像中用于表征同一反射位置的像素点沿着纵坐标轴向上偏移。因此,若机器人在自所述搜索中心向上计数的过程中检测到像素点的亮度值不是按照所述第一梯度值递减、和/或在自所述搜索中心向下计数的过程中检测到像素点的亮度值不是按照所述第二梯度值递减,则确定沿着其中一个列方向搜索到的相邻两个像素点之间的梯度值出现异常,并对预先设置的梯度异常计数量计数一次;当机器人检测到所述梯度异常计数量大于预设误差次数、和/或计数完所述搜索半径内所覆盖的像素点时,机器人停止计数,并停止搜索符合凸包特征的像素点。
在一些实施例中,机器人在自所述搜索中心向上搜索的过程中,沿着当前帧图像的当前列向上对所述亮度值为数值255且位置相邻接的像素点进行计数,并将所述亮度值为数值255且位置相邻接的像素点的数量标记为向上过曝数量,形成向上方向中的搜索中心处的像素点连续为255的亮度值的过曝区域的计数量。然后机器人判断其沿着当前帧图像的当前列向上是否搜索完所述搜索半径内所覆盖的像素点,是则机器人停止沿着当前帧图像的当前列向上搜索像素点并确定达到向上计数停止条件,再通过执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向上搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;否则在机器人检测到向上过曝数量大于第三预设误差次数时,停止沿着当前帧图像的当前列向上搜索像素点并确定满足向上计数停止条件,再继续执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向上搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;直至相对所述初始像素位置向上搜索完所述搜索半径内所覆盖的所有像素点。实现:当机器人检测到向上过曝数量大于第三预设误差次数、和/或沿着当前帧图像的当前列向上计数完所述搜索半径内所覆盖的像素点时,机器人停止沿着当前帧图像的当前列向上搜索像素点并确定满足向上计数停止条件。
在一些实施例中,机器人在自所述搜索中心向下搜索的过程中,沿着当前帧图像的当前列向下对所述亮度值为数值255且位置相邻接的像素点进行计数,并将所述亮度值为数值255且位置相邻接的像素点的数量标记为向下过曝数量,形成向下方向上的搜索中心处的像素点连续为255的亮度值的过曝区域的计数量。然后机器人判断其沿着当前帧图像的当前列向下是否搜索完所述搜索半径内所覆盖的像素点,是则机器人停止沿着当前帧图像的当前列向下搜索像素点并确定达到向下计数停止条件,再通过执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向下搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;否则在机器人检测到向下过曝数量大于第四预设误差次数时,停止沿着当前帧图像的当前列向下搜索像素点并确定满足向下计数停止条件,再继续执行步骤2筛选出凸包中心像素点,然后开始将从所述搜索中心开始沿着当前列向下搜索到的相邻一个像素点更新为所述搜索中心,再重复执行步骤2以更新凸包中心像素点;直至相对所述初始像素位置向下搜索完所述搜索半径内所覆盖的所有像素点。实现:当机器人检测到向下过曝数量大于第四预设误差次数、和/或沿着当前帧图像的当前列向下计数完所述搜索半径内所覆盖的像素点时,机器人停止沿着当前帧图像的当前列向下搜索像素点并确定满足向下计数停止条件。
综上,机器人每一轮在搜索中心附近搜索凸包中心像素点的过程中,通过对从搜索中心开始的一对相反方向上搜索的像素点进行亮度比较和对过曝和梯度反常情况进行计数以裁决停止搜索条件。
作为一种实施例,在所述步骤3中,所述根据线激光发射器发射的线激光在上一帧暗帧图像中的定位坐标对应的有效覆盖区域内的亮度值与所述凸包中心像素点在所述当前帧图像当中的亮度值的大小关系,从已经筛选出的凸包中心像素点当中剔除干扰点的方法包括:机器人遍历完所述当前帧图像的所有列的像素点并从所述当前帧图像中获取到凸包中心像素点,且也保存有线激光发射器发射的线激光在上一帧暗图像中的定位坐标的情况下,对于所述当前帧图像中的每个凸包中心像素点,在以线激光发射器发射的线激光在上一帧暗图像中的定位坐标所在位置为圆心,且半径为探测像素距离的圆域内,若机器人判断到该圆域内存在至少一个像素点的亮度值比所述当前帧图像内与所述圆心具有相同坐标的凸包中心像素点的亮度值大一个预设环境光亮度阈值,则机器人确定所述当前帧图像内与所述圆心具有相同坐标的凸包中心像素点是干扰点,所述当前帧图像内与所述圆心具有相同坐标的凸包中心像素点的附近区域存在环境光干扰,导致机器人在该干扰点处找不到线激光位置,则需要将该干扰点从所述当前帧图像剔除,克服环境光的干扰,减少定位误判;其中,所述圆域是所述定位坐标对应的有效覆盖区域,优选地,所述圆域的半径(探测像素距离)不等于所述搜索半径;在执行所述帧间追踪算法的过程中,当前帧图像是亮帧图像,上一帧图像是暗帧图像,即上一帧暗图像,此时,上一帧暗图像中已经由所述亮度重心算法输出其线激光位置,并使用上一帧暗图像中的定位坐标(其中一列中确定出的线激光位置的坐标);在本实施例中,机器人在上一帧暗图像中选择作为所述圆心的像素点的坐标是等于所述当前帧图像中获取到一个凸包中心像素点的坐标,则可以使用所述当前帧图像内与所述圆心具有相同坐标的凸包中心像素点的亮度值进行比较,其中,所述预设环境光亮度阈值具体与机器人行进速度或旋转速度关联,优选地,机器人行进速度或旋转速度越大,机器人实时采集的图像中用于表示同一反射位置的像素点发生的位置跳变越剧烈,在两个像素点的亮度值之间产生的梯度差异变得更大,则将所述预设环境光亮度阈值设置得更大,以适应去噪精度。
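上述以上一帧暗帧图像中的定位坐标为圆心、在圆域内比较亮度来判定干扰点的步骤,可以用如下Python代码给出一个简化示意(图像以二维列表表示,函数名与参数均为示意性假设,并非本方法的确定实现):

```python
def is_interference(bright_img, dark_img, cx, cy, r, ambient_thresh):
    """bright_img: 当前帧亮帧图像; dark_img: 上一帧暗帧图像(二维亮度列表)。
    (cx, cy): 当前帧中某凸包中心像素点的坐标,同时作为暗帧中圆域的圆心;
    r: 探测像素距离(圆域半径); ambient_thresh: 预设环境光亮度阈值。
    若暗帧圆域内存在像素亮度比该凸包中心亮度大一个阈值,则判为干扰点。"""
    center_val = bright_img[cy][cx]
    h, w = len(dark_img), len(dark_img[0])
    for y in range(max(0, cy - r), min(h, cy + r + 1)):
        for x in range(max(0, cx - r), min(w, cx + r + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                if dark_img[y][x] > center_val + ambient_thresh:
                    return True    # 圆域内存在强环境光,凸包中心判为干扰点
    return False
```

按实施例所述,阈值可随机器人行进速度或旋转速度增大而调大,此处将其作为入参留给调用方配置。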
作为一种实施例,在所述步骤1中,所述根据对应列中符合预设亮度分布特征的像素点来排除掉当前帧图像中不存在线激光位置的像素点的方法包括:若所述当前帧图像的当前列中的初始像素位置的亮度值比上一轮找到的位于同一列的线激光位置处的像素点的亮度值大第一预设亮度阈值,或者所述当前帧图像的当前列中的初始像素位置的亮度值比上一轮找到的位于同一列的线激光位置处的像素点的亮度值大第二预设亮度阈值,则从沿着所述当前帧图像的当前列向上距离所述当前帧图像的当前列中的初始像素位置一个参考像素距离的位置开始,沿着所述当前帧图像的当前列向下搜索像素点;其中,第一预设亮度阈值小于第二预设亮度阈值,第一预设亮度阈值优选为数值10,第二预设亮度阈值优选为数值235,所述当前帧图像的当前列中的初始像素点的亮度值相对于上一轮找到的位于同一列的线激光位置产生的亮度值的变化较小,或上一轮找到的位于同一列的线激光位置产生的亮度值足够大以接近数值255(最高级灰度值)时,当前列可能存在环境光的影响,需要一个参考位置开始沿着所述当前帧图像的当前列搜索像素点以排除亮度值异常的像素点或其所在列,而且第一预设亮度阈值与第二预设亮度阈值的和值小于数值255(最高级灰度值),则第一预设亮度阈值与第二预设亮度阈值作为粗筛所需排除列的亮度值判断条件。
然后在搜索像素点的过程中,若检测到当前搜索的一个像素点的亮度值比上一轮找到的位于同一列的线激光位置处的像素点的亮度值大第一预设亮度阈值,或检测到当前搜索的一个像素点的亮度值等于数值255(最高级灰度值),则对误差位置计数量计数一次,并确定当前搜索到的像素点是符合预设亮度分布特征的像素点,且所述当前帧图像的当前列中的初始像素点的附近区域既可以存在比上一轮找到的位于同一列的线激光位置处的像素点的亮度值大第一预设亮度阈值,也可以存在等于数值255(最高级灰度值),容易受环境光的影响;其中,参考像素距离使用像素点的数量表示,以使参考像素计数阈值等于参考像素距离。
当机器人检测到误差位置计数量大于参考像素计数阈值时,确定所述当前帧图像的当前列中不存在线激光位置,则所述当前帧图像的当前列中的像素点设置为不存在线激光位置的像素点,再将所述当前帧图像的当前列中的像素点排除在步骤2的像素点搜索范围之外,同时确定机器人所处的环境的光强大于第一预设光强阈值,表示从沿着所述当前帧图像的当前列向上距离所述当前帧图像的当前列中的初始像素位置一个参考像素距离的位置开始,至所述当前帧图像的当前列最下方的像素点所在的位置之间的区域存在强环境光干扰;参考像素计数阈值优选为数值25,且设置为等于参考像素距离,则误差位置计数量在大于本实施例中符合预设亮度分布特征的像素点的搜索起点相对于同一列的初始像素位置的位置偏移量时,确定无法在所述当前帧图像的当前列中搜索到线激光位置,存在环境光干扰。从而实现在同一列中设置的参考测试区域内逐行遍历比较亮度符合要求的像素点并记录下次数,以判断强环境光。
优选地,参考像素距离等于25个像素点组成的像素距离,则从向上距离所述当前帧图像的当前列中的初始像素位置25个像素点的位置开始,沿着所述当前帧图像的当前列向下搜索像素点,直至遍历至所述当前帧图像的当前列的最下方,则在当前列内形成参考测试区域,其中,从向上距离所述当前帧图像的当前列中的初始像素位置一个参考像素距离的位置开始,沿着所述当前帧图像的当前列向下延伸至所述当前帧图像的当前列的最下方所形成区域是参考测试区域;在遍历该参考测试区域的每个像素点的过程中,每当检测当前遍历的一个像素点的亮度值比上一轮找到的位于同一列的线激光位置处的像素点的亮度值大数值10,或检测到当前遍历的一个像素点的亮度值等于数值255(最高级灰度值),则计数一次,并确定当前搜索到的像素点是符合预设亮度分布特征的像素点,直至符合预设亮度分布特征的像素点的数量大于数值25。其中,参考像素距离(或参考像素计数阈值)与第二预设亮度阈值的和值大于数值255(最高级灰度值),第一预设亮度阈值与第二预设亮度阈值的和值小于数值255,对比出所述当前帧图像的当前列中的初始像素位置相对于上一轮找到的位于同一列的线激光位置产生的亮度值的变化情况以及在所述当前帧图像的当前列中搜索的像素点的亮度值变化情况,反映出当前列对应的区域的环境光强情况。
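上述在参考测试区域内逐行统计亮度异常像素以判定强环境光列的过程,可以用如下Python代码给出一个简化示意(阈值取实施例中的优选数值,函数名与参数均为示意性假设):

```python
def column_has_no_laser(col, init_row, prev_laser_val,
                        ref_dist=25, bright_diff=10, count_thresh=25):
    """col: 当前列各行像素的亮度值列表; init_row: 初始像素位置的行号;
    prev_laser_val: 上一轮找到的同列线激光位置处像素的亮度值。
    从初始像素位置向上 ref_dist 行的位置开始向下遍历(参考测试区域),
    统计亮度比 prev_laser_val 大 bright_diff、或等于 255 的像素个数;
    超过 count_thresh 则判定该列受强环境光干扰、不存在线激光位置。"""
    start = max(0, init_row - ref_dist)
    n = 0
    for row in range(start, len(col)):
        v = col[row]
        if v - prev_laser_val > bright_diff or v == 255:
            n += 1                     # 误差位置计数量加一
            if n > count_thresh:
                return True
    return False
```

返回 True 的列会被排除在后续凸包中心搜索范围之外。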
需要说明的是,上一轮找到的位于同一列的线激光位置是属于所述参考帧图像的同一列像素点中最终确定出的凸包中心像素点所在的位置,即在所述参考帧图像的同一列像素点中确定出线激光位置(经过前述实施例剔除干扰点后设置出的凸包中心像素点所在的位置),所述参考帧图像的同一列像素点是在所述参考帧图像内,与所述当前帧图像的当前列的列排序相同的一列的像素点;每在一帧亮帧图像中设置出同一列线激光位置,则记为一轮找到位于对应图像的同一列的线激光位置,每一轮对应的搜索图像都是不同帧图像。
作为一种实施例,在所述步骤1中,所述根据对应列中符合预设亮度分布特征的像素点来排除掉当前帧图像中不存在线激光位置的像素点的方法包括:
以所述当前帧图像的当前列中的初始像素位置为圆环中心,在所述当前帧图像的当前列中,将位于圆环中心下方的、内径为第一定位半径且外径为第二定位半径的圆环区域所覆盖的像素点标记为第一待测像素点;等效于:以所述当前帧图像的当前列中的初始像素位置为圆心,设置第一半径为第一定位半径的第一圆;同时以所述当前帧图像的当前列中的初始像素位置为圆心,设置第二半径为第二定位半径的第二圆,其中,第一定位半径小于第二定位半径;然后在初始像素位置的下方(当前列向下的方向),将第二圆与第一圆之间围成的圆环区域在所述当前帧图像的当前列内覆盖的像素点标记为第一待测像素点。
然后计算第一待测像素点的亮度值的平均值,即求取所述当前帧图像的当前列内所有第一待测像素点的亮度值的和值与所述当前帧图像的当前列内的第一待测像素点的总数量的比值,作为第一待测像素点的亮度值的平均值;其中,第一圆与第二圆之间形成的圆环覆盖区域作为判断光强变化的过渡区域,依赖于第一定位半径与第二定位半径的设置,第一定位半径优选为3,第二定位半径优选为12,可以使用像素距离表示,其单位为像素点的数量,形成足够大的判断光强变化的过渡区域。
若第一待测像素点的亮度值的平均值大于上一轮找到的位于同一列的线激光位置处的像素点的亮度值,则确定第一待测像素点是符合预设亮度分布特征的像素点,并确定所述当前帧图像的当前列中不存在线激光位置,所述当前帧图像的当前列对应的反射区域存在强环境光干扰,则所述当前帧图像的当前列中的像素点设置为不存在线激光位置的像素点,再将所述当前帧图像的当前列中的像素点排除在步骤2的像素点搜索范围之外,同时确定机器人所处的环境的光强大于第一预设光强阈值;其中,上一轮找到的位于同一列的线激光位置是属于参考帧图像的同一列像素点中最终确定出的凸包中心像素点所在的位置,优选为参考帧图像的同一列像素点中的初始像素位置,参考帧图像的同一列像素点是与所述当前帧图像的当前列的列排序相等的一列。
同理地,以所述当前帧图像的当前列中的初始像素位置为圆环中心,在所述当前帧图像的当前列中,将位于圆环中心上方的、内径为第一定位半径且外径为第二定位半径的圆环区域所覆盖的像素点标记为第二待测像素点;等效于:以所述当前帧图像的当前列中的初始像素位置为圆心,设置第一半径为第一定位半径的第一圆;同时以所述当前帧图像的当前列中的初始像素位置为圆心,设置第二半径为第二定位半径的第二圆,其中,第一定位半径小于第二定位半径;然后在初始像素位置的上方(当前列向上的方向),将第二圆与第一圆之间围成的圆环区域在所述当前帧图像的当前列内覆盖的像素点标记为第二待测像素点,不同于所述第一待测像素点。
然后计算第二待测像素点的亮度值的平均值,即求取所述当前帧图像的当前列内所有第二待测像素点的亮度值的和值与所述当前帧图像的当前列内的第二待测像素点的总数量的比值,作为第二待测像素点的亮度值的平均值;其中,第一圆与第二圆之间形成的圆环覆盖区域作为判断光强变化的过渡区域,依赖于第一定位半径与第二定位半径的设置,第一定位半径优选为3,第二定位半径优选为12,可以使用像素距离表示,其单位为像素点的数量,形成足够大的判断光强变化的过渡区域。
若第二待测像素点的亮度值的平均值大于上一轮找到的位于同一列的线激光位置处的像素点的亮度值,则确定第二待测像素点是符合预设亮度分布特征的像素点,并确定所述当前帧图像的当前列中不存在线激光位置,则所述当前帧图像的当前列中的像素点设置为不存在线激光位置的像素点,再将所述当前帧图像的当前列中的像素点排除在步骤2的像素点搜索范围之外,同时确定机器人所处的环境的光强大于第一预设光强阈值;其中,上一轮找到的位于同一列的线激光位置是属于参考帧图像的同一列像素点中最终确定出的凸包中心像素点所在的位置,优选为参考帧图像的同一列像素点中的初始像素位置,参考帧图像的同一列像素点是与所述当前帧图像的当前列的列排序相等的一列。
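上述以初始像素位置为圆环中心、取其上方或下方圆环区域覆盖的待测像素点并比较亮度平均值的判断,可以用如下Python代码给出一个简化示意(在单列上,圆环区域退化为距圆环中心 [内径, 外径] 范围内的行;半径默认值取实施例中的优选数值,函数名为示意性假设):

```python
def ring_exceeds(col, init_row, prev_val, r1=3, r2=12, upward=False):
    """col: 当前列各行像素的亮度值列表; init_row: 初始像素位置(圆环中心);
    prev_val: 上一轮找到的同列线激光位置处像素的亮度值;
    r1/r2: 第一/第二定位半径(内径/外径); upward 为 True 时取圆环中心上方区域。
    待测像素点亮度平均值大于 prev_val 时返回 True,表示该列存在强环境光干扰。"""
    step = -1 if upward else 1
    rows = [init_row + step * d for d in range(r1, r2 + 1)]
    vals = [col[r] for r in rows if 0 <= r < len(col)]
    return bool(vals) and sum(vals) / len(vals) > prev_val
```

分别以 upward=False、upward=True 调用即对应实施例中第一待测像素点与第二待测像素点的判断。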
在一些实施例中,在所述步骤1中,若在所述当前帧图像的当前列中无法获取到初始像素位置,则将上一轮找到的位于同一列的线激光位置更新为所述初始像素位置,并将第二预设像素距离更新为所述搜索半径,再重复执行所述步骤2,具体会将上一轮找到的位于同一列的线激光位置更新为所述当前帧图像的当前列中的搜索中心,并设置搜索半径为第二预设像素距离,第二预设像素距离不等于第一预设像素距离,然后从最新设置的搜索中心开始沿着当前列向上搜索一个搜索半径内的像素点,并从搜索中心开始沿着当前列向下搜索一个搜索半径内的像素点,并对搜索半径内的每个像素点进行遍历,直至搜索出对应列中的凸包中心像素点;具体包括根据向上搜索的像素点的亮度值与向下搜索的像素点的亮度值在相邻两次确定的搜索中心所对应的搜索状态下的差异,及其当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出当前列中的凸包中心像素点后,将从所述搜索中心开始沿着当前列向上或向下搜索到的相邻一个像素点更新为所述搜索中心,再重新执行步骤2,获得一个新的凸包中心像素点并将新的凸包中心像素点更新为凸包中心像素点,其中,每个所述搜索中心相对于所述初始像素位置都在一个搜索半径的覆盖区域内,其中,所述搜索半径设置为第二预设像素距离,优选地,第二预设像素距离不等于第一预设像素距离;则每当当前列中的搜索中心被更新一次,则当前列中设置出的凸包中心像素点也被更新一次;其中,上一轮找到的位于同一列的线激光位置是属于参考帧图像的同一列像素点中最终确定出的凸包中心像素点所在的位置。另外,若机器人在重复执行所述步骤2的过程中,若在同一列内(所述当前帧图像的当前列内)始终搜索不出凸包中心像素点,则确定机器人在同一列内找不到线激光位置,再将所述当前帧图像的当前列中的像素点排除在步骤2的像素点搜索范围之外,同时确定所述当前帧图像的当前列所处的环境的光强比较大以至于无法识别线激光打在待测物体内的反射位置。
综上,当机器人检测到摄像头采集的当前帧图像是亮帧图像时,选择将当前帧图像输入帧间追踪算法对应的处理规则模型中以输出有效的激光位置,在摄像头距离障碍物不过近的场景中有效过滤掉各种环境光干扰,减少对红外滤光片的依赖;具体会在符合凸包特征的像素点中,基于向上搜索的像素点当中产生的亮度值梯度与向下搜索的像素点当中产生的亮度值梯度之间的数值关系及其在相邻两次确定的搜索中心对应的搜索状态下的差异、当前搜索的像素点的亮度值与上一次在同一帧图像的同一列确定出的凸包中心像素点的亮度值之间的关系、及当前帧图像相对于参考帧图像在同一列像素点中的同一类型的数值形成的帧间匹配关系,筛选出当前列的凸包中心像素点,并在当前列内遍历完相对于初始像素位置的搜索半径内的每个像素点并更新凸包中心像素点,并排除不存在线激光位置的像素点的干扰后,确定出当前列最终的凸包中心像素点,能够在环境光较强的情况下减少干扰点误判现象,从而以跟踪参考帧图像的符合凸包特征的像素点的数量及亮度值的方式来搜索出更加准确的凸包中心像素点,进而在剔除所有干扰点后,将剩余的凸包中心像素点的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标,较为准确地实现机器人对激光在障碍物表面的反射光线的跟踪,适用于机器人导航行走场景中,达到机器人定位障碍物的效果。
作为一种实施例,为在暗帧图像内寻找出所述线激光位置,以克服帧间追踪算法所不能应付的场景,保证避障效果,需要在采集到暗帧图像后切换为执行亮度重心算法,并允许保存上一帧图像(属于亮帧图像,对应为线激光发射器发射的线激光在待测物体表面反射回的光线的亮帧图像)以备亮度重心算法使用,与所述帧间追踪算法达到优势互补的效果,既能克服环境光干扰,又能持续跟踪线激光的反射位置。
具体地,所述机器人通过执行亮度重心算法来从当前帧图像中提取出线激光位置的方法包括:机器人逐列遍历所述当前帧图像,其中,所述当前帧图像是暗帧图像,并被配置为按列划分,则可以按列排序依次取出每一列的像素点的亮度值并对一定搜索区域内的像素点的数量进行计数,以便于筛选出预测线激光的反射位置的线激光位置,可能不同于前述的凸包中心像素点的属性,包括亮度值和纵坐标位置,由于是暗帧图像,表现出的像素点的亮度值不大,所以不能采集到前述实施例公开的初始像素位置。
机器人依次搜索当前列的各个像素点,具体是从当前列的最上一行的像素点开始,依次遍历至当前列的最下一行的像素点;或者从当前列的最下一行的像素点开始,依次遍历至当前列的最上一行的像素点,以完成同一列中的每个像素点的搜索、亮度值的检测以及像素点数量的统计;在搜索当前列的像素点的过程中,根据当前帧图像的当前列内当前搜索的像素点的亮度值与上一帧亮帧图像的对应位置处的像素点的亮度值的大小关系以及上一帧亮帧图像的对应位置处的像素点的亮度值,从当前帧图像的当前列中筛选出合法像素点,其中,上一帧亮帧图像的对应位置处的像素点所在的坐标位置等于当前帧图像内当前搜索的像素点所在的坐标位置,至少相对于坐标系原点形成的相对位置关系是等效,至于本实施例所述的大小关系是通过设定阈值来判断确定出来的。然后在当前帧图像的当前列内,将位置相邻接的至少两个合法像素点连接形成定位线段;当连接完位置相邻接的所有合法像素点后,选择出长度最大的定位线段,其中,连接完位置相邻接的所有合法像素点后,存在多条定位线段,每条定位线段是由同一列内像素位置连续排列的合法像素点依次连接形成的一条线段,然后选择所述定位线段进行长度比较,获得长度最大的定位线段,原因在于,合法像素点的亮度值相对于上一帧亮帧图像的同一位置的像素点的亮度值的差距被控制在合理的阈值范围内,且上一帧亮帧图像的同一位置的像素点的亮度值被控制在一定亮度值范围内以防止强光干扰,则依据连续排列的合法像素点的亮度值,能够预测出线激光在待测物体表面的反射位置的大体范围,也能增强环境光的抗干扰能力。长度最大的定位线段优选为直线段且平行于图像坐标系的纵坐标轴;优选地,选择出的长度最大的定位线段设置为所述预测激光线段,再将所述预测激光线段的中心处的像素点的坐标设置为线激光发射器发射的线激光在当前帧图像中的定位坐标。若选择出的长度最大的定位线段的长度大于预设连续长度阈值,则将选择出的长度最大的定位线段的中心设置为线激光位置,以等效于前述执行帧间追踪算法的实施例提及的凸包中心以提高检测障碍物的准确性。
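上述将位置相邻接的合法像素点连接为定位线段、并取最长线段中心作为线激光位置的步骤,可以用如下Python代码给出一个简化示意(函数名与参数均为示意性假设,并非本方法的确定实现):

```python
def laser_position(legal_rows, min_len):
    """legal_rows: 当前列中按行号升序排列的合法像素点行号列表;
    min_len: 预设连续长度阈值。
    将行号连续的合法像素点连成定位线段,选出最长的一条;
    若其长度大于 min_len,返回其中心行号作为线激光位置,否则返回 None。"""
    if not legal_rows:
        return None
    best, cur = [], [legal_rows[0]]
    for r in legal_rows[1:]:
        if r == cur[-1] + 1:          # 与上一合法像素点位置相邻接,延长线段
            cur.append(r)
        else:                         # 断开,结算当前线段
            if len(cur) > len(best):
                best = cur
            cur = [r]
    if len(cur) > len(best):
        best = cur
    return best[len(best) // 2] if len(best) > min_len else None
```

例如合法行号 [1,2,3,4,5,9,10] 在阈值 3 下,最长线段为 1~5,其中心行号 3 即被设置为该列的线激光位置。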
需要补充的是,一个线激光位置的坐标使用对应的定位坐标表示,对于当前帧图像都是采取逐列遍历的方式获取属于相应列上的线激光位置,所以在确定当前遍历的列序号后,对于每个线激光位置的坐标可以只选择纵坐标值表示,以识别出线激光在障碍物表面的反射位置的高度信息,也可用于机器人避障。
在上述实施例的基础上,所述根据当前帧图像内当前搜索的像素点的亮度值与上一帧亮帧图像的对应位置处的像素点的亮度值的大小关系以及上一帧亮帧图像的对应位置处的像素点的亮度值,从当前帧图像的当前列中筛选出合法像素点的方法包括:将在所述当前帧图像内当前搜索的像素点的亮度值减去上一帧亮帧图像的具有相同行列位置处的像素点的亮度值,获得暗帧图像相对差值;当检测到暗帧图像相对差值的相反数大于预设亮度差阈值,且上一帧亮帧图像的具有相同行列位置处的像素点的亮度值大于参考亮帧图像亮度阈值时,将当前在所述当前帧图像内搜索的像素点设置为合法像素点,作为从当前帧图像的当前列中筛选出的合法像素点,表明所述当前帧图像内的像素点没有受到较强的环境光的干扰;其中,预设亮度差阈值是从相邻两帧图像内的同一位置处的像素点的亮度值的差值出发,设置出的让当前帧图像采集的像素点信息少受环境光强度干扰的经验值;而参考亮帧图像亮度阈值是从亮帧图像中的像素点的亮度值出发,设置出的让亮帧图像中的像素点少受环境光强度干扰的经验值。
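上述利用暗帧图像相对差值与亮帧亮度阈值筛选合法像素点的判断,可以用如下Python代码给出一个简化示意(函数名与阈值参数均为示意性假设,实际阈值为实施例所述的经验值):

```python
def legal_pixels(dark_col, bright_col, diff_thresh, bright_thresh):
    """dark_col: 当前帧(暗帧)某列的亮度值列表;
    bright_col: 上一帧亮帧图像同列同位置的亮度值列表。
    暗帧像素亮度减去亮帧同位置亮度得到暗帧图像相对差值;
    其相反数大于 diff_thresh,且亮帧同位置亮度大于 bright_thresh 时,
    该像素为合法像素点。返回合法像素点的行号列表。"""
    out = []
    for i, (d, b) in enumerate(zip(dark_col, bright_col)):
        rel = d - b                    # 暗帧图像相对差值
        if -rel > diff_thresh and b > bright_thresh:
            out.append(i)
    return out
```

筛选得到的行号列表即可交给后续的定位线段连接步骤使用。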
综上,机器人检测到摄像头采集的当前帧图像是暗帧图像时,选择将当前帧图像输入亮度重心算法对应的处理规则模型中以输出连接长度合理的定位线段,在摄像头距离障碍物过近的场景中克服对于环境光干扰较为敏感的问题,对应地,防止机器人因为误判而撞上前方反射激光光线的障碍物。因此,本发明通过结合帧间追踪算法和亮度重心算法来在各种环境光强场景内取长补短,实现在先后交替产生的用于反映线激光的反射光线的亮帧图像和暗帧图像当中完成激光定位。
在一些实施例中,所述根据当前帧图像内当前搜索的像素点的亮度值与上一帧亮帧图像的对应位置处的像素点的亮度值的大小关系以及上一帧亮帧图像的对应位置处的像素点的亮度值,从当前帧图像的当前列中筛选出合法像素点的方法还可以表示为:当前在上一帧亮帧图像内遍历的像素点的亮度值减去所述当前帧图像内具有相同行列位置的像素点的亮度值,获得暗帧图像相对差值;当检测到暗帧图像相对差值大于预设亮度差阈值,且当前在上一帧亮帧图像内遍历的像素点的亮度值大于参考亮帧图像亮度阈值时,将所述当前帧图像内具有相同行列位置的像素点设置为合法像素点,作为从当前帧图像中筛选出的合法像素点,表明所述当前帧暗帧图像没有受到较强的环境光的干扰。需要补充的是,线激光发射器发射的线激光在待测物体表面反射回的光线在摄像头的成像平面内形成的图像序列是配置为亮帧图像与暗帧图像依次交替产生,以使:摄像头采集的当前帧图像是亮帧图像时,摄像头采集的下一帧图像是暗帧图像;在摄像头采集当前帧亮帧图像与摄像头采集下一帧亮帧图像的时间间隔内,摄像头采集当前帧暗帧图像;在摄像头采集下一帧亮帧图像之后,摄像头采集下一帧暗帧图像。
作为一种实施例,所述激光定位方法还包括对于摄像头的曝光信息的调节,具体包括:
当机器人检测到其所处的环境的光强大于第一预设光强阈值时,表示机器人检测到当前所处环境内的可见光的强度较大,摄像头的曝光量变得比较大,则机器人降低摄像头的增益(图像信号放大参数),获得第一增益,以使得摄像头采集的所述线激光在待测物体表面反射回的光线的图像不出现过曝,尤其是可见光部分的图像信息不容易过曝,以提高在环境光较强的场景内提取出前述线激光位置的准确性;第一预设光强阈值主要是依据环境中的较强可见光对摄像头采集的图像的过曝程度设置的强光阈值。
当机器人检测到其所处的环境的光强大于第一预设光强阈值时,表示机器人检测到当前所处环境内的可见光的强度较大,摄像头的曝光量变得比较大,则机器人降低摄像头的曝光时间,获得第一曝光时间,以使得摄像头采集的所述线激光在待测物体表面反射回的光线的图像不出现过曝,尤其是可见光部分的图像信息不容易过曝,以提高在环境光较强的场景内提取出前述线激光位置的准确性;第一预设光强阈值主要是依据环境中的较强可见光对摄像头采集的图像的过曝程度设置的强光阈值。
当机器人检测到其所处的环境的光强小于第二预设光强阈值时,表示机器人检测到当前所处环境内的可见光的强度较小,摄像头的曝光量变得比较小,则机器人提高摄像头的增益(图像信号放大参数),获得第二增益,以使得摄像头采集的所述线激光在待测物体表面反射回的光线的图像不出现欠曝(过暗),优选地,第一增益小于第二增益;但如果前述实施例调节出第一增益之前的增益本身就很大以应对所处环境的光强,则第一增益不一定小于第二增益;从而提高在环境光较弱的场景内提取出前述线激光位置的准确性;第二预设光强阈值主要是依据环境中的较暗可见光对摄像头采集的图像的曝光程度设置的弱光阈值,第二预设光强阈值远小于第一预设光强阈值。
当机器人检测到其所处的环境的光强小于第二预设光强阈值时,表示机器人检测到当前所处环境内的可见光的强度较小,摄像头的曝光量变得比较小,则机器人提高摄像头的曝光时间,获得第二曝光时间,以使得摄像头采集的所述线激光在待测物体表面反射回的光线的图像不出现欠曝,优选地,第一曝光时间小于第二曝光时间;但如果前述实施例调节出第一曝光时间之前的曝光时间本身就很大以应对所处环境的光强,则第一曝光时间不一定小于第二曝光时间。
因此,本实施例根据当前环境的状况来调节摄像头的增益和曝光时间,使得摄像头中看到的图像不出现过曝或者欠曝的情况,实现对所述摄像头的动态曝光调节。
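上述根据环境光强双阈值调节摄像头增益与曝光时间的逻辑,可以用如下Python代码给出一个简化示意(按比例系数调节仅为示意性假设,实际调节步长与上下限由具体摄像头决定):

```python
def adjust_exposure(light, gain, exp_time, hi_thresh, lo_thresh, step=0.8):
    """light: 当前环境光强; gain/exp_time: 摄像头当前增益与曝光时间;
    hi_thresh/lo_thresh: 第一/第二预设光强阈值(hi_thresh 远大于 lo_thresh)。
    光强大于 hi_thresh 时按比例调低增益和曝光时间以防过曝;
    小于 lo_thresh 时按比例调高以防欠曝;否则保持不变。"""
    if light > hi_thresh:
        return gain * step, exp_time * step      # 降低增益与曝光时间
    if light < lo_thresh:
        return gain / step, exp_time / step      # 提高增益与曝光时间
    return gain, exp_time
```

实际实现中还应限制增益上限,避免低曝光环境下增益过高引入噪点。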
需要补充的是,对于摄像头的增益的调节在合理范围内,避免产生噪点;噪点一般是指摄像头在低曝光的环境下将摄像头的增益调太高产生。
作为一种实施例,摄像头的曝光信息应用于线激光发射器的功率档位调节时,存在以下情况:
当机器人检测到摄像头的当前曝光值大于第一预设曝光阈值时,调高线激光发射器的用于发射线激光的功率档位,以使线激光发射器发射的线激光的强度配置为等于平滑系数与当前曝光值的乘积;
优选例一,摄像头的当前曝光值包括所述第三增益和/或所述第三曝光时间,则在机器人所处的环境的光强越大时,调节出所述第三增益和/或所述第三曝光时间越大,以适应当前所处环境光强的曝光量,此时的平滑系数被设置为合理的数值,用于平滑曝光值调整的步长,起到抑制过曝的效果;
优选例二,摄像头的当前曝光值包括所述第一增益和/或所述第一曝光时间,则在机器人所处的环境的光强越大时,按照前述实施例调节出来的所述第一增益和/或所述第一曝光时间变小,此时的平滑系数被设置为合理的数值,用于平滑曝光值调整的步长,能够在所述第一增益和/或所述第一曝光时间在变为更小时,抑制线激光发射器发射的线激光的强度也变得更小,以适应当前所处环境光强的曝光量。
在前述优选例一或优选例二的基础上,自动调节线激光发射器的用于发射线激光的功率档位,直至线激光发射器发射的线激光的强度(所述线激光发射器的发射功率)等于平滑系数与当前曝光值的乘积,从而实现在高亮度环境下使用较强的线激光挡位,也避免当前曝光值变化得较为剧烈和摄像头采集到的线激光发射器发射的线激光在待测物体的表面反射回来的光线的图像不出现过曝,以便于机器人按照前述实施例公开的帧间追踪算法从所述当前帧图像中准确搜索出线激光位置,至少保证像素点的亮度值以及两个像素点之间的梯度值在合理范围内,障碍物在环境光较亮的情况下也能够被摄像头采集到线激光在其表面的反射光线图像。
当机器人检测到摄像头的当前曝光值小于第二预设曝光阈值时,调低线激光发射器的用于发射线激光的功率档位,以使线激光发射器发射的线激光的强度配置为等于平滑系数与当前曝光值的乘积。其中,第一预设曝光阈值大于第二预设曝光阈值,以反映出当前所处的环境光较暗。
优选例三,摄像头的当前曝光值包括所述第四增益和/或所述第四曝光时间;则在机器人所处的环境的光强越小时,将预先调节出的所述第三增益和/或所述第三曝光时间变小以适应当前所处环境光强所需的曝光量,此时的平滑系数被设置为合理的数值,用于平滑曝光值调整的步长,起到抑制欠曝的效果。
优选例四,摄像头的当前曝光值包括所述第二增益和/或所述第二曝光时间,则在机器人所处的环境的光强越小时,按照前述实施例调节出来的所述第二增益和/或所述第二曝光时间变大,此时的平滑系数被设置为合理的数值,用于平滑曝光值调整的步长,能够在所述第二增益和/或所述第二曝光时间在变大时抑制线激光发射器发射的线激光的强度也变得更大,以适应当前所处环境光强的曝光量。
在前述优选例三或优选例四的基础上,自动调节线激光发射器的用于发射线激光的功率档位,直至线激光发射器发射的线激光的强度(所述线激光发射器的发射功率)等于平滑系数与当前曝光值的乘积,避免当前曝光值变化得较为剧烈和摄像头采集到的线激光发射器发射的线激光在待测物体的表面反射回来的光线的图像不出现欠曝,以便于机器人在较暗环境内从所述当前帧图像中准确搜索出线激光位置,使得障碍物在环境光较暗的情况下采集到摄像头中的图像过曝没那么强烈,从而不产生更多的反射干扰,便于使用所述激光定位方法找到更准确的线激光位置。
优选地,摄像头的当前曝光值用于反映摄像头在当前光照亮度的环境内的曝光量;摄像头的当前曝光值可以是正面反映所处环境光强程度;也可以反面反映当前所处的环境光强程度,比如摄像头的当前曝光值被调节得越小,则反证出当前所处的环境光较强,则会引导线激光发射器的用于发射线激光的功率档位调高,使得障碍物在环境光较亮的情况下也被采集到线激光在该障碍物表面的反射光线。
综上,线激光发射器发射的线激光的强度可以描述线激光发射器的用于发射线激光的功率档位与当前曝光值之间的映射关系,再结合平滑系数的调整作用,适应于所述结构光模组在一系列不同曝光下对线激光的反射光线的图像数据的采集,从而根据调整后的摄像头增益和曝光时间,对线激光发射器的用于发射线激光的功率档位进行调节,实现在高亮度环境下使用较强的线激光发射功率挡位,使得障碍物在环境光较亮的情况下也能够看的到线激光,且图像不会过曝(比如,室外强环境光下,线激光在白色障碍物反射回摄像头后,降低摄像头的增益或曝光时间),避免由于环境过亮而导致摄像头采集的图像出现过曝,从而找到更加准确的线激光位置;在低亮度环境下使用较弱的线激光发射功率挡位,使得障碍物图像在环境光较暗的情况下过曝没那么强烈,从而不产生更多的反射干扰,便于更准确的线激光位置(对应为凸包中心)。
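上述按摄像头当前曝光值调节线激光功率档位、并将目标激光强度配置为平滑系数与当前曝光值乘积的映射关系,可以用如下Python代码给出一个简化示意(逐级调档仅为示意性假设,函数名与参数并非本方法的确定实现):

```python
def update_laser_power(exposure, hi, lo, smooth, level, max_level):
    """exposure: 摄像头当前曝光值; hi/lo: 第一/第二预设曝光阈值(hi > lo);
    smooth: 平滑系数(用于平滑曝光值调整的步长); level: 当前功率档位。
    曝光值大于 hi 则调高功率档位,小于 lo 则调低功率档位;
    返回 (新的功率档位, 目标线激光强度 = smooth * exposure)。"""
    if exposure > hi:
        level = min(max_level, level + 1)   # 调高发射功率档位
    elif exposure < lo:
        level = max(0, level - 1)           # 调低发射功率档位
    return level, smooth * exposure
```

调用方可在每个采样周期更新一次档位与目标强度,使激光强度跟随曝光值平滑变化。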
基于前述激光定位方法的各个实施例,本发明还公开一种机器人,该机器人的机体装配有结构光模组,结构光模组包括线激光发射器和不设置红外滤光片的摄像头,摄像头在镜头不装配滤光片(比如红外滤光片)的前提下接收线激光发射器发射的线激光的各种波长的光线,以使摄像头采集的图像中保留有红外光的成像信息和可见光的成像信息。机器人内部设置控制器,控制器与结构光模组电性连接,控制器被配置为执行所述激光定位方法,以获得所述线激光发射器发射的线激光在当前帧图像中的定位坐标,即获取亮帧图像中的线激光位置和暗帧图像内的线激光位置;其中,线激光发射器发射出去的线激光位于摄像头的视场范围内。
在本实施例中,控制器可控制线激光发射器和摄像头进行工作。可选地,控制器一方面对摄像头进行曝光控制,另一方面可控制线激光发射器在摄像头曝光期间对外发射线激光,以便于摄像头采集由线激光探测到的环境图像。其中,控制器可以控制位于摄像头和线激光发射器同时工作,或者交替工作,对此不做限定。需要说明的是,被摄影物体(待测物体的表面)反射的激光光线,通过摄像头的镜头投射到感光片上,使之发生化学变化,并产生图像,这个过程被称为曝光。
Preferably, the controller may be an image processing hardware device with an FPGA and a DSP. To achieve higher processing speed, and given the FPGA's clear advantage in streaming parallel computation, operations involving morphological image processing are performed in the FPGA while the remaining operations run in the DSP. Even as the image resolution increases, no extra processing time is consumed. At an image size of 2048x2048 pixels, the processing speed of this image processing system reaches 6 frames/s, satisfying both the accuracy and real-time requirements of robot obstacle avoidance.
In some embodiments, the camera's horizontal field of view is configured to receive, in front of the robot, the light of the line laser reflected back across the width of the robot body, obtaining the environment image probed by the line laser. To achieve the required horizontal field of view, either a wide-angle or a non-wide-angle lens may be used, depending on the body width; it suffices that the line laser can be captured across the full body width.
And/or the mounting height of the structured-light module on the robot body is configured to be positively correlated with the height of the obstacle to be detected, so that the obstacle occupies the camera's effective field of view. The mounting heights of both the line laser emitter and the camera must be determined according to the size of the obstacle to be detected: the greater the module's mounting height on the robot body, the larger the vertical space that can be covered, but the farther the module sits from a small obstacle, so fewer local details are captured and the detection accuracy for small obstacles drops; the smaller the mounting height, the smaller the vertical coverage, but the closer the module is to a small obstacle, so more local detail of the obstacle is captured and the detection accuracy for small obstacles improves.
As for the mounting height, within the structured-light module the line laser emitter and the camera may sit at different heights. For example, on top of the robot the line laser emitter may be higher than the camera, or the camera higher than the line laser emitter; of course, the two may also share the same height. In practice, the structured-light module is installed on some self-moving robot (an autonomous mobile device such as a sweeping robot or a patrol robot), in which case the distances from the line laser emitter and the camera to the robot's working surface (e.g., the floor) differ; for example, the camera may be 32 mm from the working surface and the line laser emitter 47 mm.
As an embodiment, the coverage of the camera's upward viewing angle is configured to cover the bottom of the plane formed by the line laser emitted by the emitter, specifically a laser plane composed of multiple line laser beams, which spreads along the emission direction of the line laser emitter and strikes the ground the robot travels on; preferably, the angle between the laser plane and the robot's working surface is 15 degrees. The coverage of the camera's downward viewing angle is configured to cover the light of the line laser reflected from the surfaces of obstacles in front of the robot body. The camera's pitch angle can therefore be adjusted as required by the map images needed for navigation. The downward viewing angle, formed by probing from top to bottom, constitutes the camera's depression angle; the upward viewing angle, formed by probing from bottom to top, constitutes the camera's elevation angle. The camera's pitch angle is divided into the downward viewing angle and the upward viewing angle; preferably, the upward viewing angle is set to 24 degrees and the downward viewing angle to 18 degrees.
And/or the heading angle formed by the deflection of the camera (the optical axis of its lens) relative to the robot's central axis is kept within a preset error angle range, so that the camera's optical axis is parallel to the robot's direction of travel and the camera receives, in front of the robot, the light of the line laser reflected back across the body width, detecting obstacles directly ahead in real time as the robot moves.
And/or the roll angle produced by the camera rotating about its optical axis is kept within a preset error angle range, so that the camera receives, in front of the robot, the light of the line laser reflected back across the body width. The camera is rotatably mounted on the robot body, and the preset error angle range is set to -0.01 to 0.01 degrees, so that the heading angle formed by the deflection of the camera (the optical axis of its lens) relative to the robot's central axis stays around 0 degrees, and the roll angle produced by the camera rotating about its optical axis also stays around 0 degrees.
After the structured-light module is installed, the angle between the center line of the line laser emitted by the line laser emitter and the emitter's mounting baseline is equivalent to the angle between the laser plane and the robot's working surface, preferably 15 degrees. The mounting baseline refers to the line on which the line laser emitter lies when it is mounted at some height above the robot, or, when the line laser emitter and the camera share the same mounting height, the line on which both lie.
This embodiment does not limit the emission angle of the line laser emitter. The emission angle is related to the detection distance the robot carrying the structured-light module must satisfy, the robot's body width, and the mechanical distance between the line laser emitter and the camera. Once these three quantities are fixed, the emission angle of the line laser emitter can be obtained directly through trigonometric relations, i.e., the emission angle is a fixed value.
Of course, if a particular emission angle is required, it can be achieved by adjusting the detection distance the robot must satisfy and the mechanical distance between the line laser emitter and the camera. In some application scenarios, with the required detection distance and the robot's body width fixed, the emission angle of the line laser emitter can vary within a certain range by adjusting the mechanical distance between the line laser emitter and the camera.
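The fixed-value trigonometric relation described above can be illustrated with a small sketch. The exact geometry is an assumption (the patent only states that the angle follows from trigonometric relations between the baseline and the detection distance); here the emitter-camera baseline and the preset detection distance are taken as the two legs of a right triangle:

```python
import math

def emission_angle_deg(baseline_mm: float, detection_dist_mm: float) -> float:
    """Angle (degrees) by which the line laser must be tilted toward the
    camera's optical axis so the beam crosses it at the preset detection
    distance, assuming the emitter-camera baseline and the detection
    distance form the two legs of a right triangle (illustrative layout)."""
    return math.degrees(math.atan2(baseline_mm, detection_dist_mm))
```

With a 41 mm baseline and a 300 mm detection distance this gives roughly 7.8 degrees; enlarging the baseline at a fixed detection distance increases the angle, matching the adjustability described in the paragraph above.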
As an embodiment, the greater the installation distance between the camera and the line laser module, the larger the coordinate offset, relative to the camera center, of the pixels representing the reflection position of the line laser on an obstacle's surface in the images captured by the camera; such pixels include, but are not limited to, the convex hull center pixels, the pixels matching the convex hull feature, and the line laser positions obtained in the foregoing embodiments. When the robot approaches an obstacle, the greater the installation distance between the camera and the line laser module, the larger the vertical jump of the pixels in the captured image, so more local detail information is obtained and the obstacle detection accuracy improves.
It should be noted that the installation distance refers to the mechanical distance (also called the baseline distance) between the line laser emitter and the camera, which can be set flexibly according to the application requirements of the structured-light module. The mechanical distance between the line laser emitter and the camera, the detection distance the robot must satisfy, and the robot's body width together determine, to some extent, the size of the measurement blind zone. For the robot carrying the structured-light module, the body width is fixed, while the measurement range and the mechanical distance between the line laser emitter and the camera can be set flexibly as required, meaning that neither the mechanical distance nor the blind zone is a fixed value. While guaranteeing the robot's measurement range (or performance), the blind zone should be minimized as far as possible; moreover, the larger the mechanical distance between the line laser emitter and the camera, the larger the controllable distance range, which helps better control the blind zone size and improves obstacle detection accuracy.
In some application scenarios, the structured-light module is applied to a sweeping robot, for example mounted on the sweeping robot's bumper or on the robot body. For a sweeping robot, reasonable ranges of the mechanical distance between the line laser emitter and the camera are given below as examples: the mechanical distance may be greater than 20 mm; further optionally, greater than 30 mm; still further, greater than 41 mm. It should be noted that these ranges of mechanical distance apply not only to the scenario of a structured-light module on a sweeping robot, but also to its application on other devices whose dimensions are close or similar to those of a sweeping robot.
As an embodiment, the emission angle of the line laser emitter and the receiving angle of the camera are set so that the line laser emitter emits the line laser to a preset detection position in front of the body, where the line laser is reflected back to the camera, forming in the captured image the pixels matching the convex hull feature or the convex hull center pixels; the length of the laser line segment formed by the line laser at the preset detection position is greater than the robot's body width. The position at which the line laser reflects after striking the ground depends on the lateral emission angle of the line laser (i.e., the emission angle of the line laser emitter) and the camera's lateral pixel viewing angle (i.e., the camera's receiving angle, corresponding to the horizontal field of view); the line laser is projected forward so that the horizontal length of the laser line extracted by the camera is slightly wider than the robot's body width.
Each time the robot travels a preset distance in the direction from its current position toward the preset detection position, the horizontal distance between the preset detection position and the robot decreases; in some embodiments, the robot approaches the obstacle at the preset detection position. The coordinate offset, relative to the camera center, of the pixels representing the same reflection position of the line laser at the preset detection position in the captured image increases. That is, the closer to the robot the line laser lands on the ground ahead, the larger the vertical jump of the pixels representing the reflection position for the same travel distance, so more local information is captured and the obstacle detection accuracy is higher.
In summary, as the robot moves, when the distance between an obstacle and the camera (or the structured-light module as a whole, including the laser emitter) decreases, the vertical coordinate jump of the pixels reflecting the same reflection position of the line laser in the captured image increases, so the number of pixels characterizing the same local region of the obstacle increases; the brightness gradient values between pixels may decrease while the number of pixels per frame may remain unchanged, thereby improving obstacle detection accuracy. The pixels reflecting the reflection position of the line laser include the pixels matching the convex hull feature; as the robot approaches the obstacle, their positions jump vertically (the vertical coordinate changes), and the obstacle in the camera's captured image changes from a full outline to a partial outline, with at least the outline height covered vertically changing, so the number of pixels needed for the partial outline increases compared with before the approach, improving detection accuracy. Further, the greater the installation distance between the camera and the line laser emitter in the robot, for example the greater the mounting height of the line laser emitter relative to the camera, the larger the vertical coordinate change of the pixels representing the reflection position of the line laser on the obstacle's surface; the obstacle in the same captured frame changes from a full outline to a partial outline, so the number of pixels characterizing the same local region of the obstacle increases compared with before the mounting height was increased. Alternatively, for every test distance the robot travels, the number of such pixels in the current frame captured in real time increases compared with before the mounting height was increased, improving obstacle detection accuracy; compared with the prior art, the obstacles the robot can detect may be smaller in size.
It should be noted that the pixels matching the convex hull feature are configured to model part or all of the point information of the laser line segment formed when the line laser is projected onto the surface of the object under test. The robot sets the set of pixels of the image matching the convex hull feature as the pixel set composed of the convex hull center plus the pixels whose brightness values decrease from the convex hull center upward and downward along the current column; this pixel set forms a convex hull, whose center is the pixel of greatest brightness in the set. Within the set of pixels matching the convex hull feature, starting from the convex hull center, the brightness decreasing upward along the current column produces the first gradient value, and the brightness decreasing downward along the current column produces the second gradient value.
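The column-wise convex-hull test described above can be sketched as follows. The function name and the simple monotonic-decay check within a fixed search radius are illustrative; the patent's full criterion additionally compares gradients and brightness differences against a reference frame:

```python
def find_hull_center(column, radius=3, min_drop=0):
    """Index of the brightest pixel in a column of brightness values if
    brightness decays monotonically for up to `radius` pixels on both
    sides of it (the convex hull feature), else None."""
    c = max(range(len(column)), key=column.__getitem__)
    up = column[max(0, c - radius):c + 1][::-1]   # from the peak upward
    down = column[c:c + radius + 1]               # from the peak downward
    # each step must drop by more than `min_drop` in brightness
    decays = lambda s: all(s[i] > s[i + 1] + min_drop for i in range(len(s) - 1))
    return c if decays(up) and decays(down) else None
```

A column such as `[10, 20, 60, 255, 70, 30, 5]` has a valid hull center at the 255-valued pixel, whereas a flat, uniformly bright column (e.g., ambient-light interference) yields no hull center.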
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
The above embodiments merely illustrate the technical concept and features of the invention; their purpose is to enable those skilled in the art to understand and implement the invention, and they do not limit the scope of protection of the invention. Any equivalent transformation or modification made according to the spirit of the invention shall fall within the scope of protection of the invention.

Claims (21)

  1. A laser positioning method based on image information, characterized in that the laser positioning method is executed by a robot fitted with a structured-light module comprising a line laser emitter and a camera without an infrared filter, so that images captured by the camera retain both infrared and visible-light imaging information;
    the laser positioning method comprises:
    the robot controlling the camera to capture images of the light of the line laser emitted by the line laser emitter and reflected from the surface of an object under test, and detecting the bright/dark type of the images captured by the camera;
    when the robot detects that the current frame image captured by the camera is a bright frame image, the robot searches out the line laser position from the current frame image by executing an inter-frame tracking algorithm, and then sets the coordinates of the line laser position as the positioning coordinates of the emitted line laser in the current frame image;
    when the robot detects that the current frame image captured by the camera is a dark frame image, the robot extracts the line laser position from the current frame image by executing a brightness centroid algorithm, and then sets the coordinates of the line laser position as the positioning coordinates of the emitted line laser in the current frame image.
  2. The laser positioning method according to claim 1, characterized in that the method by which the robot searches out the line laser position from the current frame image by executing the inter-frame tracking algorithm comprises:
    Step 1: the robot traverses the current frame image column by column, obtains the initial pixel position in each corresponding column, and excludes, according to the pixels in the corresponding column that match a preset brightness distribution feature, the pixels of the current frame image where no line laser position exists, the line laser position representing the reflection position of the line laser on the surface of the object under test;
    Step 2: except for the columns whose pixels contain no line laser position, in the current column of the current frame image the robot sets the initial pixel position of the current column as the search center, then searches pixels upward along the current column within one search radius from the search center, and searches pixels downward along the current column within one search radius from the search center; then, according to the difference between the brightness values of the upward-searched pixels and those of the downward-searched pixels under the search states corresponding to two consecutively determined search centers, and the inter-frame matching relation formed by values of the same type in the same pixel column of the current frame image relative to a reference frame image, the robot screens out the convex hull center pixel in the current column to update the convex hull center pixel last determined in the current column of the current frame image; the reference frame image is configured as the bright frame image containing the line laser position most recently found by the robot before the current frame image was captured; each time the search center in the current column is updated, the convex hull center pixel set in the current column is updated as well;
    Step 3: according to the magnitude relation between the brightness values within the effective coverage area corresponding to the positioning coordinates of the emitted line laser in the previous dark frame image and the brightness values of the convex hull center pixels in the current frame image, interference points are removed from the screened-out convex hull center pixels; after the robot has traversed the convex hull center pixels of all pixel columns of the current frame image and removed all interference points, the coordinates of the remaining convex hull center pixels are set as the positioning coordinates of the emitted line laser in the current frame image, it is determined that the robot has searched out in the current frame image the line laser positions determined in the corresponding columns, which connect into the laser line segment formed by the emitted line laser on the surface of the object under test, and it is determined that the robot has searched out the line laser position from the current frame image by executing the inter-frame tracking algorithm;
    wherein the line laser position determined in a column is the position of the convex hull center pixel last updated in that column after the robot has traversed all pixels of the column, and the coordinates of a line laser position are expressed by the corresponding positioning coordinates.
  3. The laser positioning method according to claim 2, characterized in that in Step 2, each time a convex hull center pixel is screened out for a search center, the adjacent pixel searched upward or downward from that search center along the current column is updated as the search center, and Step 2 is executed again to obtain a new convex hull center pixel, which is updated as the convex hull center pixel; each search center lies within the coverage of one search radius of the initial pixel position, the search radius being set to a first preset pixel distance; the screened-out convex hull center pixel is the convex hull center pixel last updated in each column of the current frame image in which a convex hull center pixel exists; the screened-out convex hull center pixel is, among those columns, the convex hull center pixel deviating least from the origin of the coordinate system of the current frame image;
    wherein the robot sets the set of pixels matching the convex hull feature in the current column of the current frame image as the pixel set composed of the convex hull center and the pixels whose brightness values decrease from the convex hull center upward and downward along the current column, forming a convex hull, the convex hull center being the pixel of greatest brightness in that pixel set, and sets the convex hull center pixel as the pixel belonging to the convex hull center; within the set of pixels matching the convex hull feature, in the upward direction from the convex hull center along the same column, pixel brightness decreases upward along the current column, producing a first gradient value between the brightness values of adjacent pixels, and in the downward direction from the convex hull center along the same column, pixel brightness decreases downward along the current column, producing a second gradient value between the brightness values of adjacent pixels, so that the convex hull center belongs to the search center.
  4. The laser positioning method according to claim 3, characterized in that in Step 2, the method of screening out the convex hull center pixel according to the difference between the brightness values of the upward-searched pixels and those of the downward-searched pixels under the search states corresponding to two consecutively determined search centers, and the inter-frame matching relation formed by values of the same type in the same pixel column of the current frame image relative to the reference frame image, comprises:
    in the current column of the current frame image, comparing the brightness value of the search center with that of the convex hull center pixel last found in the same column; the convex hull center pixel last found in the same column is the one screened out in the same column of the current frame image for the previously determined search center, the previously determined search center being a pixel adjacent, downward or upward along the current column, to the currently determined search center, and the column order of the same pixel column of the current frame image being equal to that of the current column;
    if the brightness value of the currently determined search center is greater than that of the convex hull center pixel last found in the same column, then in the current column of the current frame image, pixels are searched upward from the search center and the pixels whose brightness decreases by the first gradient value are counted until an upward counting stop condition is met, whereupon the number of pixels whose brightness decreases by the first gradient value is marked as the upward gradient descent count and upward searching stops pending the next update of the search center; and pixels are searched downward from the search center and the pixels whose brightness decreases by the second gradient value are counted until a downward counting stop condition is met, whereupon the number of pixels whose brightness decreases by the second gradient value is marked as the downward gradient descent count and downward searching stops pending the next update of the search center;
    when the robot judges that the upward gradient descent count in the current column of the current frame image is greater than or equal to the upward gradient descent count counted for the convex hull center pixel last found in the same column, and/or that the downward gradient descent count in the current column is greater than or equal to the downward gradient descent count counted for the convex hull center pixel last found in the same column, then, among the pixels traversed in the current column of the current frame image, if the robot detects that neither the first gradient value nor the second gradient value equals a first preset gradient parameter; that the absolute value of the difference between the first and second gradient values is smaller than a second preset gradient parameter; that the absolute value of the difference between the brightness value of the pixel of least brightness found searching upward along the current column and that of the pixel at the currently determined search center is greater than the absolute value of the same type of brightness difference formed by upward searching in the same pixel column of the reference frame image; and that the absolute value of the difference between the brightness value of the pixel of least brightness found searching downward along the current column and that of the pixel at the currently determined search center is greater than the absolute value of the same type of brightness difference formed by downward searching in the same pixel column of the reference frame image, then the robot marks the currently determined search center as a convex hull center pixel; wherein the first preset gradient parameter is smaller than the second preset gradient parameter.
  5. The laser positioning method according to claim 4, characterized in that the absolute value of the same type of brightness difference formed by upward searching in the same pixel column of the reference frame image is, in the reference frame image, starting from the search center finally determined in the column of the same column order as the current column, the absolute value of the difference between the brightness value of the pixel of least brightness found searching upward along that column and the brightness value of the pixel at the finally determined search center of that column, the distance between the upward-searched pixel of least brightness and the finally determined search center of that column being less than or equal to the search radius;
    the absolute value of the same type of brightness difference formed by downward searching in the same pixel column of the reference frame image is, in the reference frame image, starting from the search center finally determined in the column of the same column order as the current column, the absolute value of the difference between the brightness value of the pixel of least brightness found searching downward along that column and the brightness value of the pixel at the finally determined search center of that column, the distance between the downward-searched pixel of least brightness and the finally determined search center of that column being less than or equal to the search radius.
  6. The laser positioning method according to claim 4, characterized in that, for a currently determined search center, Step 2 further comprises:
    if the brightness value of the pixel at the search center is greater than that of the convex hull center pixel last found in the same column, then in the current column of the current frame image, pixels are searched upward from the search center and pixels are searched downward from the search center;
    if, while searching upward from the search center, the robot detects that a pixel's brightness does not decrease by the first gradient value, it increments a preset upward gradient anomaly count once, then judges whether it has finished searching the pixels covered by the search radius upward along the current column of the current frame image; if so, the robot stops searching pixels upward along the current column and determines that the upward counting stop condition is reached; otherwise, when the upward gradient anomaly count is greater than a first preset error count, it stops searching pixels upward along the current column and determines that the upward counting stop condition is met; and if, while searching downward from the search center, the robot detects that a pixel's brightness does not decrease by the second gradient value, it increments a preset downward gradient anomaly count once, then judges whether it has finished searching the pixels covered by the search radius downward along the current column of the current frame image; if so, the robot stops searching pixels downward along the current column and determines that the downward counting stop condition is reached; otherwise, when the downward gradient anomaly count is greater than a second preset error count, it stops searching pixels downward along the current column and determines that the downward counting stop condition is met;
    or, while searching upward from the search center, the robot counts the adjacently positioned pixels of brightness value 255 upward along the current column of the current frame image and marks their number as the upward overexposure count; when the robot detects that the upward overexposure count is greater than a third preset error count and/or it has counted all pixels covered by the search radius upward along the current column, the robot stops searching pixels upward along the current column and determines that the upward counting stop condition is met; and, while searching downward from the search center, the robot counts the adjacently positioned pixels of brightness value 255 downward along the current column of the current frame image and marks their number as the downward overexposure count; when the robot detects that the downward overexposure count is greater than a fourth preset error count and/or it has counted all pixels covered by the search radius downward along the current column, the robot stops searching pixels downward along the current column and determines that the downward counting stop condition is met.
  7. The laser positioning method according to claim 3, characterized in that in Step 3, the method of removing interference points from the screened-out convex hull center pixels according to the magnitude relation between the brightness values within the effective coverage area corresponding to the positioning coordinates of the emitted line laser in the previous dark frame image and the brightness values of the convex hull center pixels in the current frame image comprises:
    when the robot has traversed the pixels of all columns of the current frame image, obtained the latest convex hull center pixel in each column, and stored the positioning coordinates of the emitted line laser in the previous dark frame image, then, for each convex hull center pixel in the current frame image, within the circular region centered at the position of the positioning coordinates of the emitted line laser in the previous dark frame image and of radius equal to a detection pixel distance, if the robot judges that at least one pixel in that circular region has a brightness value greater, by a preset ambient-light brightness threshold, than the brightness value of the convex hull center pixel of the current frame image having the same coordinates as the circle center, the robot determines that that convex hull center pixel of the current frame image is an interference point, finds no line laser position at that interference point, and removes the interference point from the current frame image.
  8. The laser positioning method according to claim 2, characterized in that in Step 1, the method of excluding the pixels of the current frame image where no line laser position exists according to the pixels in the corresponding column matching the preset brightness distribution feature comprises:
    if the brightness value of the initial pixel position in the current column of the current frame image is greater, by a first preset brightness threshold, than the brightness value of the pixel at the line laser position found in the same column in the previous round, or greater by a second preset brightness threshold, then pixels are searched downward along the current column of the current frame image, starting from the position one reference pixel distance above the initial pixel position of the current column; if the brightness value of a currently searched pixel is detected to be greater, by the first preset brightness threshold, than the brightness value of the pixel at the line laser position found in the same column in the previous round, or detected to equal 255, an error position count is incremented once and the currently searched pixel is determined to match the preset brightness distribution feature; when the robot detects that the error position count is greater than a reference pixel count threshold, it determines that no line laser position exists in the current column of the current frame image, sets the pixels of the current column as pixels where no line laser position exists, excludes the pixels of the current column from the pixel search range of Step 2, and simultaneously determines that the light intensity of the robot's environment is greater than a first preset light intensity threshold;
    wherein the reference pixel distance is expressed as a number of pixels, so that the reference pixel count threshold equals the reference pixel distance;
    wherein the line laser position found in the same column in the previous round is the position of the convex hull center pixel finally determined in the same pixel column of the reference frame image.
  9. The laser positioning method according to claim 2, characterized in that in Step 1, the method of excluding the pixels of the current frame image where no line laser position exists according to the pixels in the corresponding column matching the preset brightness distribution feature comprises:
    taking the initial pixel position in the current column of the current frame image as the ring center, marking, in the current column, the pixels covered by the annular region below the ring center with inner radius equal to a first positioning radius and outer radius equal to a second positioning radius as first pixels under test, and then computing the average of the brightness values of the first pixels under test; if that average is greater than the brightness value of the pixel at the line laser position found in the same column in the previous round, the first pixels under test are determined to match the preset brightness distribution feature and it is determined that no line laser position exists in the current column of the current frame image, whereupon the pixels of the current column are set as pixels where no line laser position exists and excluded from the pixel search range of Step 2, and it is simultaneously determined that the light intensity of the robot's environment is greater than the first preset light intensity threshold; wherein the first positioning radius is smaller than the second positioning radius, and the line laser position found in the same column in the previous round is the position of the convex hull center pixel finally determined in the same pixel column of the reference frame image;
    or, taking the initial pixel position in the current column of the current frame image as the ring center, marking, in the current column, the pixels covered by the annular region above the ring center with inner radius equal to the first positioning radius and outer radius equal to the second positioning radius as second pixels under test, and then computing the average of the brightness values of the second pixels under test; if that average is greater than the brightness value of the pixel at the line laser position found in the same column in the previous round, the second pixels under test are determined to match the preset brightness distribution feature and it is determined that no line laser position exists in the current column of the current frame image, whereupon the pixels of the current column are set as pixels where no line laser position exists and excluded from the pixel search range of Step 2, and it is simultaneously determined that the light intensity of the robot's environment is greater than the first preset light intensity threshold; wherein the first positioning radius is smaller than the second positioning radius, and the line laser position found in the same column in the previous round is the position of the convex hull center pixel finally determined in the same pixel column of the reference frame image.
  10. The laser positioning method according to claim 3, characterized in that the initial pixel position is the position of an original pixel formed in the image captured by the camera after the line laser emitted by the line laser emitter, with no obstacle in front of the robot, is reflected from the robot's travel plane back into the camera's field of view;
    each original pixel corresponds to one reflection position on the robot's travel plane and represents, in each column of the same frame image, the search start point for searching the line laser position;
    the reference frame image is configured as the bright frame image containing the line laser position most recently found by the robot before the current frame image was captured, wherein the line laser position most recently found by the robot derives from the convex hull center pixel set in the corresponding column of the reference frame image.
  11. The laser positioning method according to claim 10, characterized in that in Step 1, if no initial pixel position can be obtained in the current column of the current frame image, the line laser position found in the same column in the previous round is updated as the initial pixel position, a second preset pixel distance is updated as the search radius, and Step 2 is executed repeatedly to search out the convex hull center pixel of the corresponding column; wherein the line laser position found in the same column in the previous round is the position of the convex hull center pixel finally determined in the same pixel column of the reference frame image, or the initial pixel position in the same pixel column of the first bright frame image;
    if, while repeatedly executing Step 2, the robot never searches out a convex hull center pixel in the same column, it determines that no line laser position can be found in that column.
  12. The laser positioning method according to claim 1, characterized in that the method by which the robot extracts the line laser position from the current frame image by executing the brightness centroid algorithm comprises:
    the robot traversing the current frame image column by column;
    the robot searching the pixels of the current column in turn, and screening out legal pixels from the current column of the current frame image according to the magnitude relation between the brightness value of the currently searched pixel in the current column of the current frame image and the brightness value of the pixel at the corresponding position of the previous bright frame image, and according to the brightness value of the pixel at the corresponding position of the previous bright frame image;
    then, in the current column of the current frame image, connecting at least two adjacently positioned legal pixels into positioning segments, and, after all adjacently positioned legal pixels have been connected, selecting the positioning segment of greatest length;
    if the length of the selected longest positioning segment is greater than a preset continuous-length threshold, setting the center of the selected longest positioning segment as the line laser position.
  13. The laser positioning method according to claim 12, characterized in that the method of screening out legal pixels from the current column of the current frame image according to the magnitude relation between the brightness value of the currently searched pixel in the current frame image and the brightness value of the pixel at the corresponding position of the previous bright frame image, and according to the brightness value of the pixel at the corresponding position of the previous bright frame image, comprises:
    subtracting the brightness value of the pixel of the previous bright frame image at the same row-column position from the brightness value of the currently searched pixel of the current frame image to obtain a dark-frame relative difference; when it is detected that the negative of the dark-frame relative difference is greater than a preset brightness difference threshold and that the brightness value of the pixel of the previous bright frame image at the same row-column position is greater than a reference bright-frame brightness threshold, setting the currently searched pixel of the current frame image as a legal pixel.
  14. The laser positioning method according to any one of claims 1 to 13, characterized in that the image sequence formed by the camera capturing the light of the line laser, emitted by the line laser emitter and reflected from the surface of the object under test, is configured so that bright frame images and dark frame images are produced alternately in turn, such that: when the current frame image captured by the camera is a bright frame image, the next frame image captured is a dark frame image; within the time interval between the camera capturing the current bright frame image and the next bright frame image, the camera captures the current dark frame image; and after capturing the next bright frame image, the camera captures the next dark frame image;
    wherein, during execution of the laser positioning method, the first frame image of the image sequence is a bright frame image.
  15. The laser positioning method according to any one of claims 1 to 13, characterized in that the laser positioning method further comprises:
    when the robot detects that the light intensity of its environment is greater than a first preset light intensity threshold, the robot lowers the camera's gain so that the image captured by the camera of the light of the line laser reflected from the surface of the object under test is not overexposed;
    when the robot detects that the light intensity of its environment is greater than the first preset light intensity threshold, the robot shortens the camera's exposure time so that the image captured by the camera of the light of the line laser reflected from the surface of the object under test is not overexposed;
    when the robot detects that the light intensity of its environment is less than a second preset light intensity threshold, the robot raises the camera's gain so that the image captured by the camera of the light of the line laser reflected from the surface of the object under test is not underexposed;
    when the robot detects that the light intensity of its environment is less than the second preset light intensity threshold, the robot lengthens the camera's exposure time so that the image captured by the camera of the light of the line laser reflected from the surface of the object under test is not underexposed.
  16. The laser positioning method according to any one of claims 1 to 13, characterized in that when the robot detects that the camera's current exposure value is greater than a first preset exposure threshold, the power gear used by the line laser emitter to emit the line laser is raised, so that the intensity of the emitted line laser is configured to equal the product of a smoothing coefficient and the current exposure value;
    when the robot detects that the camera's current exposure value is less than a second preset exposure threshold, the power gear used by the line laser emitter to emit the line laser is lowered, so that the intensity of the emitted line laser is configured to equal the product of the smoothing coefficient and the current exposure value;
    wherein the first preset exposure threshold is greater than the second preset exposure threshold; the camera's current exposure value reflects the camera's exposure amount under the current ambient brightness; and the smoothing coefficient smooths the step size of exposure value adjustment, facilitating the robot's search for the line laser position in the current frame image.
  17. A robot, characterized in that the robot's body is fitted with a structured-light module comprising a line laser emitter and a camera without an infrared filter, so that images captured by the camera retain both infrared and visible-light imaging information;
    a controller is provided inside the robot and electrically connected to the structured-light module, the controller being configured to execute the laser positioning method according to any one of claims 1 to 16 to obtain the positioning coordinates of the line laser emitted by the line laser emitter in the current frame image;
    wherein the line laser emitted by the line laser emitter lies within the camera's field of view.
  18. The robot according to claim 17, characterized in that the camera's horizontal field of view is configured to receive, in front of the robot, the light of the line laser reflected back across the width of the robot body;
    and/or the mounting height of the structured-light module on the robot body is configured to be positively correlated with the height of the obstacle to be detected, so that the obstacle to be detected occupies the camera's effective field of view.
  19. The robot according to claim 18, characterized in that the coverage of the camera's upward viewing angle is configured to cover the bottom of the plane formed by the line laser emitted by the line laser emitter, and the coverage of the camera's downward viewing angle is configured to cover the light of the line laser reflected from the surfaces of obstacles in front of the robot body;
    and/or the heading angle formed by the deflection of the camera relative to the robot's central axis is kept within a preset error angle range, so that the camera's optical axis is parallel to the robot's direction of travel and the camera receives, in front of the robot, the light of the line laser reflected back across the body width;
    and/or the roll angle produced by the camera rotating about its optical axis is kept within a preset error angle range, so that the camera receives, in front of the robot, the light of the line laser reflected back across the body width.
  20. The robot according to claim 17, characterized in that the greater the installation distance between the camera and the line laser module, the larger the coordinate offset, relative to the camera center, of the pixels representing the reflection position of the line laser on an obstacle's surface in the images captured by the camera.
  21. The robot according to claim 17, characterized in that the emission angle of the line laser emitter and the receiving angle of the camera are set so that: the line laser emitter emits the line laser to a preset detection position in front of the body, where the line laser is reflected back to the camera, the length of the laser line segment formed by the line laser at the preset detection position being greater than the robot's body width;
    each time the robot travels a preset distance in the direction from its current position toward the preset detection position, the horizontal distance between the preset detection position and the robot decreases, and the coordinate offset, relative to the camera center, of the pixels representing the same reflection position of the line laser at the preset detection position in the images captured by the camera increases.
PCT/CN2023/112380 2022-09-15 2023-08-10 Image-information-based laser positioning method and robot WO2024055788A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211119924.4A CN115619860A (zh) 2022-09-15 2022-09-15 Image-information-based laser positioning method and robot
CN202211119924.4 2022-09-15

Publications (1)

Publication Number Publication Date
WO2024055788A1 true WO2024055788A1 (zh) 2024-03-21

Family

ID=84858496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/112380 WO2024055788A1 (zh) 2022-09-15 2023-08-10 Image-information-based laser positioning method and robot

Country Status (2)

Country Link
CN (1) CN115619860A (zh)
WO (1) WO2024055788A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619860A (zh) * 2022-09-15 2023-01-17 Zhuhai Amicro Semiconductor Co., Ltd. Image-information-based laser positioning method and robot
CN117455940B (zh) * 2023-12-25 2024-02-27 Sichuan Hantang Cloud Distributed Storage Technology Co., Ltd. Cloud-monitoring-based customer behavior detection method, system, device, and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008014689A (ja) * 2006-07-04 2008-01-24 Matsushita Electric Ind Co Ltd Fluorescence reading device and position correction chip
CN106091984A (zh) * 2016-06-06 2016-11-09 PLA Information Engineering University Line-laser-based three-dimensional point cloud data acquisition method
CN107203973A (zh) * 2016-09-18 2017-09-26 Jiangsu University of Science and Technology Sub-pixel positioning method for the line laser center in a three-dimensional laser scanning system
JP2019078682A (ja) * 2017-10-26 2019-05-23 NEC Corporation Laser ranging device, laser ranging method, and position adjustment program
CN110631554A (zh) * 2018-06-22 2019-12-31 Beijing Jingdong Shangke Information Technology Co., Ltd. Robot pose determination method and apparatus, robot, and readable storage medium
CN111640156A (zh) * 2020-05-26 2020-09-08 China University of Geosciences (Wuhan) Three-dimensional reconstruction method, device, and storage device for outdoor weak-texture targets
CN111798519A (zh) * 2020-07-21 2020-10-20 Guangdong Bozhilin Robot Co., Ltd. Laser stripe center extraction method and apparatus, electronic device, and storage medium
CN113324478A (zh) * 2021-06-11 2021-08-31 Chongqing University of Technology Line structured light center extraction method and three-dimensional measurement method for forgings
CN113554697A (zh) * 2020-04-23 2021-10-26 Suzhou North America International High School Line-laser-based precise measurement method for cabin section contours
CN114019533A (zh) * 2020-07-15 2022-02-08 PixArt Imaging Inc. Mobile robot
CN115619860A (zh) * 2022-09-15 2023-01-17 Zhuhai Amicro Semiconductor Co., Ltd. Image-information-based laser positioning method and robot


Also Published As

Publication number Publication date
CN115619860A (zh) 2023-01-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23864522

Country of ref document: EP

Kind code of ref document: A1