WO2021238000A1 - Robot edge-following working method and system, robot, and readable storage medium - Google Patents

Robot edge-following working method and system, robot, and readable storage medium

Info

Publication number
WO2021238000A1
WO2021238000A1 (PCT/CN2020/118339; CN2020118339W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
working
edge
working area
image
Prior art date
Application number
PCT/CN2020/118339
Other languages
English (en)
French (fr)
Inventor
朱绍明
任雪
Original Assignee
苏州科瓴精密机械科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州科瓴精密机械科技有限公司
Publication of WO2021238000A1 publication Critical patent/WO2021238000A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • The present invention relates to the field of intelligent control, and in particular to a robot edge-following working method and system, a robot, and a readable storage medium.
  • Low repetition rate and high coverage rate are the goals pursued by traversal-type mobile robots such as vacuuming, mowing, and swimming-pool-cleaning robots.
  • Taking an intelligent lawn-mowing robot as an example, the robot uses the lawn enclosed by an electronic boundary as its working area, and is guided by the electronic boundary line when walking along the edge and executing the mowing process.
  • In the prior art, the change in the magnetic field of a coil is usually used to judge whether the robot is on the boundary line; furthermore, markers are placed at the inner and outer inflection points, for example, a multi-level coil is wound at an inflection point to increase the magnetic field strength, and the robot turns according to preset rules when it senses the increased strength.
  • However, this scheme relies on magnetic-field-strength sensing, so its accuracy is low; moreover, identifying the inflection points requires additional coils and therefore additional cost.
  • The purpose of the present invention is to provide a robot edge-following working method and system, a robot, and a readable storage medium.
  • An embodiment of the present invention provides a robot edge-following working method.
  • The method includes: acquiring an original image; converting the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value; obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area; and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  • The "judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path" includes: obtaining the width W[g] of the first rectangular contour, the ordinate Y[g] of its upper-left corner, the width W[b] of the second rectangular contour, the ordinate Y[b] of its upper-left corner, and the width W of the image; executing step P1 if W[g] = W and step P2 if W[g] < W; P1: if W[b] < W and Y[b] equals a preset first ordinate value, the robot is currently at a first turning point; P2: if Y[g] equals a preset second ordinate value, the robot is currently at a second turning point.
  • The method further includes: if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value.
  • The method further includes: when the robot is at the first turning point, the robot executes one of the clockwise or counterclockwise turning logics, and when the robot is at the second turning point, the robot executes the other of the clockwise or counterclockwise turning logics.
  • An embodiment of the present invention provides a robot edge-following working system.
  • The system includes: an image acquisition module for acquiring an original image;
  • an image conversion module for converting the original image into a binarized image, the binarized image including a working area with a first pixel value and a non-working area with a second pixel value;
  • a parsing module configured to obtain, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, and to judge, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  • Another embodiment of the present invention provides a robot edge-following working method.
  • The method includes: acquiring an original image; converting the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value; determining whether the robot is on an edge-following working path; if so, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area; and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  • The "determining whether the robot is on an edge-following working path" includes: obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area; obtaining the diagonal of the first rectangular contour and denoting its length maxLine[g]; judging whether maxLine[g] is less than a preset first value; if so, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value; if not, the robot is on the edge-following working path.
  • Alternatively, the "determining whether the robot is on an edge-following working path" includes: obtaining the area of the working area on the basis of the binarized image and denoting it maxArea[g]; judging whether maxArea[g] is less than a preset second value; if so, the robot is on the edge-following working path; if not, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
  • An embodiment of the present invention provides a robot including a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of the robot edge-following working method described above.
  • An embodiment of the present invention provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the robot edge-following working method described above are realized.
  • The robot edge-following working method and system, robot, and readable storage medium of the present invention distinguish the working area from the non-working area according to images captured by the camera on the robot, and judge whether the robot has reached a turning point according to the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, so that a camera-equipped robot can perform edge-following mowing by visually identifying the boundary.
  • Fig. 1 is a schematic structural diagram of a specific example in the background art of the present invention.
  • FIG. 2 is a schematic structural diagram of a specific example of the lawn mower robot system of the present invention.
  • FIG. 3 is a schematic flowchart of a method for working along the edge of a robot provided by the first embodiment of the present invention
  • 4A, 4B, 4C, 4D, 4E, 4F, and 4G are schematic diagrams obtained at various steps in the same specific example of the present invention.
  • Figures 5 and 6 are respectively a schematic flow diagram of a specific implementation method of one of the steps in Figure 3;
  • FIG. 7 is a schematic flowchart of a method for working along the edge of a robot provided by another preferred embodiment of the method shown in FIG. 3;
  • FIG. 8 is a schematic flowchart of a specific implementation method of one of the steps in FIG. 7;
  • Fig. 9 is a schematic diagram of a second specific example of the present invention.
  • Figure 10 is a schematic diagram of a third specific example of the present invention.
  • FIG. 11 is a schematic flowchart of a method for working along the edge of a robot provided by the second embodiment of the present invention.
  • Fig. 12 is a schematic diagram of modules of a robot edge work system provided by an embodiment of the present invention.
  • the robot system of the present invention can be a mowing robot system, a sweeping robot system, a snow sweeper system, a leaf suction system, a golf course ball picker system, etc. Each system can automatically walk in the work area and perform corresponding tasks.
  • the robot system is taken as an example of a lawn mowing robot system.
  • the working area may be a lawn.
  • the lawn mower robot system of the present invention includes: a lawn mower robot (RM), a charging station 20 and a boundary 30.
  • The lawn mower robot includes a main body 10 and a walking unit, an image acquisition unit, and a control unit provided on the main body 10.
  • The walking unit includes a driving wheel 111, a driven wheel 113, and a motor for driving the driving wheel 111; the motor can be a brushless motor with a reduction gearbox and a Hall sensor; after the motor starts, it drives the driving wheel 111 through the reduction gearbox, and by controlling the speed and direction of the two wheels the robot can move straight forward and backward, turn in place, and travel in arcs; the driven wheel 113 can be a universal wheel, usually one or two of them, whose main role is to provide supporting balance.
  • The image acquisition unit captures the scene within its field of view over a certain range; in this embodiment it is a camera 12.
  • The camera 12 is installed on the upper part of the main body 10 at an angle to the horizontal and can capture the scene within a certain range around the lawn mower robot; it usually captures the scene within a certain range in front of the robot.
  • The control unit is a main controller 13 that performs image processing, for example an MCU or DSP.
  • The lawn mower robot further includes a working mechanism for performing the work and a power supply 14; in this embodiment the working mechanism is a mowing blade. The robot also carries various sensors for sensing its walking state, for example tilt, lift-off, collision, geomagnetic, and gyroscope sensors, which are not detailed here.
  • The lawn mower robot system is also provided with a charging station 20 for supplying power so that the lawn mower robot can charge automatically.
  • The boundary 30 may be a virtual boundary, that is, the lawn mower robot performs edge-following work using a vision sensor.
  • Alternatively, the boundary 30 may be a physical boundary line, in which case the lawn mower robot performs edge-following work using the vision sensor in combination with the boundary line.
  • The inside of the boundary 30 forms a working area A and the outside forms a non-working area B; several turning points are also formed on the boundary. During operation, the walking and working rule of the present invention is to walk along the edge, so the robot must turn when it encounters a turning point and rotate when it finds itself inside working area A or non-working area B.
  • In this specific example, turning refers to forward movement with the left and right wheels at different speeds, i.e. with a certain arc.
  • Rotation includes rotating/turning in place; in both cases the purpose is to return the robot to the boundary line and drive it to walk and work along the edge.
  • Obstacles 50 that the lawn mower robot must be prevented from entering, such as pools and flower beds, are usually present in the working area and are not described further here.
  • The robot edge-following working method provided by the first embodiment of the present invention includes the following steps.
  • The camera installed on the lawn mower robot captures the scene in front of the robot in real time to form the original image; the scene is the ground in the robot's direction of travel. After the main controller receives the original image, it analyzes it; the original image can thus be used to determine whether the robot is on the edge-following working path and, further, where on that path it is, as described in detail below.
  • The original image is a color image in RGB format. Note that, at different moments, the original images captured by the camera have the same size and are rectangular. To simplify calculation, a Cartesian coordinate system is established with the upper-left corner of the original image as the coordinate origin (0,0), the upper boundary of the image extending to the right as the positive X axis, and the left boundary extending downward as the positive Y axis; in this way, once the original image is determined, the coordinates of every element on it are determined.
  • This example is only for convenience of description.
  • In practice, once the original image is determined, the coordinate origin and the X and Y axes can be set up as needed.
  • The coordinate origin and the X and Y axes can also be established on the binarized image, in correspondence with the original image, and are not limited to being established on the original image.
  • For step S2, in implementations of the present invention there are multiple ways to convert the original image in RGB format into a binarized image.
  • The binarized image is an image in which each pixel has only one of two gray values, 0 or 255; binarization renders the entire original image with a clear black-and-white effect. In the specific embodiments of the present invention the working area and the non-working area are rendered as white and black respectively, or vice versa.
  • FIG. 4B is the HSV image formed by converting FIG. 4A, which is also a color image; FIG. 4C is the binarized image formed by converting FIG. 4B.
  • Step S2 specifically includes: S21, converting the original image composed in the RGB color space into an HSV image composed in the HSV color space, so that the working area and the non-working area can be distinguished by more obvious color features; S22, converting the HSV image into a binarized image by threshold matching. In this specific example, threshold ranges are preset for the H, S, and V channels, namely [valueHLow, valueHHigh], [valueSLow, valueSHigh], and [valueVLow, valueVHigh]; if the H, S, and V values of any pixel in the HSV image are within the corresponding preset threshold ranges, its gray value is adjusted to one of 0 or 255, and the gray value of a pixel whose H, S, and V values are all outside the preset ranges is adjusted to the other of 0 or 255.
  • If the working area currently needs to be analyzed, its gray value is adjusted to 0 and the gray value of the non-working area is adjusted to 255; if the non-working area needs to be analyzed, the gray value of the working area is adjusted to 255 and that of the non-working area to 0. In this way the main controller only needs to compute and detect one color (white or black), which improves analysis efficiency. Note that adjusting the gray values only requires inverting the preset threshold ranges described above, which is not described further here.
  • To improve calculation accuracy, as shown in FIGS. 4D and 4E, FIG. 4D is the image formed after smoothing and filtering FIG. 4C, and FIG. 4E is the image formed after dilation and erosion of FIG. 4D.
  • Between step S2 and step S3, the method further includes performing smoothing filtering and/or dilation-and-erosion processing on the binarized image to remove noise from it; the smoothing filtering may be, for example, median filtering, mean filtering, or Gaussian filtering, and dilation and erosion are morphological operations.
  • Including step S3 in the edge-following working method of the present invention is only one embodiment; in other embodiments, if the robot is already on the edge-following working path, this step is not required.
  • Step S3 includes: S31, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area; obtaining the diagonal of the first rectangular contour and denoting its length maxLine[g]; judging whether maxLine[g] is less than the preset first value; if so, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value; if not, the robot is on the edge-following working path.
  • The preset first value is a length value preset by the system, and its size can be adjusted as required; when maxLine[g] is less than the preset first value, the robot is in the non-working area, so its position needs to be adjusted to return it to the edge-following working path; the adjustment can be to drive the robot to rotate, search again for the edge-following working path, and then return to it.
  • While the lawn mower robot moves forward, the camera mounted on it captures images of the environment in front of it, so the original image obtained is a rectangular area; furthermore, at least two adjacent sides of the first rectangular contour enclosing the working area coincide with borders of the original image, and the edge-following working path means that the robot walks and works along the boundary line.
  • the specific example of the present invention is shown in FIG. 4E.
  • In this example, the preset first value value1 is 22.3 and maxLine[g] is computed to be 156.2, i.e. maxLine[g] > value1; the robot is therefore determined to be on the edge-following working path.
  • Step S3 further includes: S32, obtaining, on the basis of the binarized image, the area of the working area and denoting it maxArea[g]; judging whether maxArea[g] is less than the preset second value; if so, the robot is on the edge-following working path; if not, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
  • The preset second value is an area value preset by the system, and its size can be adjusted as needed; when maxArea[g] is not less than the preset second value, the robot is inside the working area, far from the boundary line, so its position needs to be adjusted to return it to the edge-following working path; the adjustment can be to drive the robot to rotate, search again for the edge-following working path, and then return to it.
  • the specific example of the present invention is shown in FIG. 4E.
  • In this example, the preset second value value2 is 19150 and maxArea[g] is computed to be 10655, i.e. maxArea[g] < value2; the robot is therefore determined to be on the edge-following working path.
  • After step S3, the method further includes: S4, if the robot is on the edge-following working path, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  • the methods for obtaining the first rectangular contour and the second rectangular contour are the same as those described in the foregoing content, and will not be further described here.
  • For step P1, if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value.
  • the preset first ordinate value and the preset second ordinate value are both the ordinate values of the pixel coordinates preset by the system, and their sizes can be specifically adjusted as required.
  • The width W of the original image is a fixed value and the first rectangular contour is part of the original image; therefore, when W[g] is compared with W, the case W[g] > W does not occur.
  • For step P1, when the width of the first rectangular contour equals the width of the original image and the width of the second rectangular contour is smaller than the width of the original image, then, in the coordinate system established on the original image as described above, the closer the robot gets to the turning point while walking along the working path, the larger the relative ordinate value Y[b] of the upper-left corner of the second rectangular contour becomes.
  • While Y[b] is not equal to the preset first ordinate value, the robot keeps walking along the edge-following working path; when Y[b] equals the preset first ordinate value, the robot has reached a turning point of the edge-following working path.
  • At that moment the robot needs to be driven to turn according to the preset rules and kept on the edge-following working path.
  • For step P2, when the width of the first rectangular contour is smaller than the width of the original image, the robot walks along the working path, and the closer it gets to the turning point, the larger the relative ordinate value Y[g] of the upper-left corner of the first rectangular contour becomes.
  • While Y[g] is not equal to the preset second ordinate value, the robot keeps walking along the edge-following working path; when Y[g] equals the preset second ordinate value, the robot has reached a turning point of the edge-following working path.
  • At that moment the robot needs to be driven to turn according to the preset rules and kept on the edge-following working path.
  • the method further includes: when the robot is at the first turning point, the robot executes one of clockwise or counterclockwise turning logic, and when the robot is at the second turning point, The robot executes the other of clockwise or counterclockwise turning logic.
  • The binarized images on the left and right of Figure 9 are the same image.
  • The difference is that the rectangle-framed part differs: the framed part on the left is the first rectangular contour and the framed part on the right is the second rectangular contour. In addition, in the binarized image on the left the gray value of the working area is set to 0 and that of the non-working area to 255, while in the binarized image on the right the gray value of the working area is set to 255 and that of the non-working area to 0.
  • The binarized images on the left and right of Figure 10 are the same image.
  • The difference is that the framed part on the left is the first rectangular contour; in addition, in the binarized image on the left the gray value of the working area is set to 0 and that of the non-working area to 255, while in the binarized image on the right the gray value of the working area is set to 255 and that of the non-working area to 0.
  • In the second embodiment of the present invention, the robot edge-following working method can also be: acquiring an original image;
  • converting the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value;
  • obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area; and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  • Specifically, the lawn mower robot may first search for the boundary by other means, such as wireless communication, and, once on the edge-following working path, judge from the first rectangular contour and the second rectangular contour whether it has reached a turning point of the edge-following working path; it is not limited to always performing the edge-following work visually.
  • The specific way of judging whether the robot has reached a turning point of the edge-following working path is the same as in the first embodiment of the present invention and is not repeated here.
  • Before step S1, the method further includes: after the robot is initialized, the binarized image obtained from its first analysis is divided equally into a left image and a right image, and the numbers of pixels in the left image and the right image are counted separately.
  • It is then judged whether the number of pixels in the left image is greater than the number in the right image; if so, the robot is driven to work in a counterclockwise sequence; if not, the robot is driven to work in a clockwise sequence.
  • the edge detection method can be used to detect the specific edge working path based on the binarized image.
  • The edge detection method used in a specific example of the present invention is the Canny algorithm; the detection principle is: the offset x between the edge line and the center line is computed, and the robot's walking path is adjusted according to x; the center line is the vertical line at half the width of the binarized image, and the edge line is the intersection of the working area and the non-working area; the statistics are not limited to the mean position of the edge line in the x-axis direction.
  • Hough line detection can be used to obtain the straight line equation of the edge line, and the walking direction and walking path of the robot can be adjusted by the angle between the edge line and the center line, which will not be repeated here.
  • a robot including a memory and a processor
  • the memory stores a computer program
  • the processor, when executing the computer program, implements the steps of the robot edge-following working method.
  • a readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method for working along the edge of the robot are realized.
  • the system includes: an image acquisition module 100, an image conversion module 200, and an analysis module 300.
  • the image acquisition module 100 is used to acquire an original image; the image conversion module 200 is used to convert the original image into a binarized image, the binarized image including a working area with a first pixel value and a non-working area with a second pixel value Area; the analysis module 300 is used to obtain the smallest first rectangular outline that encloses the working area on the basis of the binarized image, and the smallest second rectangular outline that encloses the non-working area Contour; according to the first rectangular contour and the second rectangular contour, it is determined whether the robot has reached the turning point of the working path along the side.
  • The parsing module 300 is specifically used to: obtain, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area; obtain the diagonal of the first rectangular contour and denote its length maxLine[g]; judge whether maxLine[g] is less than the preset first value; if so, drive the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value; if not, the robot is on the edge-following working path.
  • The analysis module 300 is specifically configured to: obtain the area of the working area based on the binarized image and denote it maxArea[g]; judge whether maxArea[g] is less than the preset second value; if so, the robot is on the edge-following working path; if not, drive the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
  • The analysis module 300 is specifically configured to: obtain the width W[g] of the first rectangular contour and the relative ordinate value Y[g] of its upper-left corner on the original or binarized image, the width W[b] of the second rectangular contour and the relative ordinate value Y[b] of its upper-left corner on the original or binarized image, and the width W of the original or binarized image; if W[g] = W, W[b] < W, and Y[b] equals the preset first ordinate value, confirm that the robot is currently at the first turning point; if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value; if W[g] < W and Y[g] equals the preset second ordinate value, confirm that the robot is currently at the second turning point; when the robot is at the first turning point it is driven to execute one of the clockwise or counterclockwise turning logics, and when it is at the second turning point, the other.
  • The system further includes an initialization module which, after the robot is initialized, divides the binarized image obtained from the first analysis of the image conversion module 200 equally into a left image and a right image, counts the numbers of pixels in the left and right images respectively, and judges whether the number of pixels in the left image is greater than that in the right image; if so, the robot is driven to work in a counterclockwise sequence, and if not, in a clockwise sequence.
  • the analysis module 300 when it is determined that the robot is on the edge working path and is walking along the edge working path, the analysis module 300 is also used to detect the specific edge working path based on the binarized image using an edge detection method .
  • The image acquisition module 100 in the robot edge-following working system is used to implement step S1, the image conversion module 200 is used to implement step S2, and the analysis module 300 is used to implement steps S3 and S4.
  • Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working process of the system described above can refer to the corresponding process in the foregoing method embodiments and is not repeated here.
  • The robot edge-following working method and system, robot, and readable storage medium of the present invention distinguish the working area from the non-working area according to the images captured by the camera on the robot and, from the specific position of the working area, confirm whether the robot is on the edge-following working path, so that a camera-equipped robot can perform edge-following mowing by visually identifying the boundary; furthermore, combined with the specific position of the non-working area, the turning points on the working path can also be determined, which further improves the robot's recognition efficiency.
  • modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or they may be distributed to multiple network modules, Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
  • the functional modules in the various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)

Abstract

A robot edge-following working method and system, a robot, and a readable storage medium. The method comprises: acquiring an original image (S1); converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value (S2); obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area; and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path (S5). The method distinguishes the working area from the non-working area according to images captured by the camera (12) on the robot and, from the division of the working area, confirms whether the robot has reached a turning point of the edge-following working path, so that a robot equipped with a camera (12) can perform edge-following mowing by visually identifying the boundary (30).

Description

Robot edge-following working method and system, robot, and readable storage medium
Technical Field
The present invention relates to the field of intelligent control, and in particular to a robot edge-following working method and system, a robot, and a readable storage medium.
Background Art
Low repetition rate and high coverage rate are the goals pursued by traversal-type mobile robots such as vacuuming, mowing, and swimming-pool-cleaning robots.
Taking an intelligent lawn-mowing robot as an example, the mowing robot uses the lawn enclosed by an electronic boundary as its working area, and is guided by the electronic boundary line when walking along the edge and executing the mowing process.
As shown in Fig. 1, after the boundary line L is joined end to end, its interior encloses the robot's working area A, i.e. the lawn area, and its exterior forms the robot's non-working area B, i.e. the boundary area. Taking point 1-a on the boundary line L as the starting point and clockwise travel as an example: while mowing along the edge, the robot will encounter outer inflection points (convex corners) and inner inflection points (concave corners). In the example shown in Fig. 1, the outer inflection points are 2-a, 2-b, 2-c, 2-d, 2-e and 2-f, and the inner inflection points are 3-a and 3-b. In practice, when the robot reaches an outer inflection point, taking 2-a as an example, it needs to rotate clockwise until its body orientation matches 1-b; when it reaches an inner inflection point, taking 3-a as an example, it needs to rotate counterclockwise until its body orientation matches 1-c.
In the prior art, the change in the magnetic field of a coil is usually used to judge whether the robot is on the boundary line; furthermore, markers are placed at the inner and outer inflection points, for example by winding multi-level coils at an inflection point to increase the magnetic field strength; when the robot senses the increased magnetic field strength, it turns according to preset rules. However, this scheme relies on magnetic-field-strength sensing, so its accuracy is low; moreover, identifying the inflection points requires additional coils and therefore additional cost.
Summary of the Invention
To solve the above technical problem, the purpose of the present invention is to provide a robot edge-following working method and system, a robot, and a readable storage medium.
To achieve one of the above objects, an embodiment of the present invention provides a robot edge-following working method, the method comprising: acquiring an original image;
converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
As a further improvement of an embodiment of the present invention, the "judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path" comprises:
obtaining the width W[g] of the first rectangular contour, the relative ordinate value Y[g] of the upper-left corner of the first rectangular contour on the original image or the binarized image, the width W[b] of the second rectangular contour, the relative ordinate value Y[b] of the upper-left corner of the second rectangular contour on the original image or the binarized image, and the width W of the original image or the binarized image;
if W[g] = W, executing step P1;
if W[g] < W, executing step P2;
P1: if W[b] < W and Y[b] equals a preset first ordinate value, the robot is currently at a first turning point;
P2: if Y[g] equals a preset second ordinate value, the robot is currently at a second turning point.
As a further improvement of an embodiment of the present invention, the method further comprises:
if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value.
As a further improvement of an embodiment of the present invention, the method further comprises: when the robot is at the first turning point, the robot executes one of the clockwise or counterclockwise turning logics, and when the robot is at the second turning point, the robot executes the other of the clockwise or counterclockwise turning logics.
To achieve one of the above objects, an embodiment of the present invention provides a robot edge-following working system, the system comprising: an image acquisition module for acquiring an original image;
an image conversion module for converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
a parsing module for obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
and for judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
To achieve one of the above objects, an embodiment of the present invention provides a robot edge-following working method, the method comprising: acquiring an original image;
converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
determining whether the robot is on an edge-following working path;
if the robot is on the edge-following working path, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
As a further improvement of an embodiment of the present invention, the "determining whether the robot is on an edge-following working path" comprises:
obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area;
obtaining the diagonal of the first rectangular contour and denoting its length maxLine[g];
judging whether maxLine[g] is less than a preset first value;
if so, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value;
if not, the robot is on the edge-following working path.
As a further improvement of an embodiment of the present invention, the "determining whether the robot is on an edge-following working path" comprises:
obtaining, on the basis of the binarized image, the area of the working area and denoting it maxArea[g];
judging whether maxArea[g] is less than a preset second value;
if so, the robot is on the edge-following working path;
if not, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
To achieve one of the above objects, an embodiment of the present invention provides a robot comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of the robot edge-following working method described above.
To achieve another of the above objects, an embodiment of the present invention provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the robot edge-following working method described above are realized.
Compared with the prior art, the robot edge-following working method and system, robot, and readable storage medium of the present invention distinguish the working area from the non-working area according to images captured by the camera on the robot, and judge whether the robot has reached a turning point according to the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, so that a camera-equipped robot can perform edge-following mowing by visually identifying the boundary.
Brief Description of the Drawings
Fig. 1 is a structural schematic diagram of a specific example in the background art of the present invention;
Fig. 2 is a structural schematic diagram of a specific example of the lawn-mowing robot system of the present invention;
Fig. 3 is a schematic flowchart of the robot edge-following working method provided by the first embodiment of the present invention;
Figs. 4A, 4B, 4C, 4D, 4E, 4F and 4G are schematic diagrams obtained at the various steps of one specific example of the present invention;
Figs. 5 and 6 are schematic flowcharts of specific implementations of one of the steps in Fig. 3;
Fig. 7 is a schematic flowchart of the robot edge-following working method provided by another preferred embodiment based on the method shown in Fig. 3;
Fig. 8 is a schematic flowchart of a specific implementation of one of the steps in Fig. 7;
Fig. 9 is a schematic diagram of a second specific example of the present invention;
Fig. 10 is a schematic diagram of a third specific example of the present invention;
Fig. 11 is a schematic flowchart of the robot edge-following working method provided by the second embodiment of the present invention;
Fig. 12 is a schematic module diagram of the robot edge-following working system provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the accompanying drawings. These embodiments do not limit the present invention, however, and structural, methodological, or functional changes made by those of ordinary skill in the art on the basis of these embodiments are all included in the protection scope of the present invention.
The robot system of the present invention may be a lawn-mowing robot system, a floor-sweeping robot system, a snow-sweeper system, a leaf-collection system, a golf-course ball-picking system, and so on; each system can walk automatically in its working area and perform the corresponding work. In the specific examples of the present invention, a lawn-mowing robot system is taken as the example robot system; correspondingly, the working area may be a lawn.
As shown in Fig. 2, the lawn-mowing robot system of the present invention includes a lawn-mowing robot (RM), a charging station 20, and a boundary 30.
The lawn-mowing robot includes a main body 10 and a walking unit, an image acquisition unit, and a control unit provided on the main body 10. The walking unit includes a driving wheel 111, a driven wheel 113, and a motor for driving the driving wheel 111; the motor may be a brushless motor with a reduction gearbox and a Hall sensor; after the motor starts, it drives the driving wheel 111 through the reduction gearbox, and by controlling the speed and direction of the two wheels the robot can move straight forward and backward, turn in place, and travel in arcs; the driven wheel 113 may be a universal wheel, usually one or two of them, whose main role is to provide supporting balance.
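The differential-drive behaviour just described (straight running, in-place turning, and arc running obtained purely from the two driving-wheel speeds) can be illustrated with a small sketch. This is an editorial illustration, not part of the patent: the function name, the wheel-base value, and the velocity interface are assumptions.

```python
# Illustrative sketch (not from the patent): mapping a desired body velocity v (m/s)
# and yaw rate w (rad/s) to the left/right driving-wheel speeds of a differential drive.
def wheel_speeds(v: float, w: float, wheel_base: float = 0.3):
    """Return (left, right) wheel speeds in m/s."""
    left = v - w * wheel_base / 2.0
    right = v + w * wheel_base / 2.0
    return left, right

# v > 0, w == 0 -> equal speeds: straight forward (or backward for v < 0)
# v == 0, w != 0 -> equal and opposite speeds: rotation in place
# v > 0, w != 0 -> unequal speeds: arc running (the "turning" described later in the text)
```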
The image acquisition unit captures the scene within its field of view over a certain range; in the specific embodiments of the present invention it is a camera 12. The camera 12 is installed on the upper part of the main body 10 at an angle to the horizontal and can capture the scene within a certain range around the lawn-mowing robot; it usually captures the scene within a certain range in front of the robot.
The control unit is a main controller 13 that performs image processing, for example an MCU or DSP.
Further, the lawn-mowing robot also includes a working mechanism for performing the work and a power supply 14; in this embodiment the working mechanism is a mowing blade. The robot also carries various sensors for sensing its walking state, for example tilt, lift-off, collision, geomagnetic, and gyroscope sensors, which are not described one by one here.
The lawn-mowing robot system is also provided with a charging station 20 for supplying power so that the robot can charge automatically. In this embodiment the boundary 30 may be a virtual boundary, i.e. the lawn-mowing robot performs edge-following work through a vision sensor; of course, in other embodiments the boundary 30 may also be a physical boundary line, in which case the robot performs edge-following work using the vision sensor in combination with the boundary line. The inside of the boundary 30 forms a working area A and the outside forms a non-working area B; several turning points are also formed on the boundary. During operation, the specific walking and working rule of the present invention is to walk along the edge, so if the robot meets a turning point while walking it must perform a turning operation, and if it is located inside working area A or non-working area B it must perform a rotation operation. In this specific example, turning refers to forward movement with the left and right wheels at different speeds, i.e. with a certain arc; rotation includes rotating/turning in place; in both cases the purpose is to return the robot to the boundary line and drive it to walk and work along the edge.
In addition, obstacles 50 that the lawn-mowing robot must be prevented from entering, such as pools and flower beds, are usually present in the working area; these are not described further here.
As shown in Fig. 3, the robot edge-following working method provided by the first embodiment of the present invention includes the following steps:
S1, acquiring an original image;
S2, converting the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value;
S3, determining whether the robot is on an edge-following working path.
In the specific embodiments of the present invention, for step S1, the camera installed on the lawn-mowing robot captures the scene in front of the robot in real time to form the original image; the scene is the ground in the robot's direction of travel. After the main controller receives the original image, it analyzes it; the original image can thus be used to determine whether the robot is on the edge-following working path and, further, to analyze its specific position on that path, as described in detail below.
As shown in Fig. 4A, the original image is actually a color image in RGB format. Note that, at different moments, the original images captured by the camera have the same size and are rectangular. To simplify calculation, a Cartesian coordinate system is established with the upper-left corner of the original image as the coordinate origin (0,0), the upper boundary of the image extending to the right as the positive X axis, and the left boundary extending downward as the positive Y axis; in this way, once the original image is determined, the coordinates of every element on it are determined. Of course, this example is only for convenience of description; in practice, once the original image is determined, the coordinate origin and the X and Y axes can be set up as needed. Note also that the coordinate origin and the X and Y axes can be established on the binarized image, in correspondence with the original image, rather than only on the original image.
For step S2, in implementations of the present invention there are multiple ways to convert the original image in RGB format into a binarized image. The binarized image is an image in which each pixel has only one of two gray values, 0 or 255; binarization renders the entire original image with a clear black-and-white effect. In the specific embodiments of the present invention, the working area and the non-working area are rendered as white and black respectively, or vice versa.
In a specific example of the present invention, as shown in Figs. 4B and 4C, Fig. 4B is the HSV image formed by converting Fig. 4A, which is also a color image, and Fig. 4C is the binarized image formed by converting Fig. 4B. Correspondingly, step S2 specifically includes: S21, converting the original image composed in the RGB color space into an HSV image composed in the HSV color space, so that the working area and the non-working area can be distinguished by more obvious color features; S22, converting the HSV image into a binarized image by threshold matching. In this specific example, threshold ranges are preset for the H, S, and V channels, namely [valueHLow, valueHHigh], [valueSLow, valueSHigh], and [valueVLow, valueVHigh]; if the H, S, and V values of any pixel in the HSV image are within the corresponding preset threshold ranges, its gray value is adjusted to one of 0 or 255, and the gray value of a pixel whose H, S, and V values are all outside the preset ranges is adjusted to the other of 0 or 255.
In a preferred embodiment of the present invention, if the working area currently needs to be analyzed, its gray value is adjusted to 0 and the gray value of the non-working area is adjusted to 255; if the non-working area needs to be analyzed, the gray value of the working area is adjusted to 255 and that of the non-working area to 0. In this way the main controller only needs to compute and detect one color (white or black), which improves analysis efficiency. Note that adjusting the gray values only requires inverting the preset threshold ranges described above; this is not described further here.
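A minimal sketch of steps S21/S22 is given below. The patent does not name an image-processing library; OpenCV and NumPy are used here as assumptions, and the H/S/V bounds stand in for the preset ranges [valueHLow, valueHHigh], [valueSLow, valueSHigh], [valueVLow, valueVHigh], whose actual values the patent leaves open.

```python
import cv2
import numpy as np

# Illustrative sketch of step S2: RGB (BGR in OpenCV) -> HSV -> threshold matching -> binarized image.
# The lower/upper bounds below are placeholders for the preset H/S/V threshold ranges.
def binarize(original_bgr: np.ndarray,
             lower=(35, 40, 40), upper=(85, 255, 255),
             analyze_working_area: bool = True) -> np.ndarray:
    hsv = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2HSV)        # S21: RGB color space -> HSV color space
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))  # S22: in-range pixels -> 255, others -> 0
    # Inverting the mask plays the role of "inverting the preset threshold ranges" in the text,
    # i.e. choosing whether the area to be analyzed is rendered white (255) or black (0).
    return mask if analyze_working_area else cv2.bitwise_not(mask)
```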
In a preferred embodiment of the present invention, to improve calculation accuracy, as shown in Figs. 4D and 4E, Fig. 4D is the image formed after smoothing and filtering Fig. 4C, and Fig. 4E is the image formed after dilation and erosion of Fig. 4D. Correspondingly, between step S2 and step S3 the method further includes performing smoothing filtering and/or dilation-and-erosion processing on the binarized image to remove noise from it; the smoothing filtering may be, for example, median filtering, mean filtering, or Gaussian filtering, and dilation and erosion are morphological operations.
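A possible form of this optional denoising between steps S2 and S3 is sketched below; median filtering is chosen among the smoothing filters the text lists, and the kernel size is an assumption.

```python
import cv2
import numpy as np

# Illustrative sketch of the denoising step: smoothing filter followed by morphological
# dilation/erosion (here as opening + closing) to remove small noise in the binarized image.
def denoise(binary: np.ndarray) -> np.ndarray:
    smoothed = cv2.medianBlur(binary, 5)                          # median filtering (mean/Gaussian also fit the text)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    opened = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, kernel)   # erosion then dilation: removes white specks
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)      # dilation then erosion: fills small holes
```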
In a preferred embodiment of the present invention, as shown in Fig. 5, including step S3 in the edge-following working method of the present invention is only one embodiment; in other embodiments, if the robot is already on the edge-following working path, this step is not required. Step S3 includes: S31, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area; obtaining the diagonal of the first rectangular contour and denoting its length maxLine[g]; judging whether maxLine[g] is less than a preset first value; if so, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value; if not, the robot is on the edge-following working path.
In this preferred embodiment, the preset first value is a length value preset by the system, and its size can be adjusted as required. When maxLine[g] is less than the preset first value, the robot is in the non-working area, so its position needs to be adjusted to return it to the edge-following working path; correspondingly, the adjustment can be to drive the robot to rotate, search again for the edge-following working path, and then return to it.
In addition, it should be noted that in this preferred embodiment, while the lawn-mowing robot moves forward, the camera mounted on it captures images of the environment in front of it; the original image obtained is therefore a rectangular area. Furthermore, at least two adjacent sides of the first rectangular contour enclosing the working area coincide with borders of the original image, and the edge-following working path means that the robot walks and works along the boundary line.
In the specific example of the present invention shown in Fig. 4E, the preset first value value1 is 22.3 and maxLine[g] is computed to be 156.2, i.e. maxLine[g] > value1; the robot is therefore determined to be on the edge-following working path.
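Step S31 can be sketched as follows. The choice of OpenCV, the use of the largest connected contour, and the OpenCV 4.x findContours signature are assumptions; the default threshold mirrors the value1 = 22.3 of the example above.

```python
import math
import cv2
import numpy as np

# Illustrative sketch of step S31: bound the working area with the smallest axis-aligned
# rectangle (the "first rectangular contour") and compare its diagonal maxLine[g] with value1.
def on_edge_path_by_diagonal(binary_working: np.ndarray, value1: float = 22.3) -> bool:
    contours, _ = cv2.findContours(binary_working, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False                                 # no working area visible at all
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)           # first rectangular contour
    max_line_g = math.hypot(w, h)                    # diagonal length maxLine[g]
    return max_line_g >= value1                      # True: the robot is on the edge-following path
```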
Preferably, as shown in Fig. 6, step S3 further includes: S32, obtaining, on the basis of the binarized image, the area of the working area and denoting it maxArea[g]; judging whether maxArea[g] is less than a preset second value; if so, the robot is on the edge-following working path; if not, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
In this preferred embodiment, the preset second value is an area value preset by the system, and its size can be adjusted as needed. When maxArea[g] is not less than the preset second value, the robot is inside the working area and far from the boundary line, so its position needs to be adjusted to return it to the edge-following working path; correspondingly, the adjustment can be to drive the robot to rotate, search again for the edge-following working path, and then return to it.
In addition, once the working area is determined, its area can be computed in several ways, for example from coordinates or by counting pixels, which is not described in detail here.
In the specific example of the present invention shown in Fig. 4E, the preset second value value2 is 19150 and maxArea[g] is computed to be 10655, i.e. maxArea[g] < value2; the robot is therefore determined to be on the edge-following working path.
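Step S32 admits an equally short sketch; as the text notes, the working-area size can come either from the largest contour's area or from a pixel count, and the default threshold mirrors value2 = 19150 from the example above. Library and helper names are assumptions.

```python
import cv2
import numpy as np

# Illustrative sketch of step S32: compare the working-area size maxArea[g] with value2.
def on_edge_path_by_area(binary_working: np.ndarray, value2: float = 19150.0) -> bool:
    contours, _ = cv2.findContours(binary_working, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    max_area_g = cv2.contourArea(max(contours, key=cv2.contourArea))
    # Alternative also consistent with the text: max_area_g = cv2.countNonZero(binary_working)
    return max_area_g < value2                       # True: the robot is on the edge-following path
```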
It should be noted that either one of steps S31 and S32 may be executed, or both may be executed simultaneously or one after the other; when both are executed, simultaneously or sequentially, the result is the same.
Further, after step S3 is completed, as shown in Fig. 7, the method further includes: S4, if the robot is on the edge-following working path, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, and judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
The first rectangular contour and the second rectangular contour are obtained in the same way as described above and are not described further here.
In a preferred embodiment of the present invention, as shown in Fig. 8, step S5 specifically includes: obtaining the width W[g] of the first rectangular contour, the relative ordinate value Y[g] of the upper-left corner of the first rectangular contour on the original image or the binarized image, the width W[b] of the second rectangular contour, the relative ordinate value Y[b] of the upper-left corner of the second rectangular contour on the original image or the binarized image, and the width W of the original image or the binarized image; if W[g] = W, executing step P1; if W[g] < W, executing step P2; P1: if W[b] < W and Y[b] reaches the preset first ordinate value, the robot is currently at the first turning point; P2: if Y[g] reaches the preset second ordinate value, the robot is confirmed to be currently at the second turning point.
Preferably, for step P1, if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value.
In this preferred embodiment, the preset first ordinate value and the preset second ordinate value are both ordinate values of pixel coordinates preset by the system, and their sizes can be adjusted as required.
It should be noted that in this preferred embodiment the width W of the original image is a fixed value and the first rectangular contour is part of the original image; therefore, when W[g] is compared with W, the case W[g] > W does not occur.
As shown in Fig. 4E, it can be judged that the robot is currently on the edge-following working path; further, the rectangle-framed part obtained through acquisition and analysis is the first rectangular contour. As shown in Fig. 4F, it can likewise be judged that the robot is currently on the edge-following working path; further, the rectangle-framed part obtained through acquisition and analysis is the second rectangular contour.
In a preferred embodiment of the present invention, for step P1, when the width of the first rectangular contour equals the width of the original image and the width of the second rectangular contour is smaller than the width of the original image, then, in the coordinate system established on the original image as described above, as the robot walks along the working path, the closer it gets to the turning point, the larger the relative ordinate value Y[b] of the upper-left corner of the second rectangular contour on the original image becomes. While Y[b] is not equal to the preset first ordinate value, the robot keeps walking along the edge-following working path; when Y[b] equals the preset first ordinate value, the robot has walked to a turning point of the edge-following working path, and it must then be driven to turn according to the preset rules and kept on the edge-following working path. In addition, it should be noted that when the width of the first rectangular contour equals the width of the original image and the width of the non-working area is smaller than the width of the original image, the case W[b] = W may occur if the robot momentarily veers off course; in that case, executing step S3 as described above can re-adjust the robot onto the edge-following working path.
In a preferred embodiment of the present invention, for step P2, when the width of the first rectangular contour is not equal to the width of the original image, i.e. the width of the first rectangular contour is smaller than the width of the original image, and the width of the second rectangular contour is smaller than the width of the original image, then, in the coordinate system established on the original image as described above, as the robot walks along the working path, the closer it gets to the turning point, the larger the relative ordinate value Y[g] of the upper-left corner of the first rectangular contour on the original image becomes. While Y[g] is not equal to the preset second ordinate value, the robot keeps walking along the edge-following working path; when Y[g] equals the preset second ordinate value, the robot has walked to a turning point of the edge-following working path, and it must then be driven to turn according to the preset rules and kept on the edge-following working path.
Further, the method also includes: when the robot is at the first turning point, the robot executes one of the clockwise or counterclockwise turning logics, and when it is at the second turning point, the other.
In a specific example of the present invention, the presets are value1 = 22.3, value2 = 19150, W = 160, preset first ordinate value Y1 = 90, and preset second ordinate value Y2 = 80.
As shown in Fig. 9, the binarized images on the left and right of Fig. 9 are the same image; the difference is that the rectangle-framed part differs: the framed part on the left is the first rectangular contour and the framed part on the right is the second rectangular contour. In addition, in the binarized image on the left the gray value of the working area is set to 0 and that of the non-working area to 255, while in the binarized image on the right the gray value of the working area is set to 255 and that of the non-working area to 0. In this example, analysis gives maxLine[g] = 200, maxArea[g] = 14382, W[g] = 160, W[b] = 108, the upper-left corner of the first rectangular contour at (0, 0) and of the second rectangular contour at (52, 71), i.e. Y[g] = 0 and Y[b] = 71. Comparison shows maxLine[g] > value1, maxArea[g] < value2, and W[g] = W, so step P1 is executed; further comparison shows W[b] < W and Y[b] < Y1, so the robot is on the edge-following working path and is travelling toward the first turning point. At the moment when Y[b] = Y1, the robot has reached the first turning point and must execute the turning logic; in this example, when the robot reaches the first turning point it executes the clockwise logic.
As shown in Fig. 10, the binarized images on the left and right of Fig. 10 are the same image; the difference is that the framed part on the left is the first rectangular contour. In addition, in the binarized image on the left the gray value of the working area is set to 0 and that of the non-working area to 255, while in the binarized image on the right the gray value of the working area is set to 255 and that of the non-working area to 0. In this specific example, analysis gives maxLine[g] = 1024, maxArea[g] = 4407, W[g] = 80, and the upper-left corner of the first rectangular contour at (0, 56). Comparison shows maxLine[g] > value1, maxArea[g] < value2, and W[g] < W, so step P2 is executed; further comparison shows Y[g] < Y2, so the robot is on the edge-following working path and is travelling toward the second turning point. At the moment when Y[g] = Y2, the robot has reached the second turning point and must execute the turning logic; in this example, when the robot reaches the second turning point it executes the counterclockwise logic.
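The decision logic of steps P1 and P2 and the two worked examples above can be condensed into the following sketch. The function and variable names, the use of OpenCV bounding rectangles, and the relaxation of the exact-equality tests on Y[b]/Y[g] to ">=" (so that a frame is not missed) are assumptions; the default Y1 = 90 and Y2 = 80 follow the example values.

```python
import cv2
import numpy as np

# Illustrative sketch of steps P1/P2: derive W[g], Y[g] from the first rectangular contour,
# W[b], Y[b] from the second rectangular contour, and decide which turning point, if any,
# has been reached. Returning None means: keep following the edge.
def turning_point(binary_working: np.ndarray, binary_nonworking: np.ndarray,
                  y1: int = 90, y2: int = 80):
    img_h, img_w = binary_working.shape[:2]          # img_w is the image width W

    def bounding_rect(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))   # (x, y, w, h)

    g = bounding_rect(binary_working)                # first rectangular contour
    b = bounding_rect(binary_nonworking)             # second rectangular contour
    if g is None or b is None:
        return None
    _, y_g, w_g, _ = g
    _, y_b, w_b, _ = b

    if w_g == img_w:                                 # step P1
        if w_b < img_w and y_b >= y1:
            return "first"                           # e.g. execute the clockwise turning logic
    elif w_g < img_w:                                # step P2
        if y_g >= y2:
            return "second"                          # e.g. execute the counterclockwise turning logic
    return None
```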
Further, executing step S3 during the edge-following work of the lawn-mowing robot is only one embodiment. In the second embodiment of the present invention, as shown in Fig. 11, the robot edge-following working method may also be:
acquiring an original image;
converting the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value;
obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
Specifically, in the second embodiment of the present invention, the lawn-mowing robot may first search for the boundary by other means, such as wireless communication, and, once on the edge-following working path, judge from the first rectangular contour and the second rectangular contour whether it has reached a turning point of the edge-following working path; it is not limited to always performing the edge-following work visually. The specific way of judging whether the robot has reached a turning point of the edge-following working path is the same as in the first embodiment of the present invention and is not repeated here.
In a preferred embodiment of the present invention, before step S1 the method further includes: after the robot is initialized, the binarized image obtained from its first analysis is divided equally into a left image and a right image, the numbers of pixels in the left image and the right image are counted separately, and it is judged whether the number of pixels in the left image is greater than the number of pixels in the right image; if so, the robot is driven to work in a counterclockwise sequence, and if not, in a clockwise sequence.
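The initialization rule just described can be sketched as follows; interpreting the counted pixels as the non-zero (working-area) pixels of the binarized image is an assumption, since the text only speaks of the number of pixels.

```python
import cv2
import numpy as np

# Illustrative sketch of the initialization step: split the first binarized image into equal
# left/right halves, count the non-zero pixels in each, and choose the traversal direction.
def initial_direction(binary: np.ndarray) -> str:
    h, w = binary.shape[:2]
    left = cv2.countNonZero(binary[:, : w // 2])
    right = cv2.countNonZero(binary[:, w // 2:])
    return "counterclockwise" if left > right else "clockwise"
```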
In addition, it should be noted that once the robot is determined to be on the edge-following working path and is walking along it, an edge-detection method can be used, on the basis of the binarized image, to detect the specific edge-following working path. As shown in Fig. 4G, the edge-detection method used in one specific example of the present invention is the Canny algorithm. The detection principle is: the offset x between the edge line and the center line is computed, and the robot's walking path is adjusted according to x; the center line is the vertical line at half the width of the binarized image, and the edge line is the intersection of the working area and the non-working area. The statistics are not limited to taking the mean position of the edge line in the x-axis direction; Hough line detection can also be used to obtain the straight-line equation of the edge line, and the robot's walking direction and path can then be adjusted from the angle between the edge line and the center line, which is not described further here.
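The Canny-based correction can be sketched as below; the Canny thresholds and the helper name are assumptions, and only the mean-offset variant is shown, although cv2.HoughLinesP could equally fit a line to the detected edge and correct the heading from its angle, as the text notes.

```python
import cv2
import numpy as np

# Illustrative sketch of the edge-following correction: detect the boundary between the working
# and non-working areas with Canny, and return the offset x between that edge line and the
# vertical center line at half the image width; the sign tells which way to steer.
def centerline_offset(binary: np.ndarray):
    edges = cv2.Canny(binary, 50, 150)               # threshold values are placeholders
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None                                  # no edge line visible
    center_x = binary.shape[1] / 2.0
    return float(xs.mean() - center_x)
```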
An embodiment of the present invention also provides a robot comprising a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the steps of the robot edge-following working method described above.
An embodiment of the present invention also provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the robot edge-following working method described above are realized.
As shown in Fig. 12, a robot edge-following working system is provided; the system includes an image acquisition module 100, an image conversion module 200, and a parsing module 300.
The image acquisition module 100 is used to acquire an original image; the image conversion module 200 is used to convert the original image into a binarized image, the binarized image including a working area having a first pixel value and a non-working area having a second pixel value; the parsing module 300 is used to obtain, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area, and to judge, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
In a preferred embodiment of the present invention, the parsing module 300 is specifically used to: obtain, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area; obtain the diagonal of the first rectangular contour and denote its length maxLine[g]; judge whether maxLine[g] is less than the preset first value; if so, drive the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value; if not, the robot is on the edge-following working path.
Preferably, the parsing module 300 is specifically used to: obtain, on the basis of the binarized image, the area of the working area and denote it maxArea[g]; judge whether maxArea[g] is less than the preset second value; if so, the robot is on the edge-following working path; if not, drive the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
In a preferred embodiment of the present invention, the parsing module 300 is specifically used to: obtain the width W[g] of the first rectangular contour and the relative ordinate value Y[g] of its upper-left corner on the original image or the binarized image, the width W[b] of the second rectangular contour and the relative ordinate value Y[b] of its upper-left corner on the original image or the binarized image, and the width W of the original image or the binarized image; if W[g] = W, W[b] < W, and Y[b] equals the preset first ordinate value, confirm that the robot is currently at the first turning point; if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value; if W[g] < W and Y[g] equals the preset second ordinate value, confirm that the robot is currently at the second turning point. When the robot is at the first turning point, it is driven to execute one of the clockwise or counterclockwise turning logics, and when it is at the second turning point, the other.
In a preferred embodiment of the present invention, the system further includes an initialization module which, after the robot is initialized, divides the binarized image first obtained through the image conversion module 200 equally into a left image and a right image, counts the numbers of pixels in the left and right images respectively, and judges whether the number of pixels in the left image is greater than that in the right image; if so, the robot is driven to work in a counterclockwise sequence, and if not, in a clockwise sequence.
In a preferred embodiment of the present invention, once the robot is determined to be on the edge-following working path and is walking along it, the parsing module 300 is also used to detect the specific edge-following working path on the basis of the binarized image using an edge-detection method.
In addition, it should be noted that, in the preferred embodiments of the present invention, the image acquisition module 100 in the robot edge-following working system is used to implement step S1, the image conversion module 200 is used to implement step S2, and the parsing module 300 is used to implement steps S3 and S4. Those skilled in the art can clearly understand that, for convenience and conciseness of description, the specific working process of the system described above can refer to the corresponding process in the foregoing method embodiments and is not repeated here.
In summary, the robot edge-following working method and system, robot, and readable storage medium of the present invention distinguish the working area from the non-working area according to the images captured by the camera on the robot and, from the specific position of the working area, confirm whether the robot is on the edge-following working path, so that a camera-equipped robot can perform edge-following mowing by visually identifying the boundary; furthermore, combined with the specific position of the non-working area, the turning points on the working path can also be determined, which further improves the robot's recognition efficiency.
In the several embodiments provided in this application, it should be understood that the disclosed modules, systems, and methods can be implemented in other ways. The system embodiments described above are merely illustrative; the division into modules is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the various embodiments of this application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of hardware plus software functional modules.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of this application and not to limit it; although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (10)

  1. A robot edge-following working method, characterized in that the method comprises:
    acquiring an original image;
    converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
    obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
    judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  2. The robot edge-following working method according to claim 1, characterized in that the "judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path" comprises:
    obtaining the width W[g] of the first rectangular contour, the relative ordinate value Y[g] of the upper-left corner of the first rectangular contour on the original image or the binarized image, the width W[b] of the second rectangular contour, the relative ordinate value Y[b] of the upper-left corner of the second rectangular contour on the original image or the binarized image, and the width W of the original image or the binarized image;
    if W[g] = W, executing step P1;
    if W[g] < W, executing step P2;
    P1: if W[b] < W and Y[b] equals a preset first ordinate value, the robot is currently at a first turning point;
    P2: if Y[g] equals a preset second ordinate value, the robot is currently at a second turning point.
  3. The robot edge-following working method according to claim 2, characterized in that the method further comprises:
    if W[b] < W and Y[b] is not equal to the preset first ordinate value, the robot continues edge-following work in the current direction until Y[b] equals the preset first ordinate value.
  4. The robot edge-following working method according to claim 3, characterized in that the method further comprises: when the robot is at the first turning point, the robot executes one of the clockwise or counterclockwise turning logics, and when the robot is at the second turning point, the robot executes the other of the clockwise or counterclockwise turning logics.
  5. A robot edge-following working system, characterized in that the system comprises:
    an image acquisition module for acquiring an original image;
    an image conversion module for converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
    a parsing module for obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
    and for judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  6. A robot edge-following working method, characterized in that the method comprises:
    acquiring an original image;
    converting the original image into a binarized image, the binarized image comprising a working area having a first pixel value and a non-working area having a second pixel value;
    determining whether the robot is on an edge-following working path;
    if the robot is on the edge-following working path, obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area and the smallest second rectangular contour enclosing the non-working area;
    judging, according to the first rectangular contour and the second rectangular contour, whether the robot has reached a turning point of the edge-following working path.
  7. The robot edge-following working method according to claim 6, characterized in that the "determining whether the robot is on an edge-following working path" comprises:
    obtaining, on the basis of the binarized image, the smallest first rectangular contour enclosing the working area;
    obtaining the diagonal of the first rectangular contour and denoting its length maxLine[g];
    judging whether maxLine[g] is less than a preset first value;
    if so, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxLine[g] is not less than the preset first value;
    if not, the robot is on the edge-following working path.
  8. The robot edge-following working method according to claim 6, characterized in that the "determining whether the robot is on an edge-following working path" comprises:
    obtaining, on the basis of the binarized image, the area of the working area and denoting it maxArea[g];
    judging whether maxArea[g] is less than a preset second value;
    if so, the robot is on the edge-following working path;
    if not, driving the robot to change direction so as to re-plan and search for the edge-following working path until maxArea[g] is less than the preset second value.
  9. A robot comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the robot edge-following working method according to any one of claims 1-4 and 6-8.
  10. A readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the robot edge-following working method according to any one of claims 1-4 and 6-8 are realized.
PCT/CN2020/118339 2020-05-29 2020-09-28 Robot edge-following working method and system, robot, and readable storage medium WO2021238000A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010471605.4 2020-05-29
CN202010471605.4A CN113807118B (zh) 2020-05-29 2020-05-29 Robot edge-following working method and system, robot, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021238000A1 true WO2021238000A1 (zh) 2021-12-02

Family

ID=78745529

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118339 WO2021238000A1 (zh) 2020-05-29 2020-09-28 Robot edge-following working method and system, robot, and readable storage medium

Country Status (2)

Country Link
CN (1) CN113807118B (zh)
WO (1) WO2021238000A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023231757A1 (zh) * 2022-05-30 2023-12-07 珠海一微半导体股份有限公司 Setting method based on a map area contour and robot edge-following termination control method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101427186B1 (ko) * 2013-06-12 2014-08-07 건국대학교 산학협력단 Area exploration and mapping apparatus for generating a movement path of an intelligent robot in an unknown environment
CN105467985A (zh) * 2014-09-05 2016-04-06 科沃斯机器人有限公司 Self-moving surface-walking robot and image processing method thereof
CN105785986A (zh) * 2014-12-23 2016-07-20 苏州宝时得电动工具有限公司 Automatic working device
CN107505939A (zh) * 2017-05-13 2017-12-22 大连理工大学 Full-coverage path planning method for a mobile robot
CN109947093A (zh) * 2019-01-24 2019-06-28 广东工业大学 Intelligent obstacle-avoidance algorithm based on binocular vision

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002323925A (ja) * 2001-04-26 2002-11-08 Matsushita Electric Ind Co Ltd Mobile working robot
KR101776823B1 (ko) * 2016-03-02 2017-09-11 가천대학교 산학협력단 Method and system for recognizing the position of a mobile robot using an external surveillance camera
CN106155053A (zh) * 2016-06-24 2016-11-23 桑斌修 Mowing method, device, and system
CN106325278B (zh) * 2016-09-30 2019-05-31 中国矿业大学 Robot positioning and navigation method based on ellipse recognition
CN109753075B (zh) * 2019-01-29 2022-02-08 中国农业科学院农业资源与农业区划研究所 Vision-based navigation method for agricultural and forestry park robots
CN110347153A (zh) * 2019-06-26 2019-10-18 深圳拓邦股份有限公司 Boundary recognition method and system, and mobile robot
CN110368275A (zh) * 2019-08-12 2019-10-25 广州大学 Guide robot, guide system, and guiding method for the blind
CN110919653B (zh) * 2019-11-29 2021-09-17 深圳市优必选科技股份有限公司 Stair-climbing control method and device for a robot, storage medium, and robot
CN110874101B (zh) * 2019-11-29 2023-04-18 合肥哈工澳汀智能科技有限公司 Method and device for generating a robot cleaning path
CN111353431B (zh) * 2020-02-28 2024-03-08 苏州科瓴精密机械科技有限公司 Automatic working system, automatic walking device and control method therefor, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113807118B (zh) 2024-03-08
CN113807118A (zh) 2021-12-17

Similar Documents

Publication Publication Date Title
CN109063575B Autonomous orderly mowing method for an intelligent mower based on monocular vision
WO2021238001A1 Robot walking control method and system, robot, and readable storage medium
US20220082391A1 Map creation method for mobile robot and path planning method based on the map
WO2021243895A1 Method and system for recognizing a working position based on images, robot, and storage medium
CN111399505A Neural-network-based obstacle avoidance method for a mobile robot
CN110243372A Intelligent agricultural machinery navigation system and method based on machine vision
WO2021243894A1 Method and system for recognizing a working position based on images, robot, and storage medium
WO2021243897A1 Method and system for recognizing a working position based on images, robot, and storage medium
Lin et al. Construction of fisheye lens inverse perspective mapping model and its applications of obstacle detection
WO2021238000A1 Robot edge-following working method and system, robot, and readable storage medium
CN113696180A Automatic recharging method and device for a robot, storage medium, and robot system
CN112720408B Visual navigation control method for an all-terrain robot
CN207077433U Mobile robot
Gupta et al. Robust lane detection using multiple features
Tsai et al. The robust and fast approach for vision-based shadowy road boundary detection
EP4246370A1 (en) Image-based working area identification method and system, and robot
EP4242909A1 (en) Obstacle recognition method and apparatus, and device, medium and weeding robot
EP4242910A1 (en) Obstacle recognition method, apparatus and device, medium, and weeding robot
EP4123406A1 (en) Automatic working system, automatic walking device and method for controlling same, and computer-readable storage medium
Guo et al. Drivable road region detection based on homography estimation with road appearance and driving state models
Sun et al. The optimization of object detection and localization in complex background for vision-based robot
Rasmussen A hybrid vision+ ladar rural road follower
Chi et al. Visual Tracking Cleaner—A Robot Implements on the Whiteboard
CN115147713A Method, system, device, and medium for recognizing a non-working area based on images
Bahgat A simple implementation for unmarked road tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937500

Country of ref document: EP

Kind code of ref document: A1