WO2022095171A1 - 一种障碍物识别方法、装置、设备、介质及除草机器人 - Google Patents

一种障碍物识别方法、装置、设备、介质及除草机器人 (Obstacle recognition method, apparatus, device, medium and weeding robot)

Info

Publication number
WO2022095171A1
Authority
WO
WIPO (PCT)
Prior art keywords: candidate, information, pixels, obstacle, image
Application number
PCT/CN2020/132572
Other languages
English (en)
French (fr)
Inventor
朱绍明
任雪
Original Assignee
苏州科瓴精密机械科技有限公司
Application filed by 苏州科瓴精密机械科技有限公司
Priority to EP20960628.4A (published as EP4242910A1)
Priority to US18/251,960 (published as US20240013548A1)
Publication of WO2022095171A1

Classifications

    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • A01D 34/008: Mowers; mowing apparatus of harvesters; control or measuring arrangements for automated or remotely controlled operation
    • G05D 1/0238: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using obstacle or wall sensors
    • G05D 1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 20/188: Terrestrial scenes; vegetation
    • A01B 39/18: Other machines specially adapted for working soil on which crops are growing, for special purposes, for weeding
    • A01B 69/001: Steering of agricultural machines or implements by means of optical assistance, e.g. television cameras
    • A01M 21/00: Apparatus for the destruction of unwanted vegetation, e.g. weeds

Definitions

  • Embodiments of the present invention relate to image processing technologies, and in particular, to an obstacle identification method, device, equipment, medium, and a weeding robot.
  • In the prior art, the boundary of the weeding area of a weeding robot is usually demarcated by burying a boundary line, which consumes a lot of manpower and material resources and increases cost. In addition, because there are restrictions on how the boundary line can be buried, for example the corner angle cannot be less than 90 degrees, the shape of the weeding area is limited to a certain extent.
  • Embodiments of the present invention provide an obstacle identification method, device, equipment, medium and weeding robot, so as to improve the efficiency and accuracy of identifying obstacles in a candidate weeding area of the weeding robot.
  • In a first aspect, an embodiment of the present invention provides an obstacle identification method, the method comprising: determining a candidate obstacle area in a candidate weeding area image according to color information of the candidate weeding area image; acquiring outline information of the candidate obstacle area and brightness information of the candidate weeding area image; and determining, according to the outline information and the brightness information, whether there is an obstacle in the candidate weeding area image.
  • In a second aspect, an embodiment of the present invention further provides an obstacle identification device, the device comprising:
  • a candidate obstacle area determination module configured to determine a candidate obstacle area in the candidate weeding area image according to the color information of the candidate weeding area image;
  • an information acquisition module configured to acquire the outline information of the candidate obstacle area and the brightness information of the image of the candidate weeding area;
  • An obstacle determination module configured to determine whether there is an obstacle in the image of the candidate weeding area according to the outline information and the brightness information.
  • In a third aspect, an embodiment of the present invention further provides an electronic device, the electronic device comprising: one or more processors; and a storage device configured to store one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the obstacle identification method described above.
  • In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, and the program, when executed by a processor, implements the obstacle identification method described above.
  • In a fifth aspect, an embodiment of the present invention further provides a weeding robot, which includes a robot body and the aforementioned electronic device.
  • In the embodiments of the present invention, the candidate obstacle area in the candidate weeding area image is determined according to the color information of the candidate weeding area image; the outline information of the candidate obstacle area and the brightness information of the candidate weeding area image are acquired; and whether there is an obstacle in the candidate weeding area image is determined according to the outline information and the brightness information. This solves the problem in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by burying a boundary line, which consumes a lot of manpower and material resources and increases cost, and achieves the effect of improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.
  • FIG. 1 is a flowchart of an obstacle identification method according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of an obstacle identification method according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic structural diagram of an obstacle identification device according to Embodiment 3 of the present invention.
  • FIG. 4 is a schematic structural diagram of an electronic device according to Embodiment 4 of the present invention.
  • FIG. 1 is a flowchart of an obstacle identification method provided in Embodiment 1 of the present invention. This embodiment is applicable to the situation where a weeding robot identifies obstacles in a candidate weeding area. The method can be performed by the obstacle identification device provided by the embodiments of the present invention, and the device can be implemented by means of software and/or hardware.
  • the obstacle identification method provided by this embodiment includes:
  • Step 110 Determine a candidate obstacle area in the candidate weeding area image according to the color information of the candidate weeding area image.
  • The candidate weeding area is a possible working area of the weeding robot. It may consist entirely of weeds to be removed, i.e. a weeding area; it may also be an obstacle or boundary area whose surface is covered with grass, or a weeding area that is mottled by lighting or other causes and is therefore hard to identify.
  • the image of the candidate weeding area may be captured by a camera installed on the weeding robot, which is not limited in this embodiment.
  • the color information of the image of the candidate weeding area may be information such as Hue, Saturation, and Value of the image, which is not limited in this embodiment.
  • An area that may be an obstacle is determined from the image of the candidate weeding area by using color information. Exemplarily, an area in the image with a large difference in color from the surrounding area is determined as a candidate obstacle area according to the color information.
  • In this embodiment, optionally, determining the candidate obstacle area in the candidate weeding area image according to the color information of the candidate weeding area image includes: acquiring a color segmentation image of the candidate weeding area image according to the color information of the candidate weeding area image; and performing morphological processing on the color segmentation image, and determining an area of a preset color in the morphologically processed color segmentation image as the candidate obstacle area.
  • The color segmentation image of the candidate weeding area image is obtained according to the color information of the candidate weeding area image; the color segmentation can be performed by dynamic color segmentation, edge-texture segmentation, fixed-threshold segmentation, Otsu threshold segmentation and so on.
  • color segmentation image may be a binary image.
  • the color segmentation image is then subjected to morphological processing, wherein the morphological processing may be a negation operation, an opening operation, a closing operation, etc., which is not limited in this embodiment.
  • a region with a preset color is determined from the morphologically processed color segmentation image as a candidate obstacle region.
  • For example, the black area is taken as the weeding area and the white area as the candidate obstacle area, so that the candidate obstacle area can be identified further.
  • the candidate weeding area is preliminarily classified by acquiring the color segmentation image, and the color segmentation image is morphologically processed to improve the accuracy of the candidate obstacle area determination.
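  • The following is an illustrative sketch only, not the implementation disclosed in this application: it shows how the color segmentation, morphological processing and candidate-area extraction described above could be written with OpenCV. The choice of the HSV color space, Otsu thresholding on the hue channel, the 5x5 kernel and the function name are assumptions made for the example.

        import cv2
        import numpy as np

        def find_candidate_obstacle_areas(bgr_image):
            # Split color information (hue/saturation/value) of the candidate weeding area image.
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            hue = hsv[:, :, 0]
            # Color segmentation into a binary image (Otsu thresholding chosen here as one option).
            _, mask = cv2.threshold(hue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Morphological processing (opening then closing) to suppress noise and fill small holes.
            kernel = np.ones((5, 5), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
            # White regions are kept as candidate obstacle areas; black is treated as the weeding area.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return mask, contours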
  • Step 120 Obtain the outline information of the candidate obstacle area and the brightness information of the image of the candidate weeding area.
  • the contour information is the contour information of a single candidate obstacle region, and the acquisition method may be performing contour detection on the candidate obstacle region.
  • the contour information may include chromaticity information, roughness information, range information, etc. of the contour, which is not limited in this embodiment.
  • The roughness information describes how rough the edge of the candidate obstacle area is and may be an average roughness. The average roughness can be calculated by dividing the sum of the roughness values of all pixels of the obstacle area within the outline of a single candidate obstacle area by the total number of pixels of the obstacle area within that outline.
  • In this embodiment, optionally, determining the roughness information includes: acquiring a luminance channel image of the candidate weeding area image; performing edge extraction on the luminance channel image to obtain an edge image; and determining the roughness information according to the gray values of the pixels in the edge information of the edge image.
  • The luminance channel image of the candidate weeding area image is obtained by channel separation of the candidate weeding area image. Optionally, the luminance channel image is preprocessed; the preprocessing may include filtering, normalization and so on, which is not limited in this embodiment. Edge extraction is performed on the preprocessed luminance channel image to obtain an edge image; the Canny operator can be used for the edge extraction to improve the accuracy of the edge information in the edge image.
  • The roughness information of the candidate obstacle area is determined from the edge information at the corresponding position of the candidate obstacle area in the edge image, the edge information including the gray values of the pixels within the outline of the candidate obstacle area. When the roughness information is the average roughness, the average roughness of the edge of the candidate obstacle area can be obtained by dividing the number of pixels of the obstacle area within the outline whose gray value equals 255 by the total number of pixels of the obstacle area within the outline.
  • the roughness information is determined by the gray value of the pixel in the edge information of the edge image, and the accuracy of obtaining the roughness information is improved, thereby improving the accuracy of obstacle recognition.
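  • As a minimal sketch of the roughness computation described above (assuming OpenCV, a Gaussian pre-filter and Canny thresholds of 50/150, none of which are specified by the application):

        import cv2
        import numpy as np

        def average_roughness(value_channel, contour):
            # Edge extraction on the (preprocessed) luminance channel with the Canny operator.
            blurred = cv2.GaussianBlur(value_channel, (5, 5), 0)
            edges = cv2.Canny(blurred, 50, 150)  # edge pixels have gray value 255
            # Mask of the obstacle area enclosed by this candidate outline.
            mask = np.zeros_like(value_channel)
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
            area_pixels = int(np.count_nonzero(mask))
            if area_pixels == 0:
                return 0.0
            # Average roughness = (pixels whose edge-image gray value is 255) / (pixels in the area).
            edge_pixels = int(np.count_nonzero((edges == 255) & (mask == 255)))
            return edge_pixels / area_pixels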
  • the brightness information of the image of the candidate weeding area is the brightness information of the entire image of the candidate weeding area, and the information related to the brightness in the image can be obtained by acquiring the brightness channel image of the image of the candidate weeding area. The relationship between the candidate obstacle area and the illumination is judged by the brightness information.
  • the brightness information of the image of the candidate weeding area includes: the number of pixels in the exposure state and the number of white pixels in the non-exposure state;
  • Correspondingly, acquiring the brightness information of the candidate weeding area image includes: acquiring the number of pixels in the exposure state according to the brightness values of the pixels in the candidate weeding area image; and acquiring the number of white pixels in the non-exposure state according to the brightness values and chroma values of the pixels in the candidate weeding area image.
  • The number of pixels in the exposure state is the number of pixels in the candidate weeding area image that are in the exposure state. The number of white pixels in the non-exposure state is the number of pixels in the candidate weeding area image that are not in the exposure state but appear white, for example pixels of obstacles that are themselves white.
  • The number of pixels in the exposure state is obtained according to the brightness values of the pixels in the candidate weeding area image, for example by acquiring the luminance channel image of the candidate weeding area image and counting the pixels whose brightness value is greater than or equal to a preset threshold, such as 255.
  • The number of white pixels in the non-exposure state is obtained according to the brightness values and chroma values of the pixels, for example by acquiring the luminance channel image and the chroma channel image of the candidate weeding area image and counting the pixels whose brightness value is less than a preset threshold, such as 255, and whose chroma value is less than or equal to a preset threshold, such as 0.
  • By dividing the brightness information into the number of pixels in the exposure state and the number of white pixels in the non-exposure state, and determining them according to the brightness values and/or chroma values of the pixels, the subsequent distinction between weeding areas in the candidate obstacle areas that are mottled by lighting or other causes and obstacle areas that are themselves white becomes more accurate.
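  • A minimal sketch of how the two brightness statistics could be counted, assuming an HSV decomposition in which V plays the role of the brightness channel and S the role of the chroma channel; the default thresholds follow the example values 255 and 0 given above:

        import cv2
        import numpy as np

        def brightness_info(bgr_image, v_exposed=255, s_white=0):
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            s, v = hsv[:, :, 1], hsv[:, :, 2]
            # Pixels in the exposure state: brightness at or above the preset threshold.
            overbright_pix = int(np.count_nonzero(v >= v_exposed))
            # White pixels in the non-exposure state: not exposed, but with (near-)zero chroma.
            zero_pix = int(np.count_nonzero((v < v_exposed) & (s <= s_white)))
            return overbright_pix, zero_pix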
  • Step 130 Determine whether there is an obstacle in the image of the candidate weeding area according to the outline information and the brightness information.
  • the contour information of the candidate obstacle area and the brightness information of the candidate weeding area image are compared with the preset information judgment conditions.
  • the preset information judgment condition is related to contour information and brightness information. If the preset information conditions are met, the candidate obstacle area is determined to be an obstacle area, that is, it is determined that there is an obstacle in the image of the candidate weeding area where the obstacle area is located, so that the weeding robot can perform subsequent obstacle processing.
  • the identified obstacle is an obstacle whose surface is covered with grass.
  • In the technical solution provided by this embodiment, the candidate obstacle area in the candidate weeding area image is determined according to the color information of the candidate weeding area image; the outline information of the candidate obstacle area and the brightness information of the candidate weeding area image are acquired; and whether there is an obstacle in the candidate weeding area image is determined according to the outline information and the brightness information. This solves the problem in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by burying a boundary line, which consumes a lot of manpower and material resources and increases cost.
  • the effect of improving the identification efficiency and accuracy of obstacles in the candidate weeding area of the weeding robot is achieved.
  • FIG. 2 is a flowchart of an obstacle identification method according to Embodiment 2 of the present invention.
  • This technical solution provides a supplementary explanation of the process of determining, according to the outline information and the brightness information, whether there is an obstacle in the candidate weeding area image. Compared with the above solution, this solution is specifically optimized as follows: the outline information of the candidate obstacle area includes at least one of the number of chroma effective pixels, the proportion of chroma effective pixels, roughness information and range information, and the brightness information includes the number of pixels in the exposure state and/or the number of white pixels in the non-exposure state. Determining whether there is an obstacle in the candidate weeding area image according to the outline information and the brightness information then includes: if the number of chroma effective pixels is greater than a preset effective pixel number threshold and the range information is greater than a preset first range threshold, judging whether the proportion of chroma effective pixels is less than a preset first chroma effective pixel proportion threshold; if so, determining whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and a preset first information judgment condition; if not, determining whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and a preset second information judgment condition. The specific flow of obstacle identification is shown in FIG. 2:
  • Step 210 Determine a candidate obstacle area in the candidate weeding area image according to the color information of the candidate weeding area image.
  • Step 220 Obtain the outline information of the candidate obstacle area and the brightness information of the candidate weeding area image; the outline information of the candidate obstacle area includes at least one of the number of chroma effective pixels, the proportion of chroma effective pixels, roughness information and range information; the brightness information includes the number of pixels in the exposure state and/or the number of white pixels in the non-exposure state.
  • The number of chroma effective pixels is the number of pixels of the obstacle area within the outline of a single candidate obstacle area whose chroma values are within the effective pixel value chroma range. The proportion of chroma effective pixels is the ratio of those pixels to all pixels of the obstacle area within the outline.
  • the range information is used to indicate the size of the candidate obstacle region, and may be an area, a diagonal length, a width, a height, the number of pixels included, etc., which is not limited in this embodiment.
  • In this embodiment, optionally, acquiring the number of chroma effective pixels and/or the proportion of chroma effective pixels includes: determining a chroma segmentation threshold interval of the candidate weeding area image according to the color information of the candidate weeding area image; and acquiring the number of chroma effective pixels and/or the proportion of chroma effective pixels according to the chroma segmentation threshold interval and the chroma values of the pixels in the candidate obstacle area.
  • The color segmentation image of the candidate weeding area image is obtained according to the color information of the candidate weeding area image; the color segmentation can be performed by dynamic color segmentation, edge-texture segmentation, fixed-threshold segmentation, Otsu threshold segmentation and so on, thereby obtaining the chroma segmentation threshold interval of the candidate weeding area image, which is used to convert the candidate weeding area image into the color segmentation image. The chroma segmentation threshold interval is taken as the effective pixel value chroma range, and a pixel whose chroma value falls within the chroma segmentation threshold interval is taken as a chroma effective pixel. The number of chroma effective pixels of the obstacle area within the outline of a single candidate obstacle area is taken as the number of chroma effective pixels, and the ratio of the chroma effective pixels of the obstacle area within the outline to all pixels of the obstacle area within the outline is taken as the proportion of chroma effective pixels. By acquiring the number and/or proportion of chroma effective pixels from a chroma segmentation threshold interval determined from the candidate weeding area image itself, these quantities are more closely tied to the image, which improves the accuracy of subsequent obstacle identification in the candidate weeding area.
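  • A hedged sketch of how the number and proportion of chroma effective pixels could be computed for one candidate outline, assuming the chroma segmentation threshold interval is available as a (low, high) pair; the names used here are illustrative only:

        import cv2
        import numpy as np

        def chroma_effective_stats(hue_channel, contour, chroma_interval):
            low, high = chroma_interval  # chroma segmentation threshold interval
            mask = np.zeros_like(hue_channel)
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
            inside = mask == 255
            total = int(np.count_nonzero(inside))
            if total == 0:
                return 0, 0.0
            # A chroma effective pixel lies inside the outline and inside the threshold interval.
            effective = (hue_channel >= low) & (hue_channel <= high) & inside
            s_contours = int(np.count_nonzero(effective))  # number of chroma effective pixels
            sp_contours = s_contours / total               # proportion of chroma effective pixels
            return s_contours, sp_contours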
  • Step 230 If the number of chroma effective pixels is greater than the preset effective pixel number threshold and the range information is greater than the preset first range threshold, judge whether the proportion of chroma effective pixels is less than the preset first chroma effective pixel proportion threshold.
  • the preset first effective pixel quantity threshold may be an empirical value, which is not limited in this embodiment.
  • When the number of chroma effective pixels is greater than the preset effective pixel number threshold, it becomes more likely that the candidate obstacle area is actually part of the weeding area, and it is necessary to further judge whether the candidate obstacle area is a weeding area or an obstacle area whose color is similar to the weeding area. Requiring the range information to be greater than the preset first range threshold avoids processing candidate obstacle areas that are too small. When these conditions are met, whether the proportion of chroma effective pixels is less than the preset chroma effective pixel proportion threshold is then judged, that is, candidate obstacle areas that are relatively likely to be obstacles are selected from the candidate obstacle areas that are likely to be weeding area.
  • Optionally, the outline information further includes position information. The position information indicates the distance between the candidate obstacle area and the weeding robot, for example the y-axis coordinate of the lower right corner of the minimum circumscribed rectangle of the candidate obstacle area; other representative coordinate values can also be selected, which is not limited in this embodiment. On the basis of judging the candidate obstacle area by the number of chroma effective pixels and the range information, judging further by the position information avoids processing candidate obstacle areas that are too far away, reduces the amount of data to be processed and improves the efficiency of obstacle identification. For example, if the number of chroma effective pixels is greater than the preset effective pixel number threshold, the range information exceeds the preset first range threshold and the position information is greater than a preset position threshold, it is then judged whether the proportion of chroma effective pixels is within the range of the preset chroma effective pixel proportion threshold.
  • Exemplarily, the number of chroma effective pixels is SContours_i, where i is the index of the candidate obstacle area; the proportion of chroma effective pixels is SPContours_i; the range information is the diagonal length AContours_i.diagonal or the height AContours_i.height of the candidate obstacle area; and the position information is the y-axis coordinate YContours_i of the lower right corner of the minimum circumscribed rectangle of the candidate obstacle area, where a larger coordinate value means the candidate obstacle area is closer to the robot. If the preset chroma threshold is a, the preset chroma effective pixel proportion threshold is 0.4, the diagonal threshold of the preset first range threshold is 105, the height threshold is 70 and the preset position threshold is 75, the judgment condition is: if SContours_i > a and AContours_i.diagonal > 105 and YContours_i > 75, or SContours_i > a and AContours_i.height > 70 and YContours_i > 75, then it is further judged whether SPContours_i < 0.4.
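  • The gating logic of Step 230, with the example thresholds above, could be expressed as follows; this is only a sketch, and the argument a stands for the unspecified preset chroma threshold from the example:

        def select_condition_set(s_contours, sp_contours, diagonal, height, y_bottom_right, a):
            # Only large enough, close enough candidates with many chroma effective pixels
            # are examined further (thresholds are the example values 105, 70, 75 and 0.4).
            gated = (s_contours > a and diagonal > 105 and y_bottom_right > 75) or \
                    (s_contours > a and height > 70 and y_bottom_right > 75)
            if not gated:
                return None                    # candidate is not examined further
            # True -> use the first information judgment condition, False -> use the second.
            return sp_contours < 0.4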
  • Step 240 If yes, determine whether there is an obstacle in the image of the candidate weeding area according to the outline information, the brightness information and the preset first information judgment condition.
  • When the number of chroma effective pixels is greater than the preset effective pixel number threshold, the range information is greater than the preset first range threshold and the proportion of chroma effective pixels is less than the preset first chroma effective pixel proportion threshold, whether there is an obstacle in the candidate weeding area image is determined according to the outline information, the brightness information and the preset first information judgment condition. That is, further analysis is performed on those candidate obstacle areas, selected from the candidate obstacle areas likely to be weeding area, that are relatively likely to be obstacles.
  • the preset first information judgment condition may be adjusted according to a specific judgment situation, which is not limited in this embodiment.
  • In this embodiment, optionally, determining whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and the preset first information judgment condition includes: if the number of pixels in the exposure state is less than a preset first exposure state pixel number threshold and greater than a preset second exposure state pixel number threshold, and the roughness information is less than a preset first roughness threshold, determining that there is an obstacle in the candidate weeding area image; if the number of pixels in the exposure state is greater than a preset third exposure state pixel number threshold, the proportion of chroma effective pixels is less than a preset second chroma effective pixel proportion threshold, and the range information is greater than a preset second range threshold, determining that there is an obstacle in the candidate weeding area image; and if the number of pixels in the exposure state is greater than a preset fourth exposure state pixel number threshold, the number of white pixels in the non-exposure state is greater than a preset first non-exposure state white pixel threshold, the roughness information is less than a preset second roughness threshold, and the range information is greater than a preset third range threshold, determining that there is an obstacle in the candidate weeding area image. If the outline information and the brightness information satisfy the preset first information judgment condition, it is determined that there is an obstacle in the candidate weeding area image.
  • Exemplarily, the roughness information is the average roughness HContours_i of the outline of the candidate obstacle area (the smaller the average roughness, the smoother the candidate obstacle area), the range information is the number of pixels AContours_i.pixels in the candidate obstacle area, the number of pixels in the exposure state is overbrightPix, and the number of white pixels in the non-exposure state is zeroPix.
  • Exemplarily, if the preset first exposure state pixel number threshold is 200, the preset second exposure state pixel number threshold is 2500 and the preset first roughness threshold is 0.25, the preset first information judgment condition is 200 < overbrightPix < 2500 and HContours_i < 0.25; when this condition is met, it is determined that there is an obstacle in the candidate weeding area image, and the obstacle may be a painted flagstone path covered by grass.
  • For example, if the preset third exposure state pixel number threshold is 500, the preset second chroma effective pixel proportion threshold is 0.29 and the preset second range threshold is 7700, the preset first information judgment condition is overbrightPix > 500 and SPContours_i < 0.29 and AContours_i.pixels > 7700; when this condition is met, it is determined that there is an obstacle in the candidate weeding area image, and the obstacle may be leaves or wood chips.
  • For example, if the preset fourth exposure state pixel number threshold is 400, the preset first non-exposure state white pixel threshold is 80, the preset second roughness threshold is 0.27 and the preset third range threshold is 6900, the preset first information judgment condition is overbrightPix > 400 and zeroPix > 80 and HContours_i < 0.27 and AContours_i.pixels > 6900; when this condition is met, it is determined that there is an obstacle in the candidate weeding area image.
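  • Collecting the three example branches of the preset first information judgment condition into one check gives a sketch like the following; the function and argument names are illustrative, and the thresholds are the example values quoted above:

        def first_condition_obstacle(overbright_pix, zero_pix, h_contour, sp_contour, area_pixels):
            # Painted flagstone path covered by grass.
            painted_flagstone = 200 < overbright_pix < 2500 and h_contour < 0.25
            # Leaves or wood chips.
            leaves_or_wood = overbright_pix > 500 and sp_contour < 0.29 and area_pixels > 7700
            # Remaining branch of the example condition.
            other_obstacle = (overbright_pix > 400 and zero_pix > 80
                              and h_contour < 0.27 and area_pixels > 6900)
            return painted_flagstone or leaves_or_wood or other_obstacle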
  • Step 250 If not, determine whether there is an obstacle in the image of the candidate weeding area according to the outline information, the brightness information and the preset second information judgment condition.
  • When the number of chroma effective pixels is greater than the preset effective pixel number threshold and the proportion of chroma effective pixels is greater than the preset chroma effective pixel proportion threshold, whether there is an obstacle in the candidate weeding area image is determined according to the outline information, the brightness information and the preset second information judgment condition. That is, further analysis is performed on those candidate obstacle areas, selected from the candidate obstacle areas likely to be weeding area, that are relatively less likely to be obstacles.
  • the preset second information judgment condition may be adjusted according to a specific judgment situation, which is not limited in this embodiment.
  • In this embodiment, optionally, determining whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and the preset second information judgment condition includes: if the number of pixels in the exposure state is less than a preset fifth exposure state pixel number threshold and greater than a preset sixth exposure state pixel number threshold, the proportion of chroma effective pixels is less than a preset third chroma effective pixel proportion threshold, the range information is greater than a preset fourth range threshold, the roughness information is less than a preset third roughness threshold, and the number of white pixels in the non-exposure state is greater than a preset second non-exposure state white pixel threshold, determining that there is an obstacle in the candidate weeding area image; and if the number of pixels in the exposure state is less than a preset seventh exposure state pixel number threshold and greater than a preset eighth exposure state pixel number threshold, the proportion of chroma effective pixels is less than a preset fourth chroma effective pixel proportion threshold, and the roughness information is less than a preset fourth roughness threshold, determining that there is an obstacle in the candidate weeding area image. If the outline information and the brightness information satisfy the preset second information judgment condition, it is determined that there is an obstacle in the candidate weeding area image.
  • Exemplarily, if the preset fifth exposure state pixel number threshold is 300, the preset sixth exposure state pixel number threshold is 400, the preset third chroma effective pixel proportion threshold is 0.6, the preset fourth range threshold is 8000, the preset third roughness threshold is 0.26 and the preset second non-exposure state white pixel threshold is 100, the preset second information judgment condition is 300 < overbrightPix < 400 and SPContours_i < 0.6 and AContours_i.pixels > 8000 and HContours_i < 0.26 and zeroPix > 100; when this condition is met, it is determined that there is an obstacle in the candidate weeding area image, and the obstacle may be a flagstone path with grass growing from the gaps.
  • For example, if the preset seventh exposure state pixel number threshold is 200, the preset eighth exposure state pixel number threshold is 2500, the preset fourth chroma effective pixel proportion threshold is 0.6 and the preset fourth roughness threshold is 0.23, the preset second information judgment condition is 200 < overbrightPix < 2500 and SPContours_i < 0.6 and HContours_i < 0.23; when this condition is met, it is determined that there is an obstacle in the candidate weeding area image.
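  • Likewise, the two example branches of the preset second information judgment condition can be sketched as one check (illustrative names, example thresholds as quoted above):

        def second_condition_obstacle(overbright_pix, zero_pix, h_contour, sp_contour, area_pixels):
            # Flagstone path with grass growing from the gaps.
            grassy_flagstone = (300 < overbright_pix < 400 and sp_contour < 0.6
                                and area_pixels > 8000 and h_contour < 0.26 and zero_pix > 100)
            # Remaining branch of the example condition.
            generic_obstacle = 200 < overbright_pix < 2500 and sp_contour < 0.6 and h_contour < 0.23
            return grassy_flagstone or generic_obstacle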
  • In the embodiment of the present invention, whether there is an obstacle in the candidate weeding area image is determined according to outline information such as the number of chroma effective pixels, the proportion of chroma effective pixels, roughness information and range information, together with the brightness information. This improves the recognition accuracy for candidate obstacle areas that are obstacles or boundary areas whose surface is covered with grass, or that are mottled by lighting or other causes and are therefore difficult to classify as obstacle areas.
  • FIG. 3 is a schematic structural diagram of an obstacle identification device according to Embodiment 3 of the present invention.
  • the device can be implemented in hardware and/or software, can execute an obstacle identification method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the execution method.
  • the device includes:
  • a candidate obstacle area determination module 310 configured to determine a candidate obstacle area in the candidate weeding area image according to the color information of the candidate weeding area image;
  • an information acquisition module 320 configured to acquire the outline information of the candidate obstacle area and the brightness information of the image of the candidate weeding area
  • the obstacle determination module 330 is configured to determine whether there is an obstacle in the image of the candidate weeding area according to the outline information and the brightness information.
  • In the technical solution provided by this embodiment, the candidate obstacle area in the candidate weeding area image is determined according to the color information of the candidate weeding area image; the outline information of the candidate obstacle area and the brightness information of the candidate weeding area image are acquired; and whether there is an obstacle in the candidate weeding area image is determined according to the outline information and the brightness information. This solves the problem in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by burying a boundary line, which consumes a lot of manpower and material resources and increases cost.
  • the effect of improving the identification efficiency and accuracy of obstacles in the candidate weeding area of the weeding robot is achieved.
  • the candidate obstacle area determination module includes:
  • a color segmentation image acquisition unit configured to acquire a color segmentation image of the candidate weeding area image according to the color information of the candidate weeding area image
  • a candidate obstacle area determination unit configured to perform morphological processing on the color segmentation image, and determine a preset color area as a candidate obstacle area from the morphologically processed color segmentation image.
  • the outline information of the candidate obstacle area includes: at least one of the number of chroma effective pixels, the proportion of chroma effective pixels, roughness information and range information;
  • the brightness information includes: the number of pixels in the exposed state and/or the number of white pixels in the non-exposed state;
  • Correspondingly, the obstacle determination module includes:
  • a proportion judgment unit configured to judge, if the number of chroma effective pixels is greater than the preset effective pixel number threshold and the range information is greater than the preset first range threshold, whether the proportion of chroma effective pixels is less than the preset first chroma effective pixel proportion threshold;
  • a first obstacle determination unit configured to determine, if the proportion judgment unit judges yes, whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and the preset first information judgment condition;
  • a second obstacle determination unit configured to determine, if the proportion judgment unit judges no, whether there is an obstacle in the candidate weeding area image according to the outline information, the brightness information and the preset second information judgment condition.
  • Optionally, the first obstacle determination unit includes:
  • a first obstacle determination subunit configured to determine that there is an obstacle in the candidate weeding area image if the number of pixels in the exposure state is less than the preset first exposure state pixel number threshold and greater than the preset second exposure state pixel number threshold, and the roughness information is less than the preset first roughness threshold;
  • a second obstacle determination subunit configured to determine that there is an obstacle in the candidate weeding area image if the number of pixels in the exposure state is greater than the preset third exposure state pixel number threshold, the proportion of chroma effective pixels is less than the preset second chroma effective pixel proportion threshold, and the range information is greater than the preset second range threshold;
  • a third obstacle determination subunit configured to determine that there is an obstacle in the candidate weeding area image if the number of pixels in the exposure state is greater than the preset fourth exposure state pixel number threshold, the number of white pixels in the non-exposure state is greater than the preset first non-exposure state white pixel threshold, the roughness information is less than the preset second roughness threshold, and the range information is greater than the preset third range threshold.
  • Optionally, the second obstacle determination unit includes:
  • a fourth obstacle determination subunit configured to determine that there is an obstacle in the candidate weeding area image if the number of pixels in the exposure state is less than the preset fifth exposure state pixel number threshold and greater than the preset sixth exposure state pixel number threshold, the proportion of chroma effective pixels is less than the preset third chroma effective pixel proportion threshold, the range information is greater than the preset fourth range threshold, the roughness information is less than the preset third roughness threshold, and the number of white pixels in the non-exposure state is greater than the preset second non-exposure state white pixel threshold;
  • a fifth obstacle determination subunit configured to determine that there is an obstacle in the candidate weeding area image if the number of pixels in the exposure state is less than the preset seventh exposure state pixel number threshold and greater than the preset eighth exposure state pixel number threshold, the proportion of chroma effective pixels is less than the preset fourth chroma effective pixel proportion threshold, and the roughness information is less than the preset fourth roughness threshold.
  • FIG. 4 is a schematic structural diagram of an electronic device according to Embodiment 4 of the present invention.
  • As shown in FIG. 4, the electronic device includes a processor 40, a memory 41, an input device 42 and an output device 43. The number of processors 40 in the electronic device can be one or more, and one processor 40 is taken as an example in FIG. 4. The processor 40, memory 41, input device 42 and output device 43 in the electronic device can be connected by a bus or in other ways; in FIG. 4, connection by a bus is taken as an example.
  • the memory 41 can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the obstacle identification method in the embodiment of the present invention.
  • the processor 40 executes various functional applications and data processing of the electronic device by running the software programs, instructions and modules stored in the memory 41 , that is, to implement the above-mentioned obstacle identification method.
  • the memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like.
  • memory 41 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • memory 41 may further include memory located remotely from processor 40, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • Embodiment 5 of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform an obstacle identification method, the method including: determining a candidate obstacle area in a candidate weeding area image according to color information of the candidate weeding area image; acquiring outline information of the candidate obstacle area and brightness information of the candidate weeding area image; and determining, according to the outline information and the brightness information, whether there is an obstacle in the candidate weeding area image.
  • Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also perform related operations of the obstacle identification method provided by any embodiment of the present invention.
  • From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, and of course can also be implemented by hardware, although in many cases the former is the better implementation.
  • Based on this understanding, the technical solutions of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
  • It should be noted that, in the above embodiment of the obstacle identification device, the included units and modules are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention.
  • the sixth embodiment of the present invention provides a weeding robot, which includes a robot body, and also includes the electronic device described in any embodiment of the present invention.
  • the electronic device installed on the weeding robot can perform the relevant operations of the obstacle identification method described in any embodiment of the present invention.
  • the robot body may include two driving wheels on the left and right, which may be driven by a motor respectively, and the motor may be a brushless motor with a gear box and a Hall sensor.
  • the robot body realizes driving operations such as forward, backward, turning and arc by controlling the speed and direction of the two driving wheels.
  • the robot body also includes a universal wheel, a camera and a rechargeable battery, wherein the universal wheel plays a supporting and balancing role.
  • the camera is installed at the designated position of the robot and forms a preset angle with the horizontal direction to capture the image of the candidate weeding area.
  • the rechargeable battery is used to provide power for the robot to work.

Abstract

An obstacle recognition method, apparatus, device, medium and weeding robot. The method comprises: determining a candidate obstacle area in a candidate weeding area image according to color information of the candidate weeding area image (110); acquiring outline information of the candidate obstacle area and brightness information of the candidate weeding area image (120); and determining, according to the outline information and the brightness information, whether there is an obstacle in the candidate weeding area image (130). The above method solves the problems that the boundary of the weeding area of a weeding robot is usually calibrated by burying a boundary line, which consumes a lot of manpower and material resources and increases cost, and that restrictions on burying the boundary line limit the shape of the weeding area to a certain extent, thereby improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.

Description

一种障碍物识别方法、装置、设备、介质及除草机器人 技术领域
本发明实施例涉及图像处理技术,尤其涉及一种障碍物识别方法、装置、设备、介质及除草机器人。
背景技术
随着生活水平的提高,人们日益关注环境建设,因此城市绿化园林的建设愈发受到重视。与此同时,高效的绿化养护,如日常除草等,逐渐成为了一种需求。但由于传统除草机需要人工操控,因此具有自主工作功能的除草机器人逐渐兴起。
现有技术中,通常采用埋设边界线的方式对除草机器人的除草区域的边界进行标定,耗费大量的人力和物力,增加了成本。并且由于对边界线的埋设存在限制,例如拐角的角度不能小于90度,因此一定程度上限制了除草区域的形状。
发明内容
本发明实施例提供一种障碍物识别方法、装置、设备、介质及除草机器人,以实现提高除草机器人的候选除草区域中障碍物的识别效率和准确率的效果。
第一方面,本发明实施例提供了一种障碍物识别方法,该方法包括:
根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;
获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;
根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物。
第二方面,本发明实施例还提供了一种障碍物识别装置,该装置包括:
候选障碍物区域确定模块,用于根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;
信息获取模块,用于获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;
障碍物确定模块,用于根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物。
第三方面,本发明实施例还提供了一种电子设备,该电子设备包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序,
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如上所述的障碍物识别方法。
第四方面,本发明实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上所述的障碍物识别方法。
第五方面,本发明实施例还提供了一种除草机器人,包括机器人本体,还包括上述的电子设备。
本发明实施例通过根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;根据所述轮廓信息和所述明度信息,确定所述候选 除草区域图像中是否存在障碍物,解决现有技术中通常采用埋设边界线的方式对除草机器人的除草区域的边界进行标定,耗费大量的人力和物力,增加了成本。并且由于对边界线的埋设存在限制一定程度上限制了除草区域的形状的问题,实现提高除草机器人的候选除草区域中障碍物的识别效率和准确率的效果。
附图说明
图1为本发明实施例一提供的一种障碍物识别方法的流程图;
图2为本发明实施例二提供的一种障碍物识别方法的流程图;
图3为本发明实施例三提供的一种障碍物识别装置的结构示意图;
图4为本发明实施例四提供的一种电子设备的结构示意图。
具体实施方式
下面结合附图和实施例对本发明作进一步的详细说明。可以理解的是,此处所描述的具体实施例仅仅用于解释本发明,而非对本发明的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与本发明相关的部分而非全部结构。
实施例一
图1为本发明实施例一提供的一种障碍物识别方法的流程图,本实施例可适用于除草机器人识别候选除草区域中障碍物的情况,该方法可以由本发明实施例所提供的障碍物识别装置来执行,该装置可以由软件和/或硬件的方式实现。参见图1,本实施例提供的障碍物识别方法,包括:
步骤110、根据候选除草区域图像的颜色信息确定所述候选除草区域图像中 的候选障碍物区域。
其中,候选除草区域为除草机器人可能的工作区域,可能全为待除的杂草,即为除草区域;也可能为区域表面覆盖有草的障碍物或边界区域,或者由于光照等原因造成斑驳导致难以识别的除草区域。
候选除草区域图像可以由安装在除草机器人上的摄像机进行拍摄,本实施例对此不作限制。候选除草区域图像的颜色信息可以为图像的色调Hue、饱和度Saturation、明度Value等信息,本实施例对此不作限制。通过颜色信息从候选除草区域图像中确定可能为障碍物的区域,示例性的,根据颜色信息确定图像中与周围颜色相差较大的区域作为候选障碍物区域。
本实施例中,可选的,根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域,包括:
根据所述候选除草区域图像的颜色信息获取所述候选除草区域图像的颜色分割图像;
对所述颜色分割图像进行形态学处理,并从形态学处理后的颜色分割图像中确定预设颜色的区域为候选障碍物区域。
其中,根据候选除草区域图像的颜色信息获取候选除草区域图像的颜色分割图像,可以根据颜色动态分割、边缘纹理方法分割、固定阈值分割、大津阈值分割等方式进行颜色分割,以获得候选除草区域图像的颜色分割图像。其中,颜色分割图像可以为二值图像。再对颜色分割图像进行形态学处理,其中,形态学处理可以为取反操作、开操作、闭操作等,本实施例对此不作限制。
从形态学处理后的颜色分割图像中确定预设颜色的区域为候选障碍物区域。例如将黑色区域作为除草区域,白色区域作为候选障碍物区域,以便进一步对 候选障碍物区域进行识别。通过获取颜色分割图像对候选除草区域进行初步分类,并对颜色分割图像进行形态学处理以提高候选障碍物区域确定的准确性。
步骤120、获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息。
其中,轮廓信息为单个候选障碍物区域的轮廓信息,获取方式可以为对候选障碍物区域进行轮廓检测。轮廓信息可以包括轮廓的色度信息、粗糙度信息、范围信息等,本实施例对此不作限制。其中,粗糙度信息用于显示候选障碍物区域边缘的粗糙程度,可以为平均粗糙度,平均粗糙度的计算方式可以为将单个候选障碍物区域轮廓中障碍物区域所有像素的粗糙度之和除以轮廓中障碍物区域像素总个数。
本实施例中,可选的,确定粗糙度信息,包括:
获取所述候选除草区域图像的明度通道图像;
通过对所述明度通道图像进行边缘提取,获得边缘图像;
根据所述边缘图像的边缘信息中像素的灰度值确定所述粗糙度信息。
通过对候选除草区域图像进行通道分离,获取候选除草区域图像的明度通道图像,可选的,对明度通道图像进行预处理,预处理可以包括滤波处理、归一化处理等,本实施例对此不作限制。对预处理后的明度通道图像进行边缘提取,获得边缘图像,可以采用canny算子进行边缘提取,以提高边缘图像中边缘信息获取的准确率。
通过候选障碍物区域相应位置的边缘图像中的边缘信息确定候选障碍物区域的粗糙度信息,边缘信息包括候选障碍物区域轮廓中像素的灰度值。当粗糙度信息为平均粗糙度时,可以通过将轮廓中障碍物区域灰度值等于255的像素 个数除以轮廓中障碍物区域的像素总个数,得到候选障碍物区域边缘的平均粗糙度。
通过边缘图像的边缘信息中像素的灰度值确定粗糙度信息,提高粗糙度信息获取的准确率,从而提高障碍物识别的准确率。
候选除草区域图像的明度信息为候选除草区域图像整体的明度信息,可以通过获取候选除草区域图像的明度通道图像以获取图中与明度相关的信息。通过明度信息判断候选障碍物区域与光照之间的关系。
本实施例中,可选的,所述候选除草区域图像的明度信息包括:曝光状态像素个数和非曝光状态白色像素个数;
相应的,获取所述候选除草区域图像的明度信息,包括:
根据所述候选除草区域图像中像素的明度值,获取所述曝光状态像素个数;
根据所述候选除草区域图像中像素的明度值和色度值,获取所述非曝光状态白色像素个数。
其中,曝光状态像素个数为候选除草区域图像中处于曝光状态的像素的个数。非曝光状态白色像素为候选除草区域图像中不处于曝光状态但呈现白色的像素的个数,例如候选除草区域图像中本身为白色的障碍物的像素。
根据候选除草区域图像中像素的明度值,获取曝光状态像素个数,获取方式可以为通过获取候选除草区域图像的明度通道图像,统计其中明度值大于等于预设阈值,例如255,的像素个数,作为曝光状态像素个数。
根据候选除草区域图像中像素的明度和色度,获取非曝光状态白色像素个数。获取方式可以为通过获取候选除草区域图像的明度通道图像和色度通道图像,统计在明度通道图像中明度值小于预设阈值,例如255,且色度通道图像 中色度值为小于等于预设阈值,例如0,的像素个数,作为非曝光状态白色像素个数。
通过将明度信息分为曝光状态像素个数和非曝光状态白色像素个数,以及根据像素的明度值和/或色度值确定明度信息。提高后续区分候选障碍物区域中光照等原因造成斑驳的除草区域和本身为白色的障碍物区域的准确性。
步骤130、根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物。
将候选障碍物区域的轮廓信息和候选除草区域图像的明度信息与预设信息判断条件进行对比。其中,预设信息判断条件与轮廓信息和明度信息相关。若满足预设信息条件,则确定候选障碍物区域为障碍物区域,即确定障碍物区域所在的候选除草区域图像中存在障碍物,以便除草机器人进行后续障碍物处理。
示例性的,识别出的障碍物为区域表面覆盖有草的障碍物。
本实施例所提供的技术方案,通过根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物,解决现有技术中通常采用埋设边界线的方式对除草机器人的除草区域的边界进行标定,耗费大量的人力和物力,增加了成本。并且由于对边界线的埋设存在限制一定程度上限制了除草区域的形状的问题,实现提高除草机器人的候选除草区域中障碍物的识别效率和准确率的效果。
实施例二
图2为本发明实施例二提供的一种障碍物识别方法的流程图,本技术方案 是针对根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物的过程进行补充说明的。与上述方案相比,本方案具体优化为,所述候选障碍物区域的轮廓信息包括:色度有效像素数量、色度有效像素占比、粗糙度信息和范围信息中的至少一种,所述明度信息包括:曝光状态像素个数和/或非曝光状态白色像素个数;,则根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物,包括:
若所述色度有效像素数量大于预设有效像素数量阈值且所述范围信息大于预设第一范围阈值,判断所述色度有效像素占比是否小于预设第一色度有效像素占比阈值;
若是,则根据所述轮廓信息、所述明度信息和预设第一信息判断条件确定所述候选除草区域图像中是否存在障碍物;
若否,则根据所述轮廓信息、所述明度信息和预设第二信息判断条件确定所述候选除草区域图像中是否存在障碍物。具体的,障碍物识别的流程图如图2所示:
步骤210、根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域。
步骤220、获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;所述候选障碍物区域的轮廓信息包括:色度有效像素数量、色度有效像素占比、粗糙度信息和范围信息中的至少一种;所述明度信息包括:曝光状态像素个数和/或非曝光状态白色像素个数。
其中,色度有效像素数量为单个候选障碍物区域的轮廓中障碍物区域处于有效像素值色度范围的像素个数。色度有效像素占比为单个候选障碍物区域的 轮廓中障碍物区域处于有效像素值色度范围的像素占该轮廓中障碍物区域全部像素的比例。范围信息用于表示候选障碍物区域的大小,可以为面积、对角线长度,宽度、高度、包含像素个数等,本实施例对此不作限制。
本实施例中,可选的,获取所述色度有效像素数量和/或所述色度有效像素占比,包括:
根据所述候选除草区域图像的颜色信息确定所述候选除草区域图像的色度分割阈值区间;
根据所述色度分割阈值区间和所述候选障碍物区域中像素的色度值,获取所述色度有效像素数量和/或所述色度有效像素占比。
其中,根据候选除草区域图像的颜色信息获取候选除草区域图像的颜色分割图像,可以根据颜色动态分割、边缘纹理方法分割、固定阈值分割、大津阈值分割等方式进行颜色分割,获得候选除草区域图像的色度分割阈值区间,以根据色度分割阈值区间将候选除草区域图像转换为颜色分割图像。
将色度分割阈值区间作为有效像素值色度范围,将像素的色度值处于色度分割阈值区间的像素作为色度有效像素。将单个候选障碍物区域的轮廓中障碍物区域色度有效像素的个数作为色度有效像素数量,将单个候选障碍物区域的轮廓中障碍物区域色度有效像素占该轮廓中障碍物区域全部像素的比例作为色度有效像素占比。
通过根据候选除草区域图像确定的色度分割阈值区间,获取色度有效像素数量和/或色度有效像素占比,提高有效像素数量和/或色度有效像素占比与候选除草区域图像本身的关联程度,从而提高后续候选除草区域中障碍物的识别准确率。
步骤230、若所述色度有效像素数量大于预设有效像素数量阈值且所述范围信息大于预设第一范围阈值,判断所述色度有效像素占比是否小于预设第一色度有效像素占比阈值。
其中,预设第一有效像素数量阈值可以为经验值,本实施例对此不作限制。当色度有效像素数量大于预设有效像素数量阈值,表明候选障碍物区域是除草区域图像的可能性变大,需要进一步判断该候选障碍物区域是除草区域还是颜色类似除草区域的障碍物区域,且范围信息大于预设第一范围阈值,避免处理过小的候选障碍物区域,。
当满足上述条件时,再判断色度有效像素占比是否小于预设色度有效像素占比阈值,即在从除草区域的可能性较大的候选障碍物区域中确定是障碍物相对可能性较大的候选障碍物区域。
可选的,轮廓信息还包括位置信息。位置信息用于显示候选障碍物区域与除草机器人之间的距离,例如为候选障碍物区域最小外接矩形右下角的y轴坐标值,当然也可以选取其他有代表性的坐标值,本实施例对此不作限制。在通过色度有效像素数量和范围信息对候选障碍物区域判断的基础上,通过位置信息进一步判断,避免处理过远的候选障碍物区域,减少了处理的数据,提高了障碍物识别的效率。
例如若色度有效像素数量大于预设有效像素数量阈值、范围信息超过预设第一范围阈值,且位置信息大于预设位置阈值,则再判断色度有效像素占比是否在预设色度有效像素占比阈值的范围内。
示例性的,色度有效像素数量为SContours i,其中i为候选障碍物区域的编号,色度有效像素占比为SPContours i,范围信息为候选障碍物区域的对角线长 度AContours i.diagonal或者候选障碍物区域的高度AContours i.height,位置信息为候选障碍物区域最小外接矩形右下角的y轴坐标值YContours i,坐标值越大表明候选障碍物区域离机器人越近。若预设色度阈值为a,预设色度有效像素占比阈值为0.4,预设第一范围阈值的对角线阈值为105,高度阈值为70,预设位置阈值为75,则判断条件为:若SContours i>a且AContours i.diagonal>105且YContours i>75;或SContours i>a且AContours i.height>70且YContoursi>75。则再判断,是否SPContours i<0.4
步骤240、若是,则根据所述轮廓信息、所述明度信息和预设第一信息判断条件确定所述候选除草区域图像中是否存在障碍物。
当色度有效像素数量大于预设有效像素数量阈值,且范围信息大于预设第一范围阈值,且色度有效像素占比小于预设第一色度有效像素占比阈值时,根据轮廓信息、明度信息和预设第一信息判断条件确定候选除草区域图像中是否存在障碍物。即对从除草区域的可能性较大的候选障碍物区域中确定的是障碍物相对可能性较大的候选障碍物区域进行进一步分析。其中,预设第一信息判断条件可以根据具体判断情景进行调整,本实施例对此不作限制。
本实施例中,可选的,根据所述轮廓信息、所述明度信息和预设第一信息判断条件确定所述候选除草区域图像中是否存在障碍物,包括:
若所述曝光状态像素个数小于预设第一曝光状态像素个数阈值且大于预设第二曝光状态像素个数阈值,且所述粗糙度信息小于预设第一粗糙度阈值,则确定所述候选除草区域图像中存在障碍物;
若所述曝光状态像素个数大于预设第三曝光状态像素个数阈值,且所述色度有效像素占比小于预设第二色度有效像素占比阈值,且所述范围信息大于预 设第二范围阈值,则确定所述候选除草区域图像中存在障碍物;
若所述曝光状态像素个数大于预设第四曝光状态像素个数阈值,且所述非曝光状态白色像素个数大于预设第一非曝光状态白色像素阈值,且所述粗糙度信息小于预设第二粗糙度阈值,且所述范围信息大于预设第三范围阈值,则确定所述候选除草区域图像中存在障碍物。
若轮廓信息和明度信息满足预设第一信息判断条件,则确定候选除草区域图像中存在障碍物。
示例性的,粗糙度信息为候选障碍物区域轮廓的平均粗糙度HContours i,平均粗糙度越小,表明候选障碍物区域越光滑,范围信息为候选障碍物区域中像素个数AContours i.pixels,曝光状态像素个数overbrightPix和非曝光状态白色像素个数zeroPix。
示例性的,预设第一曝光状态像素个数阈值为200,预设第二曝光状态像素个数阈值为2500,预设第一粗糙度阈值为0.25,则预设第一信息判断条件为200<overbrightPix<2500且HContours i<0.25,则满足该条件时,确定候选除草区域图像中存在障碍物,障碍物可能为被草覆盖的有漆的石板路。
例如预设第三曝光状态像素个数阈值为500,预设第二色度有效像素占比阈值为0.29,且预设第二范围阈值为7700,则预设第一信息判断条件为500<overbrightPix且SPContours i<0.29且AContours i.pixels>7700,则满足该条件时,确定候选除草区域图像中存在障碍物,障碍物可能为树叶或者木渣。
例如预设第四曝光状态像素个数阈值为400,预设第一非曝光状态白色像素阈值为80,预设第二粗糙度阈值为0.27,预设第三范围阈值为6900,则预设第一信息判断条件为400<overbrightPix且zeroPix>80且HContours i<0.27且 AContours i.pixels>6900,则满足该条件时,确定候选除草区域图像中存在障碍物。
步骤250、若否,则根据所述轮廓信息、所述明度信息和预设第二信息判断条件确定所述候选除草区域图像中是否存在障碍物。
当色度有效像素数量大于预设有效像素数量阈值,且色度有效像素占比大于预设色度有效像素占比阈值时,根据轮廓信息、明度信息和预设第二信息判断条件确定候选除草区域图像中是否存在障碍物。即对从除草区域的可能性较大的候选障碍物区域中确定的是障碍物相对可能性较小的候选障碍物区域进行进一步分析。其中,预设第二信息判断条件可以根据具体判断情景进行调整,本实施例对此不作限制。
本实施例中间,可选的,根据所述轮廓信息、所述明度信息和预设第二信息判断条件确定所述候选除草区域图像中是否存在障碍物,包括:
若所述曝光状态像素个数小于预设第五曝光状态像素个数阈值且大于预设第六曝光状态像素个数阈值,且所述色度有效像素占比小于预设第三色度有效像素占比阈值,且所述范围信息大于预设第四范围阈值,且所述粗糙度信息小于预设第三粗糙度阈值,且所述非曝光状态白色像素个数大于预设第二非曝光状态白色像素阈值,则确定所述候选除草区域图像中存在障碍物;
若所述曝光状态像素个数小于预设第七曝光状态像素个数阈值且大于预设第八曝光状态像素个数阈值,且所述色度有效像素占比小于预设第四色度有效像素占比阈值,且所述粗糙度信息小于预设第四粗糙度阈值,则确定所述候选除草区域图像中存在障碍物。
若轮廓信息和明度信息满足预设第二信息判断条件,则确定候选除草区域图像中存在障碍物。
示例性的,预设第五曝光状态像素个数阈值为300,预设第六曝光状态像素个数阈值为400,预设第三色度有效像素占比阈值为0.6,预设第四范围阈值为8000,预设第三粗糙度阈值为0.26,预设第二非曝光状态白色像素阈值为100,则预设第二信息判断条件为300<overbrightPix<400且SPContours i<0.6且AContours i.pixels>8000且HContours i<0.26且zeroPix>100,则满足该条件时,确定候选除草区域图像中存在障碍物,障碍物可能为从缝隙中长出草的石板路。
例如,预设第七曝光状态像素个数阈值为200,预设第八曝光状态像素个数阈值为2500,预设第四色度有效像素占比阈值为0.6,预设第四粗糙度阈值为0.23,则预设第二信息判断条件为200<overbrightPix<2500且SPContours i<0.6且HContours i<0.23,则满足该条件时,确定候选除草区域图像中存在障碍物。
本发明实施例通过根据色度有效像素数量、色度有效像素占比、粗糙度信息、范围信息等轮廓信息和明度信息,确定候选除草区域图像中是否存在障碍物,提高对于区域表面覆盖有草的障碍物或边界区域,或者由于光照等原因造成斑驳导致难以确定是否为障碍物区域的候选障碍物区域识别的准确率。
实施例三
图3为本发明实施例三提供的一种障碍物识别装置的结构示意图。该装置可以由硬件和/或软件的方式来实现,可执行本发明任意实施例所提供的一种障碍物识别方法,具备执行方法相应的功能模块和有益效果。如图3所示,该装置包括:
候选障碍物区域确定模块310,用于根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;
信息获取模块320,用于获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;
障碍物确定模块330,用于根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物。
本实施例所提供的技术方案,通过根据候选除草区域图像的颜色信息确定所述候选除草区域图像中的候选障碍物区域;获取所述候选障碍物区域的轮廓信息和所述候选除草区域图像的明度信息;根据所述轮廓信息和所述明度信息,确定所述候选除草区域图像中是否存在障碍物,解决现有技术中通常采用埋设边界线的方式对除草机器人的除草区域的边界进行标定,耗费大量的人力和物力,增加了成本。并且由于对边界线的埋设存在限制一定程度上限制了除草区域的形状的问题,实现提高除草机器人的候选除草区域中障碍物的识别效率和准确率的效果。
在上述各技术方案的基础上,可选的,所述候选障碍物区域确定模块,包括:
颜色分割图像获取单元,用于根据所述候选除草区域图像的颜色信息获取所述候选除草区域图像的颜色分割图像;
候选障碍物区域确定单元,用于对所述颜色分割图像进行形态学处理,并从形态学处理后的颜色分割图像中确定预设颜色的区域为候选障碍物区域。
On the basis of the above technical solutions, optionally, the contour information of the candidate obstacle region includes at least one of: the number of chroma valid pixels, the chroma valid pixel ratio, roughness information and range information; and the brightness information includes the number of exposure-state pixels and/or the number of non-exposure-state white pixels.
Correspondingly, the obstacle determination module includes:
a ratio judgment unit, configured to judge, if the number of chroma valid pixels is greater than the preset valid pixel count threshold and the range information is greater than the preset first range threshold, whether the chroma valid pixel ratio is less than the preset first chroma valid pixel ratio threshold;
a first obstacle determination unit, configured to determine, if the ratio judgment unit judges yes, whether an obstacle exists in the candidate weeding area image according to the contour information, the brightness information and the preset first information judgment condition; and
a second obstacle determination unit, configured to determine, if the ratio judgment unit judges no, whether an obstacle exists in the candidate weeding area image according to the contour information, the brightness information and the preset second information judgment condition.
On the basis of the above technical solutions, optionally, the first obstacle determination unit includes: a first obstacle determination sub-unit, configured to determine that an obstacle exists in the candidate weeding area image if the number of exposure-state pixels is less than the preset first exposure-state pixel count threshold and greater than the preset second exposure-state pixel count threshold, and the roughness information is less than the preset first roughness threshold;
a second obstacle determination sub-unit, configured to determine that an obstacle exists in the candidate weeding area image if the number of exposure-state pixels is greater than the preset third exposure-state pixel count threshold, the chroma valid pixel ratio is less than the preset second chroma valid pixel ratio threshold, and the range information is greater than the preset second range threshold; and
a third obstacle determination sub-unit, configured to determine that an obstacle exists in the candidate weeding area image if the number of exposure-state pixels is greater than the preset fourth exposure-state pixel count threshold, the number of non-exposure-state white pixels is greater than the preset first non-exposure-state white pixel threshold, the roughness information is less than the preset second roughness threshold, and the range information is greater than the preset third range threshold.
On the basis of the above technical solutions, optionally, the second obstacle determination unit includes:
a fourth obstacle determination sub-unit, configured to determine that an obstacle exists in the candidate weeding area image if the number of exposure-state pixels is less than the preset fifth exposure-state pixel count threshold and greater than the preset sixth exposure-state pixel count threshold, the chroma valid pixel ratio is less than the preset third chroma valid pixel ratio threshold, the range information is greater than the preset fourth range threshold, the roughness information is less than the preset third roughness threshold, and the number of non-exposure-state white pixels is greater than the preset second non-exposure-state white pixel threshold; and
a fifth obstacle determination sub-unit, configured to determine that an obstacle exists in the candidate weeding area image if the number of exposure-state pixels is less than the preset seventh exposure-state pixel count threshold and greater than the preset eighth exposure-state pixel count threshold, the chroma valid pixel ratio is less than the preset fourth chroma valid pixel ratio threshold, and the roughness information is less than the preset fourth roughness threshold. The overall decision flow of the obstacle determination module is sketched below.
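To make the module interplay concrete, the sketch below strings together the ratio judgment unit and the two determination units, reusing the meets_first_condition and meets_second_condition sketches given earlier. The dictionary keys and the three parameterized thresholds are assumptions for illustration; the description does not fix numeric values for them at this point.

```python
def has_obstacle(contour_info, brightness_info,
                 valid_pixel_count_th, first_range_th, first_ratio_th):
    """Illustrative decision flow of the obstacle determination module."""
    n_valid = contour_info["chroma_valid_pixel_count"]
    ratio = contour_info["chroma_valid_pixel_ratio"]     # SPContours i
    roughness = contour_info["roughness"]                # HContours i
    area = contour_info["range"]                         # AContours i.pixels
    overbright = brightness_info["overbrightPix"]
    zero_pix = brightness_info["zeroPix"]

    # Ratio judgment unit: only regions with enough chroma valid pixels and a
    # large enough range are analysed further.
    if n_valid > valid_pixel_count_th and area > first_range_th:
        if ratio < first_ratio_th:
            # First obstacle determination unit (preset first condition).
            return meets_first_condition(overbright, zero_pix, roughness, ratio, area)
        # Second obstacle determination unit (preset second condition).
        return meets_second_condition(overbright, zero_pix, roughness, ratio, area)
    return False
```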
Embodiment 4
FIG. 4 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention. As shown in FIG. 4, the electronic device includes a processor 40, a memory 41, an input apparatus 42 and an output apparatus 43. There may be one or more processors 40 in the electronic device; one processor 40 is taken as an example in FIG. 4. The processor 40, the memory 41, the input apparatus 42 and the output apparatus 43 in the electronic device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 4.
As a computer-readable storage medium, the memory 41 may be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the obstacle recognition method in the embodiments of the present invention. By running the software programs, instructions and modules stored in the memory 41, the processor 40 executes the various functional applications and data processing of the electronic device, that is, implements the above obstacle recognition method.
The memory 41 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory 41 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some instances, the memory 41 may further include memories disposed remotely relative to the processor 40, and these remote memories may be connected to the electronic device via a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
Embodiment 5
Embodiment 5 of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used for executing an obstacle recognition method, and the method includes:
determining a candidate obstacle region in a candidate weeding area image according to color information of the candidate weeding area image;
acquiring contour information of the candidate obstacle region and brightness information of the candidate weeding area image; and
determining, according to the contour information and the brightness information, whether an obstacle exists in the candidate weeding area image.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above and may also execute related operations in the obstacle recognition method provided by any embodiment of the present invention.
From the above description of the implementations, those skilled in the art can clearly understand that the present invention may be implemented by means of software plus necessary general-purpose hardware, and certainly may also be implemented by hardware, though in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
It is worth noting that, in the above embodiment of the obstacle recognition apparatus, the included units and modules are divided only according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Embodiment 6
Embodiment 6 of the present invention provides a weeding robot, including a robot body and further including the electronic device according to any embodiment of the present invention.
Specifically, the electronic device mounted on the weeding robot can execute related operations of the obstacle recognition method according to any embodiment of the present invention.
The robot body may include left and right drive wheels, which may each be driven by a motor; the motors may be brushless motors with gearboxes and Hall sensors. By controlling the speed and direction of the two drive wheels, the robot body performs travelling operations such as moving forward, moving backward, turning and travelling in arcs (see the kinematic sketch below). The robot body further includes a universal wheel, a camera and a rechargeable battery, where the universal wheel plays a supporting and balancing role. The camera is mounted at a specified position on the robot at a preset angle to the horizontal direction so as to capture candidate weeding area images. The rechargeable battery supplies power for the robot to work.
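The travelling behaviour described here follows standard differential-drive kinematics. The sketch below is a generic illustration rather than anything taken from the patent; the function name, the wheel_base parameter and the units are assumptions.

```python
def body_velocity(v_left, v_right, wheel_base):
    """Differential-drive kinematics for a body with two independently driven wheels."""
    v = (v_left + v_right) / 2.0              # forward speed of the body
    omega = (v_right - v_left) / wheel_base   # yaw (turn) rate
    return v, omega

# Equal wheel speeds drive the body straight, opposite speeds turn it in place,
# and unequal same-sign speeds produce an arc.
```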
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, the present invention is not limited to the above embodiments and may also include more other equivalent embodiments without departing from the concept of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

  1. An obstacle recognition method, characterized by comprising:
    determining a candidate obstacle region in a candidate weeding area image according to color information of the candidate weeding area image;
    acquiring contour information of the candidate obstacle region and brightness information of the candidate weeding area image; and
    determining, according to the contour information and the brightness information, whether an obstacle exists in the candidate weeding area image.
  2. The method according to claim 1, characterized in that determining a candidate obstacle region in the candidate weeding area image according to color information of the candidate weeding area image comprises:
    acquiring a color-segmented image of the candidate weeding area image according to the color information of the candidate weeding area image; and
    performing morphological processing on the color-segmented image, and determining, from the morphologically processed color-segmented image, a region of a preset color as the candidate obstacle region.
  3. The method according to claim 1, characterized in that the contour information of the candidate obstacle region comprises at least one of: a number of chroma valid pixels, a chroma valid pixel ratio, roughness information and range information; and the brightness information comprises a number of exposure-state pixels and/or a number of non-exposure-state white pixels;
    correspondingly, determining, according to the contour information and the brightness information, whether an obstacle exists in the candidate weeding area image comprises:
    if the number of chroma valid pixels is greater than a preset valid pixel count threshold and the range information is greater than a preset first range threshold, judging whether the chroma valid pixel ratio is less than a preset first chroma valid pixel ratio threshold;
    if yes, determining, according to the contour information, the brightness information and a preset first information judgment condition, whether an obstacle exists in the candidate weeding area image; and
    if no, determining, according to the contour information, the brightness information and a preset second information judgment condition, whether an obstacle exists in the candidate weeding area image.
  4. The method according to claim 3, characterized in that determining, according to the contour information, the brightness information and the preset first information judgment condition, whether an obstacle exists in the candidate weeding area image comprises:
    if the number of exposure-state pixels is less than a preset first exposure-state pixel count threshold and greater than a preset second exposure-state pixel count threshold, and the roughness information is less than a preset first roughness threshold, determining that an obstacle exists in the candidate weeding area image;
    if the number of exposure-state pixels is greater than a preset third exposure-state pixel count threshold, the chroma valid pixel ratio is less than a preset second chroma valid pixel ratio threshold, and the range information is greater than a preset second range threshold, determining that an obstacle exists in the candidate weeding area image; and if the number of exposure-state pixels is greater than a preset fourth exposure-state pixel count threshold, the number of non-exposure-state white pixels is greater than a preset first non-exposure-state white pixel threshold, the roughness information is less than a preset second roughness threshold, and the range information is greater than a preset third range threshold, determining that an obstacle exists in the candidate weeding area image.
  5. The method according to claim 3, characterized in that determining, according to the contour information, the brightness information and the preset second information judgment condition, whether an obstacle exists in the candidate weeding area image comprises:
    if the number of exposure-state pixels is less than a preset fifth exposure-state pixel count threshold and greater than a preset sixth exposure-state pixel count threshold, the chroma valid pixel ratio is less than a preset third chroma valid pixel ratio threshold, the range information is greater than a preset fourth range threshold, the roughness information is less than a preset third roughness threshold, and the number of non-exposure-state white pixels is greater than a preset second non-exposure-state white pixel threshold, determining that an obstacle exists in the candidate weeding area image; and
    if the number of exposure-state pixels is less than a preset seventh exposure-state pixel count threshold and greater than a preset eighth exposure-state pixel count threshold, the chroma valid pixel ratio is less than a preset fourth chroma valid pixel ratio threshold, and the roughness information is less than a preset fourth roughness threshold, determining that an obstacle exists in the candidate weeding area image.
  6. An obstacle recognition apparatus, characterized by comprising:
    a candidate obstacle region determination module, configured to determine a candidate obstacle region in a candidate weeding area image according to color information of the candidate weeding area image;
    an information acquisition module, configured to acquire contour information of the candidate obstacle region and brightness information of the candidate weeding area image; and
    an obstacle determination module, configured to determine, according to the contour information and the brightness information, whether an obstacle exists in the candidate weeding area image.
  7. The apparatus according to claim 6, characterized in that the candidate obstacle region determination module comprises:
    a color-segmented image acquisition unit, configured to acquire a color-segmented image of the candidate weeding area image according to the color information of the candidate weeding area image; and
    a candidate obstacle region determination unit, configured to perform morphological processing on the color-segmented image and determine, from the morphologically processed color-segmented image, a region of a preset color as the candidate obstacle region.
  8. An electronic device, characterized in that the electronic device comprises:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    wherein when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the obstacle recognition method according to any one of claims 1-5.
  9. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the obstacle recognition method according to any one of claims 1-5.
  10. A weeding robot, comprising a robot body, characterized by further comprising the electronic device according to claim 8.
PCT/CN2020/132572 2020-11-09 2020-11-30 Obstacle recognition method, apparatus and device, medium, and weeding robot WO2022095171A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20960628.4A EP4242910A1 (en) 2020-11-09 2020-11-30 Obstacle recognition method, apparatus and device, medium, and weeding robot
US18/251,960 US20240013548A1 (en) 2020-11-09 2020-11-30 Obstacle recognition method, apparatus, and device, medium and weeding robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011241002.1 2020-11-09
CN202011241002.1A CN114494839A (zh) Obstacle recognition method, apparatus and device, medium, and weeding robot

Publications (1)

Publication Number Publication Date
WO2022095171A1 true WO2022095171A1 (zh) 2022-05-12

Family

ID=81458596

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132572 WO2022095171A1 (zh) 2020-11-09 2020-11-30 Obstacle recognition method, apparatus and device, medium, and weeding robot

Country Status (4)

Country Link
US (1) US20240013548A1 (zh)
EP (1) EP4242910A1 (zh)
CN (1) CN114494839A (zh)
WO (1) WO2022095171A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240069549A1 (en) * 2022-08-31 2024-02-29 Positec Power Tools (Suzhou) Co., Ltd. Island/border distinguishing for a robot lawnmower

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106155053A (zh) * 2016-06-24 2016-11-23 桑斌修 Mowing method, device and system
CN107139666B (zh) * 2017-05-19 2019-04-26 四川宝天智控系统有限公司 Obstacle-crossing recognition system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111653A (zh) * 2013-04-22 2014-10-22 苏州宝时得电动工具有限公司 Automatic walking device and working area judgment method thereof
CN104111460A (zh) * 2013-04-22 2014-10-22 苏州宝时得电动工具有限公司 Automatic walking device and obstacle detection method thereof
US20160259981A1 (en) * 2013-06-28 2016-09-08 Institute Of Automation, Chinese Academy Of Sciences Vehicle detection method based on hybrid image template
CN105701844A (zh) * 2016-01-15 2016-06-22 苏州大学 Obstacle or shadow detection method based on color features

Also Published As

Publication number Publication date
CN114494839A (zh) 2022-05-13
EP4242910A1 (en) 2023-09-13
US20240013548A1 (en) 2024-01-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20960628; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2020960628; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2020960628; Country of ref document: EP; Effective date: 20230609)