CN116824124A - Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium - Google Patents

Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium

Info

Publication number
CN116824124A
CN116824124A (application CN202210869099.3A)
Authority
CN
China
Prior art keywords
grassland
area
contour
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210869099.3A
Other languages
Chinese (zh)
Inventor
罗元泰
魏基栋
韩明名
王波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Songling Robot Chengdu Co ltd
Agilex Robotics Shenzhen Lt
Original Assignee
Songling Robot Chengdu Co ltd
Agilex Robotics Shenzhen Lt
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Songling Robot Chengdu Co ltd, Agilex Robotics Shenzhen Lt filed Critical Songling Robot Chengdu Co ltd
Priority to CN202210869099.3A priority Critical patent/CN116824124A/en
Publication of CN116824124A publication Critical patent/CN116824124A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a grassland boundary detection method, a grassland boundary detection device, a mowing robot and a storage medium. The method includes the following steps: performing semantic segmentation on an area image containing a grassland area to obtain a target image corresponding to the grassland area; performing binarization processing on the target image and extracting grassland contours corresponding to the grassland area from the processed image; selecting a target grassland contour from the grassland contours based on the contour area corresponding to each grassland contour; and traversing the target grassland contour according to preset interest point information and preset traversal parameters to obtain the grassland boundary corresponding to the grassland area. The scheme can improve the accuracy of grassland boundary detection and, in turn, the efficiency and accuracy of grassland boundary division.

Description

Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a grassland boundary detection method and device, a mowing robot and a storage medium.
Background
Mowing robots are widely used for maintenance of home yard lawns and trimming of large lawns. The mowing robot integrates the technologies of motion control, multi-sensor fusion, path planning and the like. In order to control the mowing robot to implement mowing operation, a mowing path of the mowing robot needs to be planned so that the mowing robot can completely cover all the operation areas.
At present, the mowing area must be divided manually before a mowing operation is performed. The existing mowing-boundary division scheme is therefore inefficient, and manual division can also lead to poor division accuracy of the mowing boundary.
Disclosure of Invention
The embodiment of the application provides a grassland boundary detection method, a grassland boundary detection device, a mowing robot and a storage medium, which can improve the accuracy of grassland boundary detection and further improve the efficiency and accuracy of grassland boundary division.
In a first aspect, an embodiment of the present application provides a method for detecting a lawn boundary, including:
carrying out semantic segmentation on an area image containing a grassland area to obtain a target image corresponding to the grassland area;
performing binarization processing on the target image, and extracting a grassland contour corresponding to the grassland area from the processed image;
selecting a target grassland contour in the grassland contour based on a contour area corresponding to the grassland contour;
traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
Optionally, in some embodiments, the traversing the target grassland contour according to the preset interest point information and the preset traversing parameter to obtain a grassland boundary corresponding to the grassland area includes:
determining a jump step length corresponding to a preset traversal parameter;
extracting a region range corresponding to the preset interest point in the preset direction from the preset interest point information;
traversing the target grassland outline based on the jumping step length and the area range to obtain a grassland boundary corresponding to the grassland area.
Optionally, in some embodiments, the traversing the target grassland contour based on the jumping step length and the area range to obtain a grassland boundary corresponding to the grassland area includes:
setting a plurality of reference points in the region corresponding to the interest point based on the step length, wherein the distance between adjacent reference points is larger than the step length;
traversing each contour edge of the target grassland contour, and determining a reference point corresponding to the contour point in a preset direction as a target point in the traversing process;
every time a target point is determined, performing the next traversal in the area range according to the traversal parameters;
and outputting a boundary point set according to the traversing result, and determining a grassland boundary corresponding to the grassland area based on the boundary point set.
Optionally, in some embodiments, further comprising:
if the coordinates of the currently traversed target point corresponding to the preset direction are detected to exceed the area range, performing the next traversal in the area range according to the traversal parameters.
Optionally, in some embodiments, the selecting a target grass contour in the grass contours based on the contour areas corresponding to the grass contours includes:
sequencing the extracted grassland contours according to the contour areas corresponding to the grassland contours from large to small;
and selecting a preset number of grassland contours as target grassland contours from the large grassland contours to the small grassland contours in the ordered grassland contours.
Optionally, in some embodiments, after the binarizing the target image, the method further includes:
performing image reduction processing on the processed image, and performing expansion operation on the reduced image to obtain an expanded image;
the extracting the grassland outline corresponding to the grassland area in the processed image comprises the following steps: and extracting the grassland outline corresponding to the grassland area from the expanded image.
Optionally, in some embodiments, the performing semantic segmentation on the region image including the grassland region to obtain a target image corresponding to the grassland region includes:
acquiring a preset semantic segmentation model;
obtaining the label corresponding to each pixel in the region image by inputting the region image containing the grassland region into the semantic segmentation model;
and dividing the target image corresponding to the grassland area in the area image based on the label corresponding to each pixel in the area image.
In a second aspect, an embodiment of the present application provides a grassland boundary detection apparatus, including:
the segmentation module is used for carrying out semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region;
the processing module is used for carrying out binarization processing on the target image;
the extraction module is used for extracting the grassland outline corresponding to the grassland area in the processed image;
the selecting module is used for selecting a target grassland contour in the grassland contour based on the contour area corresponding to the grassland contour;
and the traversing module is used for traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
According to the embodiment of the application, a region image containing a grassland region is first subjected to semantic segmentation to obtain a target image corresponding to the grassland region. The target image is then binarized, and grassland contours corresponding to the grassland region are extracted from the processed image. Next, a target grassland contour is selected from the grassland contours based on the contour area corresponding to each grassland contour. Finally, the target grassland contour is traversed according to preset interest point information and preset traversal parameters to obtain the grassland boundary corresponding to the grassland region. In the grassland boundary detection scheme provided by the application, the grassland contours corresponding to the grassland region are extracted from the binarized target image, and the grassland boundary is then obtained by traversing the target grassland contour according to the preset interest point information and traversal parameters. The problems of poor division precision and low division efficiency caused by manually dividing the grassland are thereby avoided, so both the accuracy of grassland boundary detection and the efficiency and accuracy of grassland boundary division can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is a schematic view of a grassland boundary detection method according to an embodiment of the present application;
FIG. 1b is a schematic flow chart of a method for detecting a grassland boundary according to an embodiment of the application;
FIG. 1c is a schematic structural diagram of a semantic segmentation model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a grassland boundary detecting device according to an embodiment of the present application;
fig. 3 is a schematic structural view of a mowing robot according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. In addition, the connection may be for both the fixing action and the circuit communication action.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are merely for convenience in describing embodiments of the application and to simplify the description, and do not denote or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus are not to be construed as limiting the application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
The embodiment of the application provides a grassland boundary detection method and device, a mowing robot and a storage medium.
The grassland boundary detection device can be integrated in a micro control unit (Microcontroller Unit, MCU) of the mowing robot, and can also be integrated in an intelligent terminal or a server. The MCU, also called a single-chip microcomputer, appropriately reduces the frequency and specification of the central processing unit (CPU) and combines it with peripheral interfaces such as memory, timers/counters, USB, analog-to-digital and digital-to-analog converters, UART, PLC and DMA to form a chip-level computer, so that different combined control can be performed for different application occasions. The mowing robot can travel automatically, avoid collisions, return to charge automatically within its working range, perform safety inspection and battery power detection, and has a certain climbing ability. It is particularly suitable for lawn trimming and maintenance in places such as family courtyards and public green spaces. Its features include automatic mowing, grass-clipping cleaning, automatic rain sheltering, automatic charging, automatic obstacle avoidance, a compact appearance, an electronic virtual fence and network control.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart watch, or the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data and artificial intelligence platforms.
For example, referring to fig. 1a, the present application provides a mowing system, which includes a mowing robot 10, a server 20 and a user equipment 30. The mowing robot 10 is provided with a vision sensor P, and the vision sensor P is located behind the ultrasonic sensor so that it does not shield the ultrasonic sensor and thereby impair the obstacle avoidance function of the mowing robot 10.
Specifically, before the mowing robot 10 performs the mowing operation, the mowing robot 10 may collect an image within its visual field. When the mowing robot 10 collects an area image including a lawn area, the area image is semantically segmented to obtain a target image corresponding to the lawn area. The mowing robot 10 then binarizes the target image and extracts lawn contours corresponding to the lawn area from the processed image. Next, the mowing robot 10 selects a target lawn contour from the lawn contours based on the contour area corresponding to each lawn contour. Finally, the mowing robot 10 traverses the target lawn contour according to preset interest point information and preset traversal parameters to obtain a lawn boundary corresponding to the lawn area.
According to the grassland boundary detection scheme provided by the application, the grassland contour corresponding to the grassland region is extracted from the binarized target image, and then the target grassland contour is traversed according to the preset interest point information and the preset traversal parameters to obtain the grassland boundary corresponding to the grassland region, so that the grassland is divided, the problems of poor dividing precision and low dividing efficiency caused by manual grassland division are avoided, and therefore, the accuracy of grassland boundary detection can be improved, and the efficiency and accuracy of grassland boundary division can be improved.
The following will describe in detail. It should be noted that the following description order of embodiments is not a limitation of the priority order of embodiments.
A method of grassland boundary detection, comprising: carrying out semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region; performing binarization processing on the target image, and extracting a grassland contour corresponding to the grassland area from the processed image; selecting a target grassland contour from the grassland contours based on the contour areas corresponding to the grassland contours; traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a grassland boundary detection method according to an embodiment of the application. The concrete flow of the grassland boundary detection method can be as follows:
101. Performing semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region.
The area image may be an RGB color image, that is, an image in the RGB color mode. The RGB color mode is an industry color standard in which colors are obtained by varying and superimposing the red (R), green (G) and blue (B) channels. Specifically, an area image including a grassland area may be acquired by a camera installed on the mowing robot. After the area image is acquired, it may be semantically segmented based on deep learning to obtain a target image corresponding to the grassland area. That is, optionally, in some embodiments, the step of performing semantic segmentation on the area image including the grassland area to obtain the target image corresponding to the grassland area may specifically include:
(11) Acquiring a preset semantic segmentation model;
(12) Inputting the area image containing the grassland area into the semantic segmentation model to obtain a label corresponding to each pixel in the area image;
(13) And dividing a target image corresponding to the grassland area in the area image based on the label corresponding to each pixel in the area image.
For example, the semantic segmentation model is constructed based on the DeepLabV3plus model. Referring to fig. 1c, the semantic segmentation model provided by the present application includes a 2-fold downsampling convolution module a, an interpolation upsampling convolution module b, an output convolution module c, an ASPP module d, and a cascade fusion module. Each 2-fold downsampling convolution module uses depthwise separable convolution to construct a residual module, so as to reduce model parameters and the amount of computation. Meanwhile, in order to obtain more accurate grassland boundaries, a shallow spatial feature layer is added to the semantic segmentation model compared with the original DeepLabV3plus, so that more detail information can be acquired, the recognition accuracy of the grassland boundary is improved, and the problem of blurred or jagged boundaries caused by the loss of spatial detail information is alleviated. The ASPP module uses parameters consistent with the original DeepLabV3plus to increase the receptive field of the model. The cascade fusion module uses a 3x3 convolution layer and a 1x1 convolution layer connected in series to fuse features from different layers, and finally a 1x1 convolution layer and an interpolation upsampling layer are used as the output layer module.
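For illustration, a minimal sketch of how such a model could be applied at inference time to obtain per-pixel labels and the grass target image is given below. It assumes a PyTorch segmentation model and OpenCV; the function name, preprocessing and grass class index are assumptions of this sketch rather than details disclosed by the application.

```python
import cv2
import numpy as np
import torch

GRASS_CLASS_ID = 1  # hypothetical index of the "grass" label in the model output

def segment_grass(region_image_bgr, model, device="cpu"):
    """Run a semantic segmentation model and keep only the grass pixels.

    `model` is assumed to return logits of shape (1, num_classes, H, W).
    """
    rgb = cv2.cvtColor(region_image_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).to(device)
    with torch.no_grad():
        logits = model(x)                               # (1, C, H, W)
    labels = logits.argmax(dim=1)[0].cpu().numpy()      # label for every pixel
    grass_mask = (labels == GRASS_CLASS_ID).astype(np.uint8) * 255
    # Target image: original pixels where the label is "grass", black elsewhere
    target_image = cv2.bitwise_and(region_image_bgr, region_image_bgr, mask=grass_mask)
    return target_image, grass_mask
```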
102. Performing binarization processing on the target image, and extracting the grassland contours corresponding to the grassland area from the processed image.
In order to facilitate the subsequent recognition of the grass boundary in the target image, the application binarizes the target image, that is, the gray value of each pixel in the image is set to either 0 or 255, so that the whole image presents an obvious black-and-white visual effect.
After binarizing the target image, grassland contours corresponding to the grassland area can be extracted from the processed image. It should be noted that the contour is one of the important features of an image and is sometimes easily confused with an edge. Edges are locations where the difference in the image is relatively significant, while contours are the edge lines that make up graphics and objects, and are a subset of the edges. For a binary image of a single shape, the object contour and the edges coincide. In OpenCV, the contour information of an image can be extracted by the findContours function; optionally, the grassland contours corresponding to the grassland area may be extracted from the processed image by the findContours function.
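As a minimal sketch of this step (assuming OpenCV 4, where findContours returns the contours and the hierarchy), the binarization and contour extraction could look as follows; the threshold value and retrieval mode are illustrative choices, not fixed by the application.

```python
import cv2

def extract_grass_contours(target_image_bgr):
    """Binarize the segmented target image and extract the grass contours."""
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    # Every non-background pixel (gray value > 0) becomes 255, the rest 0
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return binary, contours
```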
To reduce the complexity of subsequent computations, optionally, in some embodiments, the processed image may be scaled down and subjected to a dilation operation to smooth the contour of the edge, i.e., after the step of "binarizing the target image", it may specifically further include: and performing image reduction processing on the processed image, and performing expansion operation on the reduced image to obtain an expanded image.
Optionally, in some embodiments, the step of extracting the grassland outline corresponding to the grassland area in the processed image may specifically include: and extracting the corresponding grassland outline of the grassland area from the expanded image.
For example, the processed image may be scaled down by a factor of 0.5, that is, reduced to half of its original size, and a dilation operation may be performed on the reduced image so as to smooth the edge contours.
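A possible OpenCV sketch of this reduction and dilation step is shown below; the 0.5 scale follows the example above, while the kernel size and iteration count are assumptions of the sketch.

```python
import cv2
import numpy as np

def shrink_and_dilate(binary_image, scale=0.5, kernel_size=5):
    """Downscale the binary mask and dilate it to smooth the edge contours."""
    small = cv2.resize(binary_image, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_NEAREST)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(small, kernel, iterations=1)
```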
103. Selecting a target grassland contour from the grassland contours based on the contour area corresponding to each grassland contour.
For example, the arrangement may be performed according to a preset order according to the contour area corresponding to the grass contour, and then, a corresponding target grass contour is selected from the ranked grass contours, that is, optionally, in some embodiments, the step of selecting a target grass contour from the grass contours based on the contour area corresponding to the grass contour may specifically include:
(21) Sequencing the extracted grassland contours according to the contour areas corresponding to the grassland contours from large to small;
(22) And selecting a preset number of grassland contours as target grassland contours from the large grassland contours to the small grassland contours in the ordered grassland contours.
Alternatively, in some embodiments, the extracted grassland contours are sorted from large to small, and the first three grassland contours in the sorted grassland contours are selected as the target grassland contours. The number selected may be set according to the actual situation; for example, if only one grassland contour is extracted, that grassland contour is selected as the target grassland contour.
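A short sketch of this selection step, again assuming OpenCV contours; the default of three contours mirrors the example above and is not a fixed requirement.

```python
import cv2

def select_target_contours(contours, keep=3):
    """Sort contours by area, largest first, and keep at most `keep` of them."""
    ranked = sorted(contours, key=cv2.contourArea, reverse=True)
    return ranked[:keep]
```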
104. Traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
For example, specifically, the step length and the interest point information of the traversal may be preset, the interest point information may include a region range corresponding to the preset interest point in the preset direction, then, the target grassland contour is traversed based on the step length and the region range corresponding to the preset interest point in the preset direction to obtain a grassland boundary corresponding to the grassland area, that is, optionally, in some embodiments, the step of "traversing the target grassland contour according to the preset interest point information and the preset traversal parameter to obtain the grassland boundary corresponding to the grassland area" may specifically include:
(31) Determining a jump step length corresponding to a preset traversal parameter;
(32) Extracting a region range corresponding to the preset interest point in the preset direction from the preset interest point information;
(33) Traversing the target grassland contour based on the jumping step length and the area range to obtain a grassland boundary corresponding to the grassland area.
Specifically, the region-of-interest range Roi along the contour in a first direction (e.g. the x-axis direction) may be preset as [x_min, x_max], together with a step length M for the jump traversal of the contour. Then, combining the step length M, N equidistant or non-equidistant reference points are set within the Roi range, with the distance between adjacent reference points larger than the step length M, and Id serial numbers corresponding to the N values are defined, such as: x_value = {x_0, x_1, x_2, ..., x_N-3, x_N-2, x_N-1}; x_id = {N-2, ..., 3, 1, 0, 2, 4, ..., N-1}. The contour is then traversed to obtain all the points defining positions in a second direction perpendicular to the first direction (i.e. the y-axis direction), so as to determine the grass boundary. That is, optionally, in some embodiments, the step of "traversing the target grass outline based on the jump step size and the area range to obtain the grass boundary corresponding to the grass area" may specifically include:
(41) Setting a plurality of reference points in the region corresponding to the interest point based on the step length, wherein the distance between adjacent reference points is larger than the step length;
(42) Traversing each contour edge of the target grassland contour, and determining a reference point corresponding to the contour point in a preset direction as a target point in the traversing process;
(43) Every time a target point is determined, performing the next traversal in the area range according to the traversal parameters;
(44) And outputting a boundary point set according to the traversing result, and determining a grassland boundary corresponding to the grassland area based on the boundary point set.
For example, specifically, during the traversal, the reference point corresponding to the contour point in the preset direction is determined as a target point, and after each target point is determined, one jump traversal can be performed, that is, after a target point is determined, the next contour point is visited based on the step length M, so that the number of traversal steps is reduced and the efficiency of determining the mowing boundary is improved. Furthermore, it should be noted that when the coordinate of the reference point in the second direction perpendicular to the first direction goes beyond the area range, the next contour point may likewise be traversed based on the step length M. That is, optionally, in some embodiments, the grassland boundary detection method of the present application may specifically further include: if it is detected that the coordinate of the currently traversed target point in the preset direction exceeds the area range, the next traversal is performed within the area range according to the traversal parameters.
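Because the traversal is only described at a high level, the following sketch shows one possible reading of it: reference points are spread over the region of interest, the contour is visited with a jump of the step length M once a target point is found or a point leaves the allowed range, and the matched reference point is recorded as a boundary point. The variable names and the nearest-reference-point matching are assumptions of this sketch, not the patent's exact procedure.

```python
import numpy as np

def traverse_contour(contour, x_min, x_max, num_refs, step_m, y_min, y_max):
    """Jump-traverse an OpenCV contour and collect candidate boundary points."""
    ref_points = np.linspace(x_min, x_max, num_refs)  # reference points in the ROI
    points = contour.reshape(-1, 2)                   # (K, 2) array of (x, y)
    boundary_points = []
    i = 0
    while i < len(points):
        x, y = points[i]
        if not (y_min <= y <= y_max):
            i += step_m          # point left the allowed range: jump ahead
            continue
        if x_min <= x <= x_max:
            # Snap the contour point to the nearest reference point along x
            ref_x = ref_points[np.argmin(np.abs(ref_points - x))]
            boundary_points.append((int(ref_x), int(y)))
            i += step_m          # a target point was found: jump ahead
        else:
            i += 1               # outside the ROI in x: move to the next point
    return boundary_points
```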
All the target points are added to the boundary point set, and then only the starting point and the end point of each line segment are retained. For example, if target point a, target point b and target point c all lie on the same line segment and target point b lies between target point a and target point c, target point b can be removed. Optionally, the target points can be vectorized into a polygon, a rectangle, an ellipse or the like, which can be adjusted according to the actual situation. After the boundary point set is processed, the grassland boundary corresponding to the grassland area is output.
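The removal of intermediate points on a straight segment can be sketched with a simple cross-product collinearity test; this is a minimal illustration of the example above, and the vectorization into polygons or ellipses is not shown.

```python
def prune_collinear(points):
    """Keep only segment endpoints: a point b between collinear a and c is dropped."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        a, b, c = kept[-1], points[i], points[i + 1]
        cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        if cross != 0:        # a, b, c are not collinear, so b is a segment endpoint
            kept.append(b)
    kept.append(points[-1])
    return kept
```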
The grassland boundary detection flow is completed.
According to the embodiment of the application, after a region image containing a grassland region is semantically segmented to obtain a target image corresponding to the grassland region, the target image is binarized and grassland contours corresponding to the grassland region are extracted from the processed image. A target grassland contour is then selected from the grassland contours based on the contour area corresponding to each grassland contour, and finally the target grassland contour is traversed according to preset interest point information and preset traversal parameters to obtain the grassland boundary corresponding to the grassland region. In the grassland boundary detection scheme provided by the application, the grassland contours corresponding to the grassland region are extracted from the binarized target image, and the grassland boundary is then obtained by traversing the target grassland contour according to the preset interest point information and traversal parameters, so that the grassland is divided. The problems of poor division precision and low division efficiency caused by manually dividing the grassland are thereby avoided, so the accuracy of grassland boundary detection as well as the efficiency and accuracy of grassland boundary division can be improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a grassland boundary detection device according to an embodiment of the application, where the grassland boundary detection device may include a segmentation module 201, a processing module 202, an extraction module 203, a selection module 204, and a traversing module 205, and may specifically be as follows:
the segmentation module 201 is configured to perform semantic segmentation on an area image including a grassland area, so as to obtain a target image corresponding to the grassland area.
For example, specifically, the segmentation module 201 may collect an area image including a grassland area through a camera installed on the mowing robot, and after the area image is obtained, may perform semantic segmentation on the area image based on a deep learning manner, so as to obtain a target image corresponding to the grassland area.
Alternatively, in some embodiments, the segmentation module 201 may be specifically configured to: acquire a preset semantic segmentation model; input the area image containing the grassland area into the semantic segmentation model to obtain a label corresponding to each pixel in the area image; and divide the target image corresponding to the grassland area from the area image based on the label corresponding to each pixel in the area image.
The processing module 202 is configured to perform binarization processing on the target image.
Alternatively, the processing module 202 may extract the corresponding grass contour of the grass area in the processed image by a findContours function.
Optionally, in some embodiments, the processing module 202 may also be configured to: and performing image reduction processing on the processed image, and performing expansion operation on the reduced image to obtain an expanded image.
And the extracting module 203 is used for extracting the grassland outline corresponding to the grassland area in the processed image.
Optionally, in some embodiments, the extraction module 203 may specifically be configured to: and extracting the corresponding grassland outline of the grassland area from the expanded image.
A selection module 204, configured to select a target grassland contour from the grassland contours based on a contour area corresponding to the grassland contour.
For example, the selecting module 204 may be arranged according to a preset order according to the contour area corresponding to the lawn contour, and then select a corresponding target lawn contour from the ordered lawn contours, that is, optionally, in some embodiments, the selecting module 204 may be specifically configured to: sequencing the extracted grassland contours according to the contour areas corresponding to the grassland contours from large to small; and selecting a preset number of grassland contours as target grassland contours from the large grassland contours to the small grassland contours in the ordered grassland contours.
The traversing module 205 is configured to traverse the target grassland contour according to the preset interest point information and the preset traversing parameter, so as to obtain a grassland boundary corresponding to the grassland area.
For example, specifically, the step size and the interest point information of the traversal may be preset, the interest point information may include an area range corresponding to the preset interest point in the preset direction, and then the traversal module 205 traverses the target grassland contour based on the step size and the area range corresponding to the preset interest point in the preset direction, to obtain the grassland boundary corresponding to the grassland area.
Optionally, in some embodiments, the traversal module 205 may specifically include:
the determining unit is used for determining the jump step length corresponding to the preset traversal parameter;
the extraction unit is used for extracting the region range corresponding to the preset interest point in the preset direction from the preset interest point information;
the traversing unit is used for traversing the target grassland contour based on the jumping step length and the area range to obtain the grassland boundary corresponding to the grassland area.
Alternatively, in some embodiments, the traversal unit may specifically be configured to: setting a plurality of reference points in the region corresponding to the interest point based on the step length, wherein the distance between adjacent reference points is larger than the step length; traversing each contour edge of the target grassland contour, and determining a reference point corresponding to the contour point in a preset direction as a target point in the traversing process; every time a target point is determined, performing the next traversal in the area range according to the traversal parameters; and outputting a boundary point set according to the traversing result, and determining a grassland boundary corresponding to the grassland area based on the boundary point set.
According to the above, the segmentation module 201 semantically segments a region image containing a grassland region to obtain a target image corresponding to the grassland region, the processing module 202 binarizes the target image, the extraction module 203 extracts grassland contours corresponding to the grassland region from the processed image, the selection module 204 selects a target grassland contour from the grassland contours based on the contour area corresponding to each grassland contour, and finally the traversing module 205 traverses the target grassland contour according to preset interest point information and preset traversal parameters to obtain the grassland boundary corresponding to the grassland region. In the grassland boundary detection scheme provided by the application, the grassland contours corresponding to the grassland region are extracted from the binarized target image, and the grassland boundary is then obtained by traversing the target grassland contour according to the preset interest point information and traversal parameters. The problems of poor division precision and low division efficiency caused by manually dividing the grassland are thereby avoided, so the accuracy of grassland boundary detection and the efficiency and accuracy of grassland boundary division can be improved.
In addition, the embodiment of the application further provides a mowing robot, as shown in fig. 3, which shows a schematic structural diagram of the mowing robot according to the embodiment of the application, specifically:
the mowing robot may include a control module 301, a travel mechanism 302, a cutting module 303, a power source 304, and the like. It will be appreciated by those skilled in the art that the configuration of the lawn mowing robot shown in fig. 3 is not limiting of the lawn mowing robot, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
the control module 301 is a control center of the mowing robot, and the control module 301 may specifically include a central processing unit (Central Process Unit, CPU), a memory, an input/output port, a system bus, a timer/counter, a digital-to-analog converter, an analog-to-digital converter, and other components, where the CPU executes various functions of the mowing robot and processes data by running or executing software programs and/or modules stored in the memory, and calling data stored in the memory; preferably, the CPU may integrate an application processor that primarily handles operating systems and applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the CPU.
The memory may be used to store software programs and modules, and the CPU executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the mowing robot, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the CPU with access to the memory.
The advancing mechanism 302 is electrically connected with the control module 301, and is configured to adjust an advancing speed and an advancing direction of the mowing robot in response to a control signal transmitted by the control module 301, so as to realize a self-moving function of the mowing robot.
The cutting module 303 is electrically connected to the control module 301, and is configured to adjust the height and rotational speed of the cutter disc in response to the control signal transmitted by the control module, thereby implementing a mowing operation.
The power supply 304 may be logically connected to the control module 301 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 304 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the mowing robot may further include a communication module, a sensor module, a prompt module, etc., which will not be described herein.
The communication module is used for receiving and transmitting signals in the information receiving and transmitting process, and realizes signal receiving and transmitting with the user equipment, the base station or the server by establishing communication connection with the user equipment, the base station or the server.
The sensor module is used for collecting internal environment information or external environment information, and feeding the collected environment data back to the control module for decision making, so that the precise positioning and intelligent obstacle avoidance functions of the mowing robot are realized. In an embodiment of the application, the sensor module at least comprises a visual sensor for collecting graphic codes in the working environment. Optionally, the sensor module may further include: ultrasonic sensors, infrared sensors, collision sensors, rain sensors, lidar sensors, inertial measurement units, wheel speed meters, position sensors, and other sensors, without limitation.
The prompting module is used for prompting the user about the current working state of the mowing robot. In this scheme, the prompting module includes, but is not limited to, an indicator light, a buzzer and the like. For example, the mowing robot may prompt the user about the current power state, the working state of the motor, the working state of the sensors and the like through the indicator light. For another example, when the mowing robot is detected to be faulty or stolen, an alarm prompt can be given through the buzzer.
In this embodiment, the processor in the control module 301 loads executable files corresponding to the processes of one or more application programs into the memory according to the following instructions, and the processor executes the application programs stored in the memory, so as to implement various functions as follows:
carrying out semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region; performing binarization processing on the target image, and extracting a grassland contour corresponding to the grassland area from the processed image; selecting a target grassland contour from the grassland contours based on the contour areas corresponding to the grassland contours; traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
According to the embodiment of the application, after a region image containing a grassland region is semantically segmented to obtain a target image corresponding to the grassland region, the target image is binarized, grassland contours corresponding to the grassland region are extracted from the processed image, a target grassland contour is selected from the grassland contours based on the contour area corresponding to each grassland contour, and finally the target grassland contour is traversed according to preset interest point information and preset traversal parameters to obtain the grassland boundary corresponding to the grassland region. In this grassland boundary detection scheme, the grassland contours corresponding to the grassland region are extracted from the binarized target image, and the grassland boundary is then obtained by traversing the target grassland contour according to the preset interest point information and traversal parameters, so that the grassland is divided. The problems of poor division precision and low division efficiency caused by manually dividing the grassland are thereby avoided, so the accuracy of grassland boundary detection as well as the efficiency and accuracy of grassland boundary division can be improved.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of any of the lawn boundary detection methods provided by embodiments of the present application. For example, the instructions may perform the steps of:
carrying out semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region; performing binarization processing on the target image, and extracting a grassland contour corresponding to the grassland area from the processed image; selecting a target grassland contour from the grassland contours based on the contour areas corresponding to the grassland contours; traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
The instructions stored in the storage medium can execute the steps in any of the grassland boundary detection methods provided by the embodiments of the present application, so that the beneficial effects that any of the grassland boundary detection methods provided by the embodiments of the present application can be achieved, and detailed descriptions of the foregoing embodiments are omitted.
The foregoing describes in detail a method, an apparatus, a mowing robot and a storage medium for detecting a grass boundary, which are provided by the embodiments of the present application, and specific examples are applied to illustrate the principles and embodiments of the present application, and the above description of the embodiments is only for helping to understand the method and core ideas of the present application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present application, the present description should not be construed as limiting the present application.

Claims (10)

1. A method for detecting a grass boundary, comprising:
carrying out semantic segmentation on an area image containing a grassland area to obtain a target image corresponding to the grassland area;
performing binarization processing on the target image, and extracting a grassland contour corresponding to the grassland area from the processed image;
selecting a target grassland contour in the grassland contour based on a contour area corresponding to the grassland contour;
traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
2. The method according to claim 1, wherein traversing the target grassland contour according to the preset interest point information and the preset traversal parameters to obtain the grassland boundary corresponding to the grassland area comprises:
determining a jump step length corresponding to a preset traversal parameter;
extracting a region range corresponding to the preset interest point in the preset direction from the preset interest point information;
traversing the target grassland outline based on the jumping step length and the area range to obtain a grassland boundary corresponding to the grassland area.
3. The method of claim 2, wherein traversing the target grassland contour based on the jump step size and the area range to obtain the grassland boundary corresponding to the grassland area comprises:
setting a plurality of reference points in the region corresponding to the interest point based on the step length, wherein the distance between adjacent reference points is larger than the step length;
traversing each contour edge of the target grassland contour, and determining a reference point corresponding to the contour point in a preset direction as a target point in the traversing process;
every time a target point is determined, performing next traversal in the area range according to the traversal parameters;
and outputting a boundary point set according to the traversing result, and determining a grassland boundary corresponding to the grassland area based on the boundary point set.
4. A method according to claim 3, further comprising:
if the coordinates of the currently traversed target point corresponding to the preset direction are detected to exceed the area range, performing the next traversal in the area range according to the traversal parameters.
5. The method of any one of claims 1 to 4, wherein the selecting a target grass profile in the grass profiles based on the corresponding profile areas of the grass profiles comprises:
sequencing the extracted grassland contours according to the contour areas corresponding to the grassland contours from large to small;
and selecting a preset number of grassland contours as target grassland contours from the large grassland contours to the small grassland contours in the ordered grassland contours.
6. The method according to any one of claims 1 to 4, further comprising, after the binarizing the target image:
performing image reduction processing on the processed image, and performing expansion operation on the reduced image to obtain an expanded image;
the extracting the grassland outline corresponding to the grassland area in the processed image comprises the following steps: and extracting the grassland outline corresponding to the grassland area from the expanded image.
7. The method according to any one of claims 1 to 4, wherein the semantically segmenting the region image including the grassland region to obtain the target image corresponding to the grassland region includes:
acquiring a preset semantic segmentation model;
obtaining labels corresponding to each pixel in the region image by inputting the region image containing the grassland region into the semantic segmentation model;
and dividing the target image corresponding to the grassland area in the area image based on the label corresponding to each pixel in the area image.
8. A grassland boundary detection device, characterized by comprising:
the segmentation module is used for carrying out semantic segmentation on the region image containing the grassland region to obtain a target image corresponding to the grassland region;
the processing module is used for carrying out binarization processing on the target image;
the extraction module is used for extracting the grassland outline corresponding to the grassland area in the processed image;
the selecting module is used for selecting a target grassland contour in the grassland contour based on the contour area corresponding to the grassland contour;
and the traversing module is used for traversing the target grassland outline according to the preset interest point information and the preset traversing parameters to obtain the grassland boundary corresponding to the grassland area.
9. A robot lawnmower comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the lawn boundary detection method of any of claims 1 to 7 when the program is executed by the processor.
10. A storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the lawn boundary detection method according to any of claims 1 to 7.
CN202210869099.3A 2022-07-22 2022-07-22 Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium Pending CN116824124A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210869099.3A CN116824124A (en) 2022-07-22 2022-07-22 Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210869099.3A CN116824124A (en) 2022-07-22 2022-07-22 Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium

Publications (1)

Publication Number Publication Date
CN116824124A true CN116824124A (en) 2023-09-29

Family

ID=88124540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210869099.3A Pending CN116824124A (en) 2022-07-22 2022-07-22 Grassland boundary detection method, grassland boundary detection device, mowing robot and storage medium

Country Status (1)

Country Link
CN (1) CN116824124A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination