CN114863381A - Recognition method and device for mowing area, electronic equipment and storage medium
- Publication number
- CN114863381A (application CN202110074177.6A)
- Authority
- CN
- China
- Prior art keywords
- area
- image
- mask
- grass
- mowed
- Prior art date
- 2021-01-20
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The application discloses a method and apparatus for recognizing a mowing area, an electronic device, and a storage medium. The method includes: collecting an image of a target area; identifying grass areas and non-grass areas in the target area; identifying mowed areas and un-mowed areas within the grass area; and planning a mowing path for the target area based on the identified non-grass area, mowed area, and un-mowed area.
Description
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method and an apparatus for identifying a mowing area, an electronic device, and a storage medium.
Background
During mowing, a mower can only operate along a path set by the user within a preset area. Because the user cannot plan a mowing path accurately and flexibly, the mower can only be preset to follow a fixed track within a given area range. As a result, the mower may repeatedly mow areas where the grass has already been cut, which reduces mowing efficiency.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a method and an apparatus for identifying a mowing area, an electronic device, and a storage medium.
The embodiment of the application provides a method for identifying a mowing area, which comprises the following steps:
collecting an image of a target area;
identifying grass areas and non-grass areas in the target area;
identifying mowed areas and un-mowed areas in the grass area;
planning a mowing path for the target area based on the identified non-grass area, the mowed area, and the un-mowed area.
In an alternative embodiment of the present application, the identifying grass areas and non-grass areas in the target area includes:
processing the image of the target area by using a first network model to obtain a first mask map, wherein the first mask map includes a first area and a second area, the first area representing a non-grass area and the second area representing a grass area.
In an alternative embodiment of the present application,
the processing the image of the target area by using the first network model to obtain a first mask map includes:
processing the image of the target area by using the first network model to obtain a first feature map, and performing deconvolution processing on the first feature map to obtain the first mask map;
the identifying of mowed and un-mowed areas in the grass area includes:
stitching the first mask map and the first feature map to obtain a stitched feature map;
processing the stitched feature map by using a second network model to obtain a second mask map, wherein the second mask map includes the first area, a third area, and a fourth area, the third area representing an un-mowed area in the grass area and the fourth area representing a mowed area in the grass area.
In an optional embodiment of the present application, before the stitching of the first mask map and the first feature map, the method further includes:
performing downsampling processing on the first mask map, wherein the resolution of the downsampled first mask map is the same as the resolution of the first feature map.
In an optional embodiment of the present application, the stitching of the first mask map and the first feature map to obtain a stitched feature map includes:
stitching the first mask map and the first feature map in the channel direction to obtain the stitched feature map.
In an optional embodiment of the present application, the processing of the stitched feature map by using the second network model to obtain a second mask map includes:
performing semantic segmentation processing on the stitched feature map by using the second network model, wherein the semantic segmentation processing is used for identifying grass of different heights in the second area;
determining the third area and the fourth area in the second area based on the semantic segmentation result, and generating the second mask map based on the first area, the third area, and the fourth area.
In an optional embodiment of the present application, the loss function of the first network model is a first cross entropy loss function, and the loss function of the second network model is a second cross entropy loss function.
The embodiment of the present application further provides an identification apparatus for a mowing area, the apparatus including:
an acquisition unit, configured to acquire an image of a target area;
a first identification unit, configured to identify grass areas and non-grass areas in the target area;
a second identification unit, configured to identify mowed areas and un-mowed areas in the grass area;
a planning unit, configured to plan a mowing path for the target area based on the identified non-grass area, the mowed area, and the un-mowed area.
In an optional embodiment of the present application, the first identification unit is specifically configured to: process the image of the target area by using a first network model to obtain a first mask map, wherein the first mask map includes a first area and a second area, the first area representing a non-grass area and the second area representing a grass area.
In an alternative embodiment of the present application,
the first identification unit is specifically configured to: process the image of the target area by using a first network model to obtain a first feature map, and perform deconvolution processing on the first feature map to obtain a first mask map;
the second identification unit is specifically configured to: stitch the first mask map and the first feature map to obtain a stitched feature map; and process the stitched feature map by using a second network model to obtain a second mask map, wherein the second mask map includes the first area, a third area, and a fourth area, the third area representing an un-mowed area in the grass area and the fourth area representing a mowed area in the grass area.
In an optional embodiment of the present application, before the stitching of the first mask map and the first feature map, the apparatus further includes:
a processing unit, configured to perform downsampling processing on the first mask map, wherein the resolution of the downsampled first mask map is the same as the resolution of the first feature map.
In an optional embodiment of the present application, the second identification unit is further specifically configured to: stitch the first mask map and the first feature map in the channel direction to obtain the stitched feature map.
In an optional embodiment of the present application, the second identification unit is further specifically configured to: perform semantic segmentation processing on the stitched feature map by using the second network model, wherein the semantic segmentation processing is used for identifying grass of different heights in the second area; determine the third area and the fourth area in the second area based on the semantic segmentation result; and generate the second mask map based on the first area, the third area, and the fourth area.
In an optional embodiment of the present application, the loss function of the first network model is a first cross entropy loss function, and the loss function of the second network model is a second cross entropy loss function.
The embodiment of the application also provides an electronic device, which includes a memory and a processor, wherein the memory stores computer-executable instructions, and the processor implements the above mowing area identification method when running the computer-executable instructions stored in the memory.
The embodiment of the application also provides a computer storage medium, wherein the storage medium stores executable instructions, and the executable instructions, when executed by a processor, implement the above mowing area identification method.
According to the technical solution of the embodiment of the application, an image of the target area is collected; grass areas and non-grass areas in the target area are identified; mowed areas and un-mowed areas in the grass area are identified; and a mowing path is planned for the target area based on the identified non-grass area, mowed area, and un-mowed area. In this way, while the mower is working, the mowed and un-mowed areas within the target area can be distinguished, so that a mowing path can be planned only for the un-mowed area and the mowing operation executed along the planned path, improving mowing efficiency and shortening mowing time.
Drawings
Fig. 1 is a schematic flow chart of a mowing area identification method according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of a process for identifying a mowing area provided by an embodiment of the present application;
fig. 3 is a schematic structural composition diagram of an identification device of a mowing area provided by an embodiment of the application;
fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present application. The embodiments and the features of the embodiments may be combined with each other in any manner, provided there is no conflict. The steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one shown or described here.
Fig. 1 is a schematic flowchart of a mowing area identification method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step 101: an image of the target area is acquired.
The technical solution of the embodiment of the application is applied to mowing equipment, such as a mower. An image acquisition device is arranged on the mowing equipment and can collect images of a certain area range while the equipment travels, for example an image of the area directly ahead in the direction of travel.
Of course, the mowing equipment may also be provided with a plurality of image acquisition devices arranged at different positions of the equipment, so as to collect images of a plurality of area ranges in a plurality of directions around the equipment; the processor in the mowing equipment then obtains the images of these area ranges by combining the outputs of the image acquisition devices.
Step 102: identifying meadow areas and non-meadow areas in the target area.
In an optional embodiment of the application, the grass areas and non-grass areas in the target area are identified using the first network model, which is optionally a deep learning model. The image of the target area collected by the image acquisition device is input into the first network model, which identifies the grass areas and non-grass areas in the image. When processing the image of the target area, the first network model treats everything in the image other than lawn, such as sky, people, stones, and stakes, as non-grass areas. The embodiment of the present application does not limit the specific type of the first network model, as long as it can distinguish grass from objects other than grass in the image of the target area.
In an alternative embodiment of the present application, the process of identifying grass areas and non-grass areas in the target area can be realized by the following step:
processing the image of the target area by using the first network model to obtain a first mask map, wherein the first mask map includes a first area and a second area, the first area representing a non-grass area and the second area representing a grass area.
In the embodiment of the present application, in order to distinguish grass areas from non-grass areas in the target area, the two types of area are displayed in different ways (for example, in different colors). The image of the target area can be processed by the first network model to obtain a first mask map that displays the grass area and the non-grass area distinguishably. After processing the image of the target area, the first network model classifies all lawn in the target area into one type, classifies everything other than lawn, such as stones and sky, into another type, and finally outputs a first mask map containing these two types of area.
In a preferred embodiment, the first network model may be a semantic segmentation model, such as the DeepLab series, Fully Convolutional Networks (FCN), or the U-Net series. A semantic segmentation model can identify the image of the target area at the pixel level and label the class of the object to which each pixel belongs. The image of the target area is input into the semantic segmentation model, which classifies each pixel as either grass or non-grass, finally identifying the grass areas and non-grass areas in the image of the target area.
Fig. 2 is a schematic diagram of the process of identifying a mowing area according to an embodiment of the present application. In fig. 2, the first network model is an FCN; processing the image of the target area with the FCN model identifies the grass area and the non-grass area in the image. In the FCN model, "CNN block" denotes a convolution block, "…" denotes several consecutive CNN blocks, and "deconv" denotes deconvolution.
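The application does not fix a concrete backbone for the first network model. Purely as an illustration of the FCN-style pipeline in fig. 2 (stacked CNN blocks, a first feature map, and a deconvolution head), a minimal PyTorch sketch might look as follows; every layer size, channel count, and the 256×256 input resolution are assumptions for the sketch, not values taken from the application:

```python
import torch
import torch.nn as nn

class FirstNetworkModel(nn.Module):
    """FCN-style sketch: CNN blocks -> first feature map -> deconv -> 2-class mask."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Backbone of stacked CNN blocks (conv + BN + ReLU), each halving the resolution.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        # Deconvolution head restoring the input resolution (8x, undoing three stride-2 stages).
        self.deconv = nn.ConvTranspose2d(128, num_classes, kernel_size=8, stride=8)

    def forward(self, image: torch.Tensor):
        feature_map = self.backbone(image)   # the "first feature map"
        logits = self.deconv(feature_map)    # logits of the "first mask map" (grass vs. non-grass)
        return feature_map, logits

model = FirstNetworkModel()
image = torch.randn(1, 3, 256, 256)                  # collected image of the target area
first_feature_map, first_mask_logits = model(image)  # (1, 128, 32, 32), (1, 2, 256, 256)
first_mask = first_mask_logits.argmax(dim=1)         # 0 = non-grass (first area), 1 = grass (second area)
```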
Step 103: identifying mowed and non-mowed areas of the grass area.
In one embodiment, the first mask map is obtained by deconvolving the first feature map. Specifically, the step of processing the image of the target area with the first network model to obtain the first mask map includes: processing the image of the target area with the first network model to obtain a first feature map, and performing deconvolution processing on the first feature map to obtain the first mask map.
Accordingly, in one embodiment, the step of identifying mowed and un-mowed areas in the grass area using the second network model may be implemented by the following steps:
stitching the first mask map and the first feature map to obtain a stitched feature map;
processing the stitched feature map by using the second network model to obtain a second mask map, wherein the second mask map includes the first area, a third area, and a fourth area, the third area representing an un-mowed area in the grass area and the fourth area representing a mowed area in the grass area.
Specifically, as shown in fig. 2, the first feature map is generated by the backbone network of the first network model, and the first mask map is obtained by applying a deconvolution operation to the first feature map. The features in the first feature map are implicit features extracted from the image of the target area by the first network model; they are high-dimensional implicit features that represent the difference between grass and non-grass in the image of the target area.
The first mask map and the first feature map are stitched to obtain a stitched feature map, which can be input into the second network model. The second network model continues to process the stitched feature map, dividing the second area, i.e. the grass area, into a mowed area and an un-mowed area according to the different heights of the grass, and finally, in combination with the first area, generates a second mask map containing three areas: the non-grass area, the mowed area, and the un-mowed area.
In the embodiment of the application, in order to distinguish the mowed area from the un-mowed area within the grass area of the target area, the two types of area are displayed in different ways (such as different colors). Then, combined with the identified non-grass area, the second network model processes the stitched feature map to obtain a second mask map that distinguishably displays the mowed area, the un-mowed area, and the non-grass area.
In an embodiment, the step of stitching the first mask map and the first feature map to obtain a stitched feature map specifically includes: stitching the first mask map and the first feature map in the channel direction to obtain the stitched feature map.
Specifically, stitching the first mask map and the first feature map means stacking them along the channel dimension. As shown in fig. 2, the two can be stitched by a stitching module (the "stitching" in fig. 2). By stitching the first mask map and the first feature map, more features of the image of the target area can be learned, and the learned features can reflect the differences between non-grass areas and grass areas of different heights in the image.
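Continuing the same sketch (and anticipating the downsampling step described below), stitching the first mask map to the first feature map along the channel direction could be written as follows; the tensor shapes are carried over from the assumed sketch above, not from the application:

```python
import torch
import torch.nn.functional as F

# Carried over from the sketch above:
#   first_feature_map: (1, 128, 32, 32) float tensor
#   first_mask:        (1, 256, 256) integer class map
mask = first_mask.unsqueeze(1).float()   # (1, 1, 256, 256)

# Downsample the mask to the resolution of the first feature map.
mask_small = F.interpolate(mask, size=first_feature_map.shape[2:], mode="nearest")

# Stitch along the channel direction: (1, 128, 32, 32) + (1, 1, 32, 32) -> (1, 129, 32, 32).
stitched_feature_map = torch.cat([first_feature_map, mask_small], dim=1)
```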
In an embodiment, processing the stitched feature map with the second network model to obtain a second mask map specifically includes:
performing semantic segmentation processing on the stitched feature map by using the second network model, wherein the semantic segmentation processing is used for identifying grass of different heights in the second area;
determining the third area and the fourth area in the second area based on the semantic segmentation result, and generating the second mask map based on the first area, the third area, and the fourth area.
Specifically, the second network model comprises one or more CNN blocks. The stitched feature map is input into the second network model, which performs fine-grained semantic segmentation on it: it extracts features from the stitched feature map again, then performs convolutional restoration, learning of spatial details, supervised learning, and the like, thereby identifying the different grass heights in the grass area, dividing the grass area into a mowed area and an un-mowed area, and finally outputting a second mask map comprising three areas: the non-grass area, the mowed area, and the un-mowed area.
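The application specifies only that the second network model contains one or more CNN blocks, so the following continuation of the sketch is an assumption about one possible shape of that model, producing a three-class second mask map from the stitched feature map:

```python
import torch.nn as nn

class SecondNetworkModel(nn.Module):
    """Sketch: a few CNN blocks refining the stitched feature map, then a deconv head
    emitting 3 classes (0 = non-grass, 1 = un-mowed grass, 2 = mowed grass)."""
    def __init__(self, in_channels: int = 129, num_classes: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        self.head = nn.ConvTranspose2d(64, num_classes, kernel_size=8, stride=8)

    def forward(self, stitched):
        return self.head(self.blocks(stitched))   # second mask map logits at input resolution

second_model = SecondNetworkModel()
second_mask_logits = second_model(stitched_feature_map)   # (1, 3, 256, 256)
second_mask = second_mask_logits.argmax(dim=1)            # the "second mask map"
```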
In an optional embodiment of the present application, before the first mask map and the first feature map are stitched, the first mask map is also downsampled, so that the resolution of the downsampled first mask map is the same as that of the first feature map.
As shown in fig. 2, since the resolutions of the first mask map and the first feature map may differ, the first mask map may be downsampled (by the downsampling module in fig. 2) before the stitching, reducing its resolution to match that of the first feature map.
In one embodiment, the loss function of the first network model is a first cross-entropy loss function, and the loss function of the second network model is a second cross-entropy loss function.
The cross-entropy loss function is a loss function commonly used in classification problems. It can serve as the loss function of the first network model and of the second network model respectively, so that each object in the images input to the two network models can be classified accurately.
The two network models have independent loss functions and are trained separately, so their network parameters can be optimized independently.
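A hedged sketch of this training arrangement, continuing the code above, could give each model its own cross-entropy criterion; the ground-truth tensors labels_2c and labels_3c below are hypothetical placeholders for real annotations:

```python
import torch
import torch.nn as nn

# Hypothetical per-pixel ground truth (random stand-ins for annotated masks).
labels_2c = torch.randint(0, 2, (1, 256, 256))   # grass vs. non-grass
labels_3c = torch.randint(0, 3, (1, 256, 256))   # non-grass / un-mowed / mowed

# First cross-entropy loss function and second cross-entropy loss function,
# optimized independently when the two models are trained separately.
first_loss = nn.CrossEntropyLoss()(first_mask_logits, labels_2c)
second_loss = nn.CrossEntropyLoss()(second_mask_logits, labels_3c)
```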
Step 104: planning a mowing path for the target area based on the identified non-grass area, the mowed area, and the non-grass area.
In an alternative embodiment of the present application, based on the generated second mask map containing the non-grass area, the mowed area, and the un-mowed area, the mowing path of the mower can be planned, so that the mower travels along the planned path and performs the mowing operation while traveling.
Specifically, the second mask map is subjected to a perspective transformation using a perspective transformation matrix, yielding a map of the target area.
The perspective transformation matrix transforms the pixel coordinates of each object in the second mask map into real-world coordinates. Each object in the map of the target area obtained by this perspective transformation therefore has corresponding real-world coordinates.
The perspective transformation matrix may be determined from the internal and external parameters of the image acquisition device. The internal parameters are properties of the device itself, such as its focal length and pixel size. The external parameters describe the device in the world coordinate system, such as its position and orientation. The internal parameters can be calibrated and are generally fixed after calibration. For an image acquisition device mounted on mowing equipment, however, the position relative to the mower may change, so the external parameters may vary; a sensor can be used to detect changes in the position of the image acquisition device, and the external parameters are then determined from the sensor data.
Once the internal parameters and external parameters of the image acquisition device are determined, the perspective transformation matrix can be obtained from these two sets of parameters.
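As a hedged OpenCV illustration of this step: the application derives the perspective transformation matrix from the calibrated internal and external parameters, while the sketch below stands in for that derivation with four assumed pixel-to-ground correspondences:

```python
import cv2
import numpy as np

# Stand-in for the rendered second mask map (0 = non-grass, 1 = un-mowed, 2 = mowed).
second_mask_img = np.zeros((480, 640), np.uint8)

# Four pixel points and their ground-plane positions (placeholder values; in practice
# these follow from the camera's internal and external parameters).
pixel_pts = np.float32([[100, 200], [540, 200], [620, 470], [20, 470]])
world_pts = np.float32([[0, 0], [4, 0], [4, 3], [0, 3]]) * 100   # metres scaled to map cells

H = cv2.getPerspectiveTransform(pixel_pts, world_pts)            # perspective transformation matrix
area_map = cv2.warpPerspective(second_mask_img, H, (400, 300),
                               flags=cv2.INTER_NEAREST)          # nearest keeps class labels intact
```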
After the map of the target area is obtained, path planning can be carried out for the target area. The planned path avoids obstacles and already-mowed areas, so that during operation the mower mows only the areas where the grass has not been cut and does not repeat work in areas where the grass has already been cut.
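The application does not prescribe a particular planner. One minimal way to honor this rule, visiting only cells of the map whose class is un-mowed grass while skipping non-grass and already-mowed cells, is a boustrophedon sweep; the class encoding and area_map are carried over from the sketches above:

```python
import numpy as np

UNMOWED = 1   # class label of un-mowed grass in the sketched map

def plan_mowing_path(area_map: np.ndarray):
    """Row-by-row sweep, alternating direction, keeping only un-mowed cells."""
    path = []
    for row in range(area_map.shape[0]):
        cols = range(area_map.shape[1])
        if row % 2:                      # reverse every other row (boustrophedon)
            cols = reversed(cols)
        path.extend((row, col) for col in cols if area_map[row, col] == UNMOWED)
    return path

# area_map comes from the perspective-transform sketch above.
waypoints = plan_mowing_path(area_map)   # map cells the mower should still visit
```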
According to the technical solution of the embodiment of the application, the mowed and un-mowed areas within the mower's working area can be distinguished while the mower is working, so that a mowing path can be planned only for the un-mowed area and the mowing operation executed along the planned path, improving mowing efficiency and shortening mowing time.
Fig. 3 is a schematic structural diagram of a mowing area identification apparatus according to an embodiment of the present application. As shown in fig. 3, the identification apparatus includes:
an acquisition unit 301, configured to acquire an image of a target area;
a first identification unit 302, configured to identify grass areas and non-grass areas in the target area;
a second identification unit 303, configured to identify mowed areas and un-mowed areas in the grass area;
a planning unit 304, configured to plan a mowing path for the target area based on the identified non-grass area, the mowed area, and the un-mowed area.
In an optional embodiment of the present application, the first identification unit 302 is specifically configured to: process the image of the target area by using a first network model to obtain a first mask map, wherein the first mask map includes a first area and a second area, the first area representing a non-grass area and the second area representing a grass area.
In an alternative embodiment of the present application,
the first identification unit 302 is specifically configured to: process the image of the target area by using a first network model to obtain a first feature map, and perform deconvolution processing on the first feature map to obtain a first mask map;
the second identification unit 303 is specifically configured to: stitch the first mask map and the first feature map to obtain a stitched feature map; and process the stitched feature map by using a second network model to obtain a second mask map, wherein the second mask map includes the first area, a third area, and a fourth area, the third area representing an un-mowed area in the grass area and the fourth area representing a mowed area in the grass area.
In an optional embodiment of the present application, before the stitching of the first mask map and the first feature map, the apparatus further includes:
a processing unit 305, configured to perform downsampling processing on the first mask map, wherein the resolution of the downsampled first mask map is the same as the resolution of the first feature map.
In an optional embodiment of the present application, the second identification unit 303 is further specifically configured to: stitch the first mask map and the first feature map in the channel direction to obtain the stitched feature map.
In an optional embodiment of the present application, the second identification unit 303 is further specifically configured to: perform semantic segmentation processing on the stitched feature map by using the second network model, wherein the semantic segmentation processing is used for identifying grass of different heights in the second area; determine the third area and the fourth area in the second area based on the semantic segmentation result; and generate the second mask map based on the first area, the third area, and the fourth area.
In an optional embodiment of the present application, the loss function of the first network model is a first cross entropy loss function, and the loss function of the second network model is a second cross entropy loss function.
It will be appreciated by those skilled in the art that the functions implemented by the units in the mowing area identification apparatus shown in fig. 3 can be understood with reference to the foregoing description of the mowing area identification method. The functions of the units shown in fig. 3 can be realized by a program running on a processor, or by specific logic circuits.
It can be understood that, in practical applications, the acquisition unit 301, the first identification unit 302, the second identification unit 303, the planning unit 304, and the processing unit 305 in the identification apparatus may be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA) of a terminal device.
Fig. 4 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic device includes: a communication component 403 for data transmission, at least one processor 401, and a memory 402 for storing computer programs capable of running on the processor 401. The components are coupled together by a bus system 404, which is used to enable communication among them. Besides a data bus, the bus system 404 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 404 in fig. 4.
Wherein the processor 401, when executing the computer program, performs at least the steps of the method shown in fig. 1.
It will be appreciated that the memory 402 can be volatile memory, non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk or tape storage. The volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 402 described in the embodiments herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present application may be applied to, or implemented by, the processor 401. The processor 401 may be an integrated circuit chip with signal processing capabilities; in implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, and can implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium in the memory 402; the processor 401 reads the information in the memory 402 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the electronic device may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, for performing the aforementioned mowing area identification method.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs at least the steps of the method shown in the foregoing embodiments. The computer-readable storage medium may specifically be a memory, for example the memory 402 shown in fig. 4.
The technical solutions described in the embodiments of the present application may be combined in any manner, provided there is no conflict.
In the several embodiments provided in the present application, it should be understood that the disclosed method and intelligent device may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application.
Claims (10)
1. A method of identifying a mowing area, the method comprising:
collecting an image of a target area;
identifying grass areas and non-grass areas in the target area;
identifying mowed areas and un-mowed areas in the grass area;
planning a mowing path for the target area based on the identified non-grass area, the mowed area, and the un-mowed area.
2. The method of claim 1, wherein the identifying grass areas and non-grass areas in the target area comprises:
processing the image of the target area by using a first network model to obtain a first mask map, wherein the first mask map comprises a first area and a second area, the first area representing a non-grass area and the second area representing a grass area.
3. The method of claim 2,
the processing the image of the target area by using the first network model to obtain a first mask map comprises:
processing the image of the target area by using the first network model to obtain a first feature map, and performing deconvolution processing on the first feature map to obtain the first mask map;
the identifying of mowed and un-mowed areas in the grass area comprises:
stitching the first mask map and the first feature map to obtain a stitched feature map;
processing the stitched feature map by using a second network model to obtain a second mask map, wherein the second mask map comprises the first area, a third area, and a fourth area, the third area representing an un-mowed area in the grass area and the fourth area representing a mowed area in the grass area.
4. The method of claim 3, wherein before the stitching of the first mask map and the first feature map, the method further comprises:
performing downsampling processing on the first mask map, wherein the resolution of the downsampled first mask map is the same as the resolution of the first feature map.
5. The method according to claim 3, wherein the stitching of the first mask map and the first feature map to obtain a stitched feature map comprises:
stitching the first mask map and the first feature map in the channel direction to obtain the stitched feature map.
6. The method of claim 3, wherein processing the stitched feature map using the second network model to obtain a second mask map comprises:
performing semantic segmentation processing on the stitched feature map by using the second network model, wherein the semantic segmentation processing is used for identifying grass of different heights in the second area;
determining the third area and the fourth area in the second area based on the semantic segmentation result, and generating the second mask map based on the first area, the third area, and the fourth area.
7. The method according to any of claims 3 to 6, wherein the loss function of the first network model is a first cross-entropy loss function and the loss function of the second network model is a second cross-entropy loss function.
8. An apparatus for identifying a mowing area, the apparatus comprising:
an acquisition unit, configured to acquire an image of a target area;
a first identification unit, configured to identify grass areas and non-grass areas in the target area;
a second identification unit, configured to identify mowed areas and un-mowed areas in the grass area;
a planning unit, configured to plan a mowing path for the target area based on the identified non-grass area, the mowed area, and the un-mowed area.
9. An electronic device, comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor, when running the computer-executable instructions, implements the method of any one of claims 1 to 7.
10. A computer storage medium having stored thereon executable instructions which, when executed by a processor, implement the method of any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110074177.6A | 2021-01-20 | 2021-01-20 | Recognition method and device for mowing area, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114863381A (en) | 2022-08-05 |
Family
ID=82623473
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110074177.6A (Pending) | Recognition method and device for mowing area, electronic equipment and storage medium | 2021-01-20 | 2021-01-20 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114863381A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117516552A (en) * | 2024-01-08 | 2024-02-06 | 锐驰激光(深圳)有限公司 | Cross path planning method, device and equipment of intelligent mower and storage medium |
CN117516513A (en) * | 2024-01-08 | 2024-02-06 | 锐驰激光(深圳)有限公司 | Intelligent mower path planning method, device, equipment and storage medium |
CN118168559A (en) * | 2024-05-13 | 2024-06-11 | 锐驰激光(深圳)有限公司 | Mowing path planning method, mowing path planning device, mowing path planning equipment, mowing path planning storage medium and mowing path planning product |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |