CN115147713A - Method, system, device and medium for identifying non-working area based on image


Info

Publication number
CN115147713A
Authority
CN
China
Prior art keywords
image
working area
parameter
rectangular outline
judging
Prior art date
Legal status
Pending
Application number
CN202110277128.2A
Other languages
Chinese (zh)
Inventor
朱绍明
任雪
Current Assignee
Suzhou Cleva Electric Appliance Co Ltd
Original Assignee
Suzhou Cleva Electric Appliance Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Cleva Electric Appliance Co Ltd filed Critical Suzhou Cleva Electric Appliance Co Ltd
Priority to CN202110277128.2A
Publication of CN115147713A

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D: HARVESTING; MOWING
    • A01D34/00: Mowers; Mowing apparatus of harvesters
    • A01D75/00: Accessories for harvesters or mowers
    • A01D75/18: Safety devices for parts of the machines

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a system, a device, and a medium for identifying a non-working area based on an image, wherein the method comprises: acquiring an original image; separating an H-channel image and a V-channel image from the original image; processing the V-channel image to form an edge image and a binarized image; based on the binarized image, respectively obtaining the smallest rectangular contour enclosing each pre-judged working area and the smallest rectangular contour enclosing each pre-judged non-working area; based on the binarized image, obtaining a first parameter characterizing the size of each rectangular contour, a second parameter characterizing its position, a third parameter characterizing image roughness, and a fourth parameter characterizing a pixel count; and determining, according to the first, second, third, and fourth parameters, whether the area in front of the robot is a real non-working area. The invention can accurately identify the non-working area through image recognition and improves the serviceability of the robot.

Description

Method, system, device and medium for identifying non-working area based on image
Technical Field
The invention relates to the field of intelligent control, and in particular to a method, a system, a device, and a medium for identifying a non-working area based on an image.
Background
Low repetition rate and high coverage rate are the goals of traversing mobile robots such as vacuuming, mowing, and swimming-pool-cleaning robots. Taking an intelligent mowing robot as an example, the robot treats the lawn enclosed by a boundary as its working area and performs mowing there; the area outside the lawn is defined as the non-working area.
In the prior art, the boundary of the lawn working area is usually marked by burying a boundary wire. This approach requires considerable manpower and materials, which increases the cost of using the mobile robot; moreover, it imposes constraints on the wiring, for example that corner angles cannot be less than 90 degrees, which limits the possible shapes of the lawn working area.
Disclosure of Invention
To solve the above technical problems, an object of the present invention is to provide a method, system, device, and medium for identifying a non-working area based on an image.
In order to achieve one of the above objects, an embodiment of the present invention provides a method for identifying a non-working area based on an image, the method including:
acquiring an original image;
separating an H channel image and a V channel image from an original image;
performing edge extraction on the V-channel image to form an edge image, and performing binarization on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
based on the binarized image, respectively obtaining the smallest rectangular contour enclosing each pre-judged working area and the smallest rectangular contour enclosing each pre-judged non-working area;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binary image and the H channel image;
and determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and preset parameter values.
With this method, whether a non-working area lies in front of the robot can be accurately judged through image recognition, which saves cost and improves the usability of the robot.
As a further improvement of an embodiment of the present invention, binarizing the V-channel image to form the binarized image includes:
sequentially performing filtering and normalization on the V-channel image to obtain a V-channel preprocessed image;
performing edge extraction on the V-channel preprocessed image to form the edge image;
segmenting the V-channel preprocessed image by the Otsu threshold method to form a segmented image;
and performing opening and closing operations on the segmented image to form the binarized image.
Through the above preferred embodiment, the formation of the binarized image is described in detail.
As a further refinement of an embodiment of the present invention, the method includes:
if the area enclosed by the current rectangular contour is a pre-judged non-working area, configuring the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged non-working area within the contour; configuring the second parameter A2 as a coordinate parameter value of the rectangular contour; configuring the third parameter A3 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour that correspond to edges of the pre-judged non-working area in the edge image, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour; and configuring the fourth parameter A4 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour whose corresponding pixels in the H-channel image fall within a preset chromaticity range, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour;
if the area enclosed by the current rectangular contour is a pre-judged working area, configuring the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged working area within the contour; configuring the second parameter A2 as a coordinate parameter value of the rectangular contour; configuring the third parameter A3 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour that correspond to edges of the pre-judged working area in the edge image, to the total number of pixels covered by the pre-judged working area within the rectangular contour; and configuring the fourth parameter A4 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour whose corresponding pixels in the H-channel image fall within the preset chromaticity range, to the total number of pixels covered by the pre-judged working area within the rectangular contour;
determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset roughness threshold, and A4 is smaller than a preset number threshold, then it is confirmed that the area in front of the robot is a real non-working area.
Through the above preferred embodiment, the first, second, third, and fourth parameters are specifically defined, and the robot's current position is accurately identified through specific rules.
As a further improvement of an embodiment of the present invention, if it is confirmed that the area in front of the robot is a real non-working area, the method further includes:
driving the robot to execute obstacle-avoidance logic.
According to this embodiment, the robot confirms through image recognition that an obstacle (non-working area) lies ahead and performs an avoidance maneuver, improving working efficiency.
In order to achieve one of the above objects, an embodiment of the present invention provides a system for identifying a non-working area based on an image, the system including: an acquisition module for acquiring an original image;
a conversion module for separating an H-channel image and a V-channel image from the original image, performing edge extraction on the V-channel image to form an edge image, and performing binarization on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
an analysis module for, based on the binarized image, respectively obtaining the smallest rectangular contour enclosing each pre-judged working area and the smallest rectangular contour enclosing each pre-judged non-working area;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binary image and the H channel image;
and determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and preset parameter values.
With this system, whether a non-working area lies in front of the robot can be accurately judged through image recognition, which saves cost and improves the usability of the robot.
As a further improvement of an embodiment of the present invention, the conversion module is configured to: sequentially perform filtering and normalization on the V-channel image to obtain a V-channel preprocessed image;
perform edge extraction on the V-channel preprocessed image to form the edge image;
segment the V-channel preprocessed image by the Otsu threshold method to form a segmented image;
and sequentially perform opening and closing operations on the segmented image to form the binarized image.
Through the above preferred embodiment, the formation of the binarized image is described in detail.
As a further improvement of an embodiment of the present invention, the analysis module is configured to: if the area enclosed by the current rectangular contour is a pre-judged non-working area, configure the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged non-working area within the contour; configure the second parameter A2 as a coordinate parameter value of the rectangular contour; configure the third parameter A3 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour that correspond to edges of the pre-judged non-working area in the edge image, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour; and configure the fourth parameter A4 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour whose corresponding pixels in the H-channel image fall within a preset chromaticity range, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour;
if the area enclosed by the current rectangular contour is a pre-judged working area, configure the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged working area within the contour; configure the second parameter A2 as a coordinate parameter value of the rectangular contour; configure the third parameter A3 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour that correspond to edges of the pre-judged working area in the edge image, to the total number of pixels covered by the pre-judged working area within the rectangular contour; and configure the fourth parameter A4 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour whose corresponding pixels in the H-channel image fall within the preset chromaticity range, to the total number of pixels covered by the pre-judged working area within the rectangular contour;
determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset roughness threshold, and A4 is smaller than a preset number threshold, then it is determined that the area in front of the robot is a real non-working area.
The system specifically defines the first, second, third, and fourth parameters, and accurately identifies the robot's current position through specific rules.
As a further improvement of an embodiment of the present invention, if it is confirmed that the area in front of the robot is a real non-working area, the analysis module is further configured to drive the robot to execute obstacle-avoidance logic.
With this system, the robot confirms through image recognition that an obstacle (non-working area) lies ahead and performs an avoidance maneuver, improving working efficiency.
In order to achieve one of the above objects, an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor, when executing the computer program, implements the steps of the above method for identifying a non-working area based on an image.
In order to achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above method for identifying a non-working area based on an image.
Compared with the prior art, the method, system, device, and medium for identifying a non-working area based on an image of the present invention can accurately identify the non-working area through image recognition, save cost, and improve the serviceability of the robot.
Drawings
FIG. 1 is a schematic structural diagram of a robot lawnmower system according to the present invention;
FIG. 2 is a schematic flow chart of a method for identifying non-working areas based on images provided by the present invention;
fig. 3 is a schematic block diagram of a system for identifying a non-working area based on an image according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
The robot system of the invention may be a mowing robot system, a sweeping robot system, a snowplow system, a leaf-vacuum system, a golf-ball-collecting robot system, and the like; each system can walk automatically within a working area and perform the corresponding work. In the specific example of the invention, the robot system is a mowing robot system, and correspondingly the working area is a lawn.
As shown in fig. 1, the robot lawnmower system of the present invention includes a mowing robot (RM).
The mowing robot includes a body 10 and a walking unit, an image acquisition unit, and a control unit arranged on the body 10. The walking unit comprises driving wheels 111, a driven wheel 113, and motors for driving the driving wheels 111. Each motor can be a brushless motor with a reduction gearbox and a Hall sensor; after a motor starts, it drives its driving wheel 111 through the reduction gearbox, and by controlling the speed and direction of the two wheels, traveling actions such as forward and backward straight-line running, pivot turning, and arc running can be realized. The driven wheel 113 may be a universal (caster) wheel, typically 1 or 2 in number, mainly providing supporting balance.
The image acquisition unit captures the scene within a certain range of its viewing angle. In the specific embodiment of the invention, the camera 12 is mounted on the upper part of the body 10 at a certain angle to the horizontal, so that it can photograph the scene within a certain range around the mowing robot; the camera 12 typically captures the scene within a certain range in front of the robot.
The control unit is a main controller 13 that performs the image processing, for example an MCU or a DSP.
Further, the mowing robot also comprises a working mechanism for performing work and a power supply 14. In this embodiment, the working mechanism is a cutting deck. Various sensors sense the walking state of the robot, for example: tilt, lift-off-ground, and collision sensors, geomagnetic sensors, gyroscopes, and so on; these are not described in detail here.
As shown in fig. 2, a first embodiment of the present invention provides a method for identifying a non-working area based on an image, the method comprising the steps of:
s1, acquiring an original image;
s2, separating an H-channel image and a V-channel image from the original image;
performing edge extraction on the V-channel image to form an edge image, and performing binarization on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
S3, based on the binarized image, respectively obtaining the smallest rectangular contour enclosing each pre-judged working area and the smallest rectangular contour enclosing each pre-judged non-working area;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binarized image and the H channel image;
and determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and preset parameter values.
In the specific embodiment of the invention, for step S1, a camera mounted on the mowing robot photographs the scene in front of the robot in real time to form the original image; the scene is the ground in the robot's direction of travel. After the main controller receives the original image, it analyzes the image so that the robot's working position at the moment of capture can be judged from it, as described in detail below. In this specific example the format of the original image is not specifically limited; it may be, for example, an RGB-format or HSV-format color image.
For step S2, if the original image is in RGB format, it is converted to an HSV image; if it is already in HSV format, no conversion is needed, and the H-channel and V-channel images are separated directly from the HSV image. The implementations of these operations are prior art and varied, and are not described here.
Binarization is then performed on the V-channel image to form a binarized image. It is known that the V-channel image is a gray-scale image; in the converted binarized image, pixels take only two gray values, for example 0 and 255, so that binarization makes the whole image show a clear black-and-white effect.
In a preferred embodiment of the present invention, binarizing the V-channel image to form the binarized image includes: sequentially performing filtering and normalization on the V-channel image to obtain a V-channel preprocessed image, thereby removing noise from the V-channel image; performing edge extraction on the V-channel preprocessed image to form the edge image used to calculate roughness (for example, with the Canny edge-detection algorithm); segmenting the V-channel preprocessed image to form a segmented image, using the Otsu threshold method (though the segmentation method is not limited to it; a fixed-threshold method can also be used); and performing opening and closing operations on the segmented image to form the binarized image. The opening and closing operations are conventional image-processing methods and prior art, and are not described further here.
Preferably, for step S3, the method includes: if the area enclosed by the current rectangular contour is a pre-judged non-working area, configuring the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged non-working area within the contour; configuring the second parameter A2 as a coordinate parameter value of the rectangular contour; configuring the third parameter A3 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour that correspond to edges of the pre-judged non-working area in the edge image, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour; and configuring the fourth parameter A4 as the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour whose corresponding pixels in the H-channel image fall within a preset chromaticity range, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour;
if the area enclosed by the current rectangular contour is a pre-judged working area, configuring the first parameter A1 as at least one of: the area of the rectangular contour, the length of its diagonal, its width, its height, and the number of pixels of the enclosed pre-judged working area within the contour; configuring the second parameter A2 as a coordinate parameter value of the rectangular contour; configuring the third parameter A3 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour that correspond to edges of the pre-judged working area in the edge image, to the total number of pixels covered by the pre-judged working area within the rectangular contour; and configuring the fourth parameter A4 as the ratio of the number of pixels of the pre-judged working area within the rectangular contour whose corresponding pixels in the H-channel image fall within the preset chromaticity range, to the total number of pixels covered by the pre-judged working area within the rectangular contour;
determining whether the area in front of the robot is a real non-working area according to the magnitude relations between the first, second, third, and fourth parameters and the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset roughness threshold, and A4 is smaller than a preset number threshold, then it is confirmed that the area in front of the robot is a real non-working area.
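The four simultaneous conditions can be collected into a single predicate. A minimal sketch, with placeholder names for the preset threshold values described below:

```python
def is_real_non_working_area(a1, a2, a3, a4,
                             size_thr, coord_thr, rough_thr, count_thr):
    """Confirm a real non-working area only when all four conditions
    on A1..A4 hold simultaneously, as in the decision step above."""
    return (a1 > size_thr and   # contour is large enough
            a2 > coord_thr and  # contour position passes the coordinate test
            a3 < rough_thr and  # region is smooth (few edge pixels)
            a4 < count_thr)     # few lawn-colored pixels in the region
```

The numeric arguments in any call are illustrative; the patent only fixes the comparison directions and, further below, example threshold values such as 0.26 for A3 and 0.8 for A4.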
It should be noted here that, based on the binarized image, each pre-judged working area and each pre-judged non-working area contained in it independently yields its own rectangular contour, and the rectangular contours corresponding to pre-judged working areas are not specially distinguished from those corresponding to pre-judged non-working areas.
Here, when the first parameter A1 is the area of the rectangular contour, the preset size threshold is correspondingly an area threshold; likewise, when A1 is the diagonal length, the contour width, the contour height, or the number of pixels of the pre-judged working/non-working area within the contour, the preset size threshold is correspondingly the diagonal-length threshold, the contour-width threshold, the contour-height threshold, or the threshold on the number of pixels of the pre-judged working/non-working area within the contour.
In the specific example of the present invention, the area threshold is configured as 5% × the area of the original image; the diagonal-length threshold as 22% × the diagonal length of the original image; the threshold on the number of pixels of the pre-judged working area within the contour as 5% × the number of pixels of the original image; the threshold on the number of pixels of the pre-judged non-working area within the contour as 5% × the number of pixels of the original image; the contour-width threshold as 19% × the width of the original image; and the contour-height threshold as 26% × the height of the original image.
The second parameter A2 is configured as a coordinate parameter value of the rectangular contour; this may be the Y-axis coordinate of the lower-right corner of the contour, the coordinates of the contour's center, the average Y value of the pixel coordinates within the contour, and so on; further description is omitted here.
The third parameter A3 is configured as the roughness of the rectangular contour, that is, the ratio of the number of pixels of the pre-judged non-working area within the rectangular contour that correspond to edges of the pre-judged non-working area in the edge image, to the total number of pixels covered by the pre-judged non-working area within the rectangular contour. Preferably, the preset roughness threshold compared against A3 is configured as 0.26.
The fourth parameter A4 is configured as the effective-pixel ratio within the rectangular contour, that is, the ratio of the number of pixels of the pre-judged working area within the rectangular contour whose corresponding pixels in the H-channel image fall within the preset chromaticity range, to the total number of pixels of the pre-judged working area within the rectangular contour; the preset chromaticity range of the H-channel image is usually the chromaticity range of the common color of a lawn, such as the green chromaticity range. Preferably, the preset number threshold compared against A4 is configured as 0.8.
Further, if it is determined that a real non-working area lies in front of the robot, the method further includes: driving the robot to execute obstacle-avoidance logic, for example reversing and steering away; details are omitted here.
Referring to fig. 3, there is provided a system for identifying a non-working area based on an image, the system comprising: an obtaining module 100, a converting module 200 and an analyzing module 300.
The obtaining module 100 is configured to obtain an original image;
the conversion module 200 is configured to separate an H-channel image and a V-channel image from an original image;
performing edge extraction on the V-channel image to form an edge image and performing binarization processing on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judging working area with a first pixel value and/or at least one pre-judging non-working area with a second pixel value;
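The segmentation step named in the claims is the Otsu threshold method. As a self-contained illustration of that technique (not the patent's implementation), a pure-NumPy version of Otsu thresholding followed by binarization could read:

```python
import numpy as np


def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold of an 8-bit image: the gray level that
    maximizes the between-class variance of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)  # sum of all gray values
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight (pixel count) of the lower class
    sum0 = 0.0  # gray-value sum of the lower class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


def binarize(gray: np.ndarray) -> np.ndarray:
    """Split the image into two pixel values around the Otsu threshold."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8) * 255
```

In practice OpenCV's `cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` performs the same computation; the open/close morphological step described in the claims would then be applied to the result.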
the analysis module 300 is configured to acquire, based on the binarized image, the smallest rectangular outline enclosing each pre-judging working area in the binarized image, and the smallest rectangular outline enclosing each pre-judging non-working area in the binarized image;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binarized image and the H channel image;
and confirming whether the front of the robot is a real non-working area according to the magnitude relation of the first parameter, the second parameter, the third parameter and the fourth parameter to preset parameter values.
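The final comparison against preset parameter values (detailed in claim 3) reduces to four simultaneous inequalities. A minimal sketch, with illustrative argument names, is:

```python
def is_real_non_working_area(a1: float, a2: float, a3: float, a4: float,
                             size_thr: float, coord_thr: float,
                             rough_thr: float, count_thr: float) -> bool:
    """Decision rule of claim 3: the area ahead is confirmed as a real
    non-working area only if all four conditions hold simultaneously.
    Argument and threshold names are illustrative."""
    return (a1 > size_thr and      # region is large enough
            a2 > coord_thr and     # region is close enough (low in frame)
            a3 < rough_thr and     # region is smooth (few edge pixels)
            a4 < count_thr)        # few grass-colored pixels
```

If any single condition fails, the region is not confirmed and the robot continues normal operation.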
Specifically, the obtaining module 100 is configured to implement step S1; the conversion module 200 is configured to implement step S2; and the analysis module 300 is configured to implement step S3 and to drive the robot to execute the obstacle avoidance logic. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Preferably, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor executes the computer program to implement the steps in the method for identifying a non-working area based on an image as described above.
Preferably, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method for identifying a non-working area based on an image as described above.
In summary, the method, system, device and medium for identifying a non-working area based on an image can accurately identify the non-working area through image recognition, saving cost and improving the service performance of the robot.
In the several embodiments provided in the present application, it should be understood that the disclosed modules, systems and methods may be implemented in other manners. The system embodiments described above are merely illustrative; for example, the division of the modules is only a logical functional division, and there may be other division manners in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not implemented.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional modules in the embodiments of the present application may be integrated into one analysis module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of hardware plus a software functional module.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced; such modifications or replacements do not depart from the spirit and scope of the embodiments of the present application.

Claims (10)

1. A method for identifying a non-working area based on an image, the method comprising:
acquiring an original image;
separating an H channel image and a V channel image from an original image;
performing edge extraction on the V-channel image to form an edge image and performing binarization processing on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judging working area with a first pixel value and/or at least one pre-judging non-working area with a second pixel value;
acquiring, based on the binarized image, the smallest rectangular outline enclosing each pre-judging working area in the binarized image, and the smallest rectangular outline enclosing each pre-judging non-working area in the binarized image, respectively;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binarized image and the H channel image;
and determining whether the front of the robot is a real non-working area or not according to the magnitude relation of the first parameter, the second parameter, the third parameter, the fourth parameter and a preset parameter value.
2. The method for identifying the non-working area based on the image according to claim 1, wherein the binarizing the V-channel image to form a binarized image comprises:
sequentially carrying out filtering processing and normalization processing on the V-channel image to obtain a V-channel preprocessed image;
performing edge extraction on the V-channel preprocessed image to form an edge image;
segmenting the V-channel preprocessed image by the Otsu threshold method to form a segmented image;
and carrying out opening and closing operation processing on the segmentation image to form a binary image.
3. The method for image-based identification of non-working areas according to claim 2, characterized in that the method comprises:
if the area enclosed by the current rectangular outline is a pre-judging non-working area, configuring the first parameter A1 as: at least one of the area of the rectangular outline, the diagonal length, the width of the rectangular outline, the height of the rectangular outline, and the number of pixels corresponding to the enclosed pre-judging non-working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judging non-working area of the rectangular outline that correspond to edge pixels of the pre-judging non-working area in the edge image to the number of pixels covered by the pre-judging non-working area in the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels in the pre-judging non-working area of the rectangular outline whose H-channel values fall within the preset chromaticity range to the number of pixels in the pre-judging non-working area of the rectangular outline;
if the area enclosed by the current rectangular outline is a pre-judging working area, configuring the first parameter A1 as: at least one of the area of the rectangular outline, the diagonal length, the width of the rectangular outline, the height of the rectangular outline, and the number of pixels corresponding to the enclosed pre-judging working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judging working area of the rectangular outline that correspond to edge pixels of the pre-judging working area in the edge image to the number of pixels covered by the pre-judging working area in the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels in the pre-judging working area of the rectangular outline whose H-channel values fall within the preset chromaticity range to the number of pixels covered by the pre-judging working area of the rectangular outline;
confirming whether the front of the robot is a real non-working area according to the magnitude relation of the first parameter, the second parameter, the third parameter and the fourth parameter to preset parameter values comprises:
if the following conditions are met simultaneously:
A1 is greater than a preset size threshold, A2 is greater than a preset coordinate threshold, A3 is less than a preset roughness threshold, and A4 is less than a preset number threshold, then it is confirmed that the front of the robot is a real non-working area.
4. The image-based method for identifying a non-working area according to any one of claims 1 to 3, wherein if the front of the robot is confirmed to be a real non-working area, the method further comprises:
and driving the robot to execute obstacle avoidance logic.
5. A system for identifying non-working areas based on images, the system comprising:
the acquisition module is used for acquiring an original image;
the conversion module is used for separating an H channel image and a V channel image from an original image;
performing edge extraction on the V-channel image to form an edge image and performing binarization processing on the V-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judging working area with a first pixel value and/or at least one pre-judging non-working area with a second pixel value;
the analysis module is used for acquiring, based on the binarized image, the smallest rectangular outline enclosing each pre-judging working area in the binarized image, and the smallest rectangular outline enclosing each pre-judging non-working area in the binarized image, respectively;
acquiring a first parameter representing the size of each rectangular outline and a second parameter representing the position of the rectangular outline based on the binary image;
acquiring a third parameter representing the roughness of the image based on the binary image and the edge image;
acquiring a fourth parameter representing the number of pixel points based on the binary image and the H channel image;
and determining whether the front of the robot is a real non-working area or not according to the magnitude relation of the first parameter, the second parameter, the third parameter, the fourth parameter and a preset parameter value.
6. The image-based non-working area identification system of claim 5, wherein the conversion module is configured to: sequentially carrying out filtering processing and normalization processing on the V-channel image to obtain a V-channel preprocessed image;
performing edge extraction on the V-channel preprocessed image to form an edge image;
segmenting the V-channel preprocessed image by the Otsu threshold method to form a segmented image;
and carrying out opening and closing operation processing on the segmentation image to form a binary image.
7. The image-based non-working area identification system of claim 6, wherein the parsing module is configured to:
if the area enclosed by the current rectangular outline is a pre-judging non-working area, configuring the first parameter A1 as: at least one of the area of the rectangular outline, the diagonal length, the width of the rectangular outline, the height of the rectangular outline, and the number of pixels corresponding to the enclosed pre-judging non-working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judging non-working area of the rectangular outline that correspond to edge pixels of the pre-judging non-working area in the edge image to the number of pixels covered by the pre-judging non-working area in the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels in the pre-judging non-working area of the rectangular outline whose H-channel values fall within the preset chromaticity range to the number of pixels in the pre-judging non-working area of the rectangular outline;
if the area enclosed by the current rectangular outline is a pre-judging working area, configuring the first parameter A1 as: at least one of the area of the rectangular outline, the diagonal length, the width of the rectangular outline, the height of the rectangular outline, and the number of pixels corresponding to the enclosed pre-judging working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judging working area of the rectangular outline that correspond to edge pixels of the pre-judging working area in the edge image to the number of pixels covered by the pre-judging working area in the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels in the pre-judging working area of the rectangular outline whose H-channel values fall within the preset chromaticity range to the number of pixels covered by the pre-judging working area of the rectangular outline;
confirming whether the front of the robot is a real non-working area according to the magnitude relation of the first parameter, the second parameter, the third parameter and the fourth parameter to preset parameter values comprises:
if the following conditions are met simultaneously:
A1 is greater than a preset size threshold, A2 is greater than a preset coordinate threshold, A3 is less than a preset roughness threshold, and A4 is less than a preset number threshold, then it is confirmed that the front of the robot is a real non-working area.
8. The image-based system for identifying a non-working area according to any one of claims 5 to 7, wherein if the front of the robot is confirmed to be a real non-working area, the analysis module is further configured to: and driving the robot to execute obstacle avoidance logic.
9. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method for image-based identification of non-working areas according to any one of claims 1-4 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for identifying a non-working area on the basis of an image as claimed in any one of claims 1 to 4.
CN202110277128.2A 2021-03-15 2021-03-15 Method, system, device and medium for identifying non-working area based on image Pending CN115147713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277128.2A CN115147713A (en) 2021-03-15 2021-03-15 Method, system, device and medium for identifying non-working area based on image


Publications (1)

Publication Number Publication Date
CN115147713A true CN115147713A (en) 2022-10-04

Family

ID=83403877


Similar Documents

Publication Publication Date Title
CN113761970B (en) Method, system, robot and storage medium for identifying working position based on image
CN113822094B (en) Method, system, robot and storage medium for identifying working position based on image
CN113822095B (en) Method, system, robot and storage medium for identifying working position based on image
CN113805571B (en) Robot walking control method, system, robot and readable storage medium
CN111353431B (en) Automatic working system, automatic walking equipment, control method thereof and computer readable storage medium
EP4242964A1 (en) Obstacle recognition method applied to automatic traveling device and automatic traveling device
CN111104933A (en) Map processing method, mobile robot, and computer-readable storage medium
WO2022099755A1 (en) Image-based working area identification method and system, and robot
CN115147713A (en) Method, system, device and medium for identifying non-working area based on image
CN115147714A (en) Method and system for identifying non-working area based on image
EP4242910A1 (en) Obstacle recognition method, apparatus and device, medium, and weeding robot
WO2021238000A1 (en) Boundary-following working method and system of robot, robot, and readable storage medium
EP4242909A1 (en) Obstacle recognition method and apparatus, and device, medium and weeding robot
EP4123406A1 (en) Automatic working system, automatic walking device and method for controlling same, and computer-readable storage medium
CN115147712A (en) Method and system for identifying non-working area based on image
EP4242908A1 (en) Obstacle recognition method and apparatus, device, medium and weeding robot
CN113223034A (en) Road edge detection and tracking method
CN113848872B (en) Automatic walking device, control method thereof and readable storage medium
JPH07222504A (en) Method for controlling travel of self-traveling working vehicle
CN117274949A (en) Obstacle based on camera, boundary detection method and automatic walking equipment
CN116795117A (en) Automatic recharging method and device for robot, storage medium and robot system
CN116758434A (en) Boundary recognition method, boundary recognition device, computer readable medium and self-mobile device
CN113496146A (en) Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
JPH07222505A (en) Method for controlling travel of self-traveling working vehicle
CN114677318A (en) Obstacle identification method, device, equipment, medium and weeding robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination