CN115147714A - Method and system for identifying non-working area based on image - Google Patents


Info

Publication number: CN115147714A
Application number: CN202110277132.9A
Authority: CN (China)
Prior art keywords: image, parameter, working area, channel, rectangular outline
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Original language: Chinese (zh)
Inventors: 朱绍明, 任雪
Original and current assignee: Suzhou Cleva Electric Appliance Co Ltd
Application filed by Suzhou Cleva Electric Appliance Co Ltd; priority to CN202110277132.9A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for identifying a non-working area based on an image. The method comprises: acquiring an original image and separating an H-channel image and a V-channel image from it; converting the H-channel image into a binarized image; converting the V-channel image into an edge image; acquiring, based on the binarized image, a minimum rectangular outline enclosing each pre-judged non-working area in the binarized image; acquiring a first parameter representing the size of the rectangular outline, a second parameter representing the position of the rectangular outline, a third parameter representing the effective pixel value ratio, and a fourth parameter representing the roughness of the image; and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values. The invention can accurately identify non-working areas through image recognition, reducing cost and improving the usability of the robot.

Description

Method and system for identifying non-working area based on image
Technical Field
The invention relates to the field of intelligent control, in particular to a method and a system for identifying a non-working area based on an image.
Background
Low repetition rate and high coverage rate are the objectives of traversing mobile robots such as vacuuming, mowing, and pool-cleaning robots. Taking an intelligent mowing robot as an example, the robot performs mowing within a lawn enclosed by a boundary, which serves as its working area; the area outside the lawn is defined as the non-working area.
In the prior art, the boundary of the lawn working area is usually calibrated by burying a boundary wire. This approach requires considerable manpower and material resources, which increases the cost of using the mobile robot. Moreover, it imposes certain requirements on the wiring, for example that corner angles cannot be less than 90 degrees, which limits the shape of the lawn working area to some extent.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a method and a system for identifying a non-working area based on an image.
In order to achieve one of the above objects, an embodiment of the present invention provides a method for identifying a non-working area based on an image, the method including: acquiring an original image;
separating an H channel image and a V channel image from an original image;
carrying out binarization processing on the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
performing edge extraction on the V-channel image to form an edge image;
acquiring, based on the binarized image, a minimum rectangular outline enclosing each pre-judged non-working area in the binarized image;
acquiring a first parameter representing the size of a rectangular outline, a second parameter representing the position of the rectangular outline and a third parameter representing the effective pixel value ratio of the image based on the binary image;
acquiring a fourth parameter representing the roughness of the image based on the binary image and the edge image;
and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values.
By this method, whether a non-working area lies in front of the robot can be accurately judged through image recognition, which saves cost and improves the usability of the robot.
As a further improvement of an embodiment of the present invention, the binarizing processing the H-channel image to form a binarized image includes:
sequentially carrying out filtering processing and normalization processing on the H channel image to obtain a first H channel preprocessed image;
performing edge extraction on the first H channel preprocessed image to form a second H channel preprocessed image;
and sequentially performing dilation and closing operations on the second H-channel preprocessed image to form the binarized image.
With the above preferred embodiment, the formation process of the binarized image is specifically described.
As a further improvement of an embodiment of the present invention, performing edge extraction on the V-channel image to form an edge image includes:
sequentially carrying out filtering processing and normalization processing on the V-channel image to obtain a V-channel preprocessed image;
and performing edge extraction on the V-channel preprocessed image to form an edge image.
With the above preferred embodiments, the formation process of the edge image is specifically described.
As a further refinement of an embodiment of the present invention, the method includes:
configuring the first parameter A1 as: at least one of the area, the diagonal length, the width, and the height of the rectangular outline, and the number of pixels of the pre-judged non-working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judged non-working area of the rectangular outline whose corresponding H-channel values fall within a preset chromaticity range to the total number of pixels covered by the pre-judged non-working area within the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels covered by the pre-judged non-working area of the rectangular outline that correspond to edge pixels of the pre-judged non-working area in the edge image to the total number of pixels covered by the pre-judged non-working area within the rectangular outline;
confirming whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset number threshold, and A4 is smaller than a preset roughness threshold, then confirming that the area in front of the robot is a real non-working area.
Through the above preferred embodiment, the first parameter, the second parameter, the third parameter and the fourth parameter are specifically defined, and the current position of the robot is accurately identified through a specific rule.
As a further improvement of an embodiment of the present invention, if it is confirmed that the front of the robot is a real non-working area, the method further includes:
and driving the robot to execute obstacle avoidance logic.
According to the embodiment, the robot is confirmed to encounter the obstacle through image recognition, obstacle avoidance operation is carried out, and the working efficiency is improved.
In order to achieve one of the above objects, an embodiment of the present invention provides a system for identifying a non-working area based on an image, the system including: the acquisition module is used for acquiring an original image;
the conversion module is used for separating an H channel image and a V channel image from an original image;
carrying out binarization processing on the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
performing edge extraction on the V-channel image to form an edge image;
the analysis module is used for acquiring, based on the binarized image, a minimum rectangular outline enclosing each pre-judged non-working area in the binarized image;
acquiring a first parameter representing the size of a rectangular outline, a second parameter representing the position of the rectangular outline and a third parameter representing the ratio of effective pixel values based on the binary image;
acquiring a fourth parameter representing the roughness of the image based on the binary image and the edge image;
and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values.
Through the system, whether a non-working area exists in front of the robot or not can be accurately judged through image recognition, cost is saved, and the use performance of the robot is improved.
As a further improvement of an embodiment of the present invention, the conversion module is configured to: sequentially carrying out filtering processing and normalization processing on the H channel image to obtain a first H channel preprocessed image;
performing edge extraction on the first H channel preprocessed image to form a second H channel preprocessed image;
and sequentially performing dilation and closing operations on the second H-channel preprocessed image to form the binarized image.
With the above preferred embodiment, the process of forming the binarized image is specifically described.
As a further improvement of an embodiment of the present invention, the conversion module is configured to: sequentially carrying out filtering processing and normalization processing on the V-channel image to obtain a V-channel preprocessed image;
and performing edge extraction on the V-channel preprocessed image to form an edge image.
With the above preferred embodiments, the process of forming the edge image is specifically described.
As a further improvement of an embodiment of the present invention, the analysis module is configured to: configure the first parameter A1 as: at least one of the area, the diagonal length, the width, and the height of the rectangular outline, and the number of pixels of the pre-judged non-working area within the rectangular outline; configure the second parameter A2 as: a coordinate parameter value of the rectangular outline; configure the third parameter A3 as: the ratio of the number of pixels in the pre-judged non-working area of the rectangular outline whose corresponding H-channel values fall within a preset chromaticity range to the total number of pixels covered by the pre-judged non-working area within the rectangular outline; and configure the fourth parameter A4 as: the ratio of the number of pixels covered by the pre-judged non-working area of the rectangular outline that correspond to edge pixels of the pre-judged non-working area in the edge image to the total number of pixels covered by the pre-judged non-working area within the rectangular outline;
confirming whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset number threshold, and A4 is smaller than a preset roughness threshold, then confirming that the area in front of the robot is a real non-working area.
The system specifically defines the first parameter, the second parameter, the third parameter and the fourth parameter, and accurately identifies the current position of the robot through specific rules.
As a further improvement of an embodiment of the present invention, if it is confirmed that the front of the robot is a real non-working area, the analysis module is further configured to: and driving the robot to execute obstacle avoidance logic.
According to the system, the robot is confirmed to encounter the obstacle through image recognition, obstacle avoidance operation is carried out, and the working efficiency is improved.
Compared with the prior art, the method and system for identifying a non-working area based on an image can accurately identify the non-working area through image recognition, saving cost and improving the usability of the robot.
Drawings
FIG. 1 is a schematic structural diagram of a robot lawnmower system according to the present invention;
FIG. 2 is a schematic flow chart of a method for identifying non-working areas based on images provided by the present invention;
FIG. 3 is a block diagram of a system for identifying non-working areas based on images according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
The robot system of the invention may be a mowing robot system, a sweeping robot system, a snow sweeper system, a leaf blower system, a golf-course ball-collecting robot system, or the like. Each such system can automatically travel within a working area and perform the corresponding work. In the specific examples of the invention, the robot system is described as a mowing robot system, and correspondingly the working area is a lawn.
As shown in fig. 1, the robot lawnmower system of the present invention includes: a mowing Robot (RM).
The robot lawnmower includes: a body 10, and a walking unit, an image acquisition unit, and a control unit arranged on the body 10. The walking unit comprises a driving wheel 111, a driven wheel 113, and a motor for driving the driving wheel 111. The motor may be a brushless motor with a reduction gearbox and a Hall sensor. After the motor starts, it drives the driving wheel 111 through the reduction gearbox, and by controlling the speed and direction of the two driving wheels, walking actions such as forward and backward straight-line travel, turning in place, and arc travel can be realized. The driven wheel 113 may be a universal wheel; generally 1 or 2 are provided, mainly to support and balance the body.
The image acquisition unit is used to capture the scene within a certain range of viewing angle. In a specific embodiment of the invention, the camera 12 is arranged at the upper part of the body 10 at an angle to the horizontal, so that it can capture the scene within a certain range around the mowing robot; the camera 12 typically captures the scene in a range in front of the robot.
The control unit is a main controller 13 that performs image processing, for example: MCU or DSP, etc.
Further, the robot lawnmower also comprises a working mechanism for performing work and a power supply 14. In this embodiment, the working mechanism is a mowing blade head. The robot also carries various sensors for sensing its walking state, such as tilt, ground-clearance, and collision sensors, a geomagnetic sensor, a gyroscope, and the like, which are not described in detail here.
As shown in fig. 2, a first embodiment of the present invention provides a method for identifying a non-working area based on an image, the method comprising the steps of:
s1, acquiring an original image;
s2, separating an H channel image and a V channel image from the original image;
carrying out binarization processing on the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
performing edge extraction on the V-channel image to form an edge image;
s3, respectively acquiring each pre-judged non-working area in the enclosed binary image and the smallest rectangular outline on the basis of the binary image;
acquiring a first parameter representing the size of a rectangular outline, a second parameter representing the position of the rectangular outline and a third parameter representing the effective pixel value ratio of the image based on the binary image;
acquiring a fourth parameter representing the roughness of the image based on the binary image and the edge image;
and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values.
In a specific embodiment of the invention, in step S1, a camera mounted on the mowing robot captures the scene in front of the robot in real time to form the original image; the scene is the ground in the robot's direction of travel. After the main controller receives the original image, it analyzes it, so that the working position at which the robot captured the original image can be judged from the image, as described in detail below. In this example, the format of the original image is not specifically limited; it may be, for example, an RGB or HSV color image.
In step S2, if the original image is in RGB format, it is converted to an HSV image; if the original image is already in HSV format, no conversion is needed. The H-channel image and the V-channel image are then separated directly from the HSV image. The implementations of these operations are well-known prior art, of which there are many, and are not described here.
Binarization processing is then performed on the H-channel image to form a binarized image. The H-channel image is a grayscale image, and in the converted binarized image the pixels take only two gray values, for example 0 and 255; the binarization process makes the whole image show a distinct black-and-white effect.
In a preferred embodiment of the invention, binarizing the H-channel image to form the binarized image includes: sequentially performing filtering and normalization on the H-channel image to obtain a first H-channel preprocessed image, which removes noise from the H-channel image; performing edge extraction on the first H-channel preprocessed image to form a second H-channel preprocessed image, for example using the Canny edge detection algorithm; and sequentially performing dilation and closing operations on the second H-channel preprocessed image to form the binarized image. Dilation and closing are both conventional image processing methods and are not described further here.
Preferably, performing edge extraction on the V-channel image to form the edge image includes: sequentially performing filtering and normalization on the V-channel image to obtain a V-channel preprocessed image, which removes noise from the V-channel image; and performing edge extraction on the V-channel preprocessed image, for example using the Canny algorithm, to form the edge image.
Preferably, step S3 includes: configuring the first parameter A1 as: at least one of the area, the diagonal length, the width, and the height of the rectangular outline, and the number of pixels of the pre-judged non-working area within the rectangular outline; configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline; configuring the third parameter A3 as: the ratio of the number of pixels in the pre-judged non-working area of the rectangular outline whose corresponding H-channel values fall within a preset chromaticity range to the total number of pixels covered by the pre-judged non-working area within the rectangular outline; configuring the fourth parameter A4 as: the ratio of the number of pixels covered by the pre-judged non-working area of the rectangular outline that correspond to edge pixels of the pre-judged non-working area in the edge image to the total number of pixels covered by the pre-judged non-working area within the rectangular outline;
confirming whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with the preset parameter values comprises:
if the following conditions are met simultaneously: A1 is larger than a preset size threshold, A2 is larger than a preset coordinate threshold, A3 is smaller than a preset number threshold, and A4 is smaller than a preset roughness threshold, then confirming that the area in front of the robot is a real non-working area.
Here, when the first parameter A1 is the area of the rectangular outline, the preset size threshold is the corresponding area threshold; correspondingly, when A1 is the diagonal length, the width, the height, or the number of pixels of the pre-judged non-working area within the rectangular outline, the preset size threshold is the corresponding diagonal length threshold, width threshold, height threshold, or pixel-count threshold.
in a specific example of the invention, an area threshold value is configured to be 5% of the area of the original image, a diagonal length threshold value is 22% of the diagonal length of the original image, and a pixel number threshold value of a pre-judging non-working area in a rectangular outline is 5% of the pixel number in the original image; the rectangular outline width threshold is 50% of the width of the original image and the rectangular outline height is 50% of the height of the original image.
The second parameter A2 is configured as a coordinate parameter value of the rectangular outline. The coordinate parameter value may be the Y-axis coordinate of the lower right corner of the rectangular outline, the coordinate of the center of the rectangular outline, the average of the Y values of the pixel coordinates of the outline, and so on; further examples are not given here.
The third parameter A3 is configured as the effective pixel value ratio within the rectangular outline, that is, the ratio of the number of pixels in the pre-judged non-working area of the rectangular outline whose corresponding H-channel values fall within the preset chromaticity range to the total number of pixels covered by the pre-judged non-working area within the rectangular outline. The preset chromaticity range is usually the chromaticity range of the common color of the lawn, such as the green chromaticity range. Preferably, the preset number threshold for A3 is configured as 0.8.
the fourth parameter A4 is configured to be the roughness of the rectangular contour, that is, the ratio of the number of pixels covered by the pre-judging non-working area of the rectangular contour to the number of pixels covered by the pre-judging non-working area of the rectangular contour, which correspond to the edge of the pre-judging non-working area of the edge image. Preferably, configuration A4 is 0.26.
Further, if it is determined that the area in front of the robot is a real non-working area, the method further includes: driving the robot to execute obstacle avoidance logic, for example reversing and steering away, which is not described in detail here.
Referring to fig. 3, there is provided a system for identifying a non-working area based on an image, the system comprising: an obtaining module 100, a converting module 200 and an analyzing module 300.
The obtaining module 100 is configured to obtain an original image;
the conversion module 200 is configured to separate an H-channel image and a V-channel image from an original image;
carrying out binarization processing on the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
performing edge extraction on the V-channel image to form an edge image;
the analysis module 300 is configured to, based on the binarized image, respectively obtain a minimum rectangular contour enclosing each pre-determined non-working area in the binarized image;
acquiring a first parameter representing the size of a rectangular outline, a second parameter representing the position of the rectangular outline and a third parameter representing the ratio of effective pixel values based on the binary image;
acquiring a fourth parameter representing the roughness of the image based on the binary image and the edge image;
and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values.
Specifically, the obtaining module 100 is configured to implement step S1; the conversion module 200 is configured to implement step S2; the analysis module 300 is used for implementing the step S3 and executing the calculation of the obstacle avoidance logic for driving the robot; it can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
In summary, the method and system for identifying the non-working area based on the image can accurately identify the non-working area through the image identification, thereby saving the cost and improving the service performance of the robot.
In the several embodiments provided in the present application, it should be understood that the disclosed modules, systems and methods may be implemented in other manners. The above-described system embodiments are merely illustrative, and the division of the modules into only one logical functional division may be implemented in practice in other ways, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, that is, may be located in one place, or may also be distributed on a plurality of network modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional module in the embodiments of the present application may be integrated into one analysis module, or each module may exist alone physically, or 2 or more modules may be integrated into one module. The integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
Finally, it should be noted that: the above embodiments are only intended to illustrate the technical solution of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may be modified or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (10)

1. A method for identifying non-working areas based on an image, the method comprising:
acquiring an original image;
separating an H channel image and a V channel image from an original image;
carrying out binarization processing on the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area with a first pixel value and/or at least one pre-judged non-working area with a second pixel value;
performing edge extraction on the V-channel image to form an edge image;
acquiring, based on the binarized image, a minimum rectangular outline enclosing each pre-judged non-working area in the binarized image;
acquiring a first parameter representing the size of a rectangular outline, a second parameter representing the position of the rectangular outline and a third parameter representing the effective pixel value ratio of the image based on the binary image;
acquiring a fourth parameter representing the roughness of the image based on the binary image and the edge image;
and determining whether the area in front of the robot is a real non-working area according to how the first, second, third, and fourth parameters compare with preset parameter values.
2. The method for identifying a non-working area based on an image according to claim 1, wherein binarizing the H-channel image to form a binarized image comprises:
sequentially carrying out filtering processing and normalization processing on the H channel image to obtain a first H channel preprocessed image;
performing edge extraction on the first H channel preprocessed image to form a second H channel preprocessed image;
and sequentially carrying out expansion and closing operation processing on the second H channel preprocessed image to form a binary image.
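The dilation and closing operations of claim 2 can be sketched with plain lists and an assumed 3×3 structuring element (the patent does not specify a kernel size). Closing is dilation followed by erosion; its practical effect, shown on a toy edge below, is to bridge small gaps in an edge map.

```python
def dilate(img):
    """3x3 dilation: a pixel becomes 1 if any 8-neighbour (or itself) is 1."""
    h, w = len(img), len(img[0])
    return [[int(any(img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))))
             for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 erosion: a pixel stays 1 only if its whole (clamped) window is 1."""
    h, w = len(img), len(img[0])
    return [[int(all(img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))))
             for x in range(w)] for y in range(h)]

def close_op(img):
    return erode(dilate(img))

# A horizontal edge with a one-pixel gap; closing bridges the gap.
edge = [[0] * 7,
        [0] * 7,
        [1, 1, 1, 0, 1, 1, 1],
        [0] * 7,
        [0] * 7]
closed = close_op(edge)
```

With OpenCV this whole step would be `cv2.dilate` followed by `cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)`.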
3. The method according to claim 1, wherein performing edge extraction on the V-channel image to form the edge image comprises:
sequentially filtering and normalizing the V-channel image to obtain a V-channel preprocessed image; and
performing edge extraction on the V-channel preprocessed image to form the edge image.
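A hedged sketch of claim 3's edge extraction: after the V channel has been smoothed and normalized, mark a pixel as an edge wherever the local gradient magnitude exceeds a threshold. The central-difference gradient and the threshold value are assumptions; the patent does not name a specific operator (a real system would typically use Sobel or Canny).

```python
def edge_extract(v_channel, threshold=0.3):
    """Gradient-magnitude edge map over a normalized V channel in [0, 1]."""
    h, w = len(v_channel), len(v_channel[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = v_channel[y][x + 1] - v_channel[y][x - 1]  # horizontal gradient
            gy = v_channel[y + 1][x] - v_channel[y - 1][x]  # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# Bright region on the left, dark on the right: a vertical edge near x = 2.
v = [[1.0, 1.0, 1.0, 0.0, 0.0] for _ in range(4)]
edges = edge_extract(v)
```

Note the central difference spans two pixels, so the detected edge is two columns wide; Canny's non-maximum suppression is what thins this to a single-pixel edge in practice.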
4. The method according to claim 2, further comprising:
configuring the first parameter A1 as at least one of: the area, diagonal length, width, or height of the rectangular outline, or the number of pixels of the pre-judged non-working area within the rectangular outline;
configuring the second parameter A2 as: a coordinate parameter value of the rectangular outline;
configuring the third parameter A3 as: the ratio of the number of pixels of the pre-judged non-working area of the rectangular outline whose corresponding H-channel image pixels fall within a preset chromaticity range to the total number of pixels of the pre-judged non-working area of the rectangular outline; and
configuring the fourth parameter A4 as: the ratio of the number of pixels of the pre-judged non-working area of the rectangular outline that correspond to edge pixels of the pre-judged non-working area in the edge image to the total number of pixels covered by the pre-judged non-working area within the rectangular outline;
wherein determining whether a real non-working area lies ahead of the robot according to the magnitude relations between the first, second, third, and fourth parameters and the preset parameter values comprises:
confirming that a real non-working area lies ahead of the robot if the following conditions are satisfied simultaneously: A1 is greater than a preset size threshold, A2 is greater than a preset coordinate threshold, A3 is less than a preset ratio threshold, and A4 is less than a preset roughness threshold.
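The decision rule of claim 4 can be sketched as follows. All threshold values are illustrative assumptions (the patent discloses none), A1 is instantiated as the rectangle area (one of the options the claim lists), A2 as the rectangle's top coordinate, and the counts `in_hue_range` and `on_edge` are taken as precomputed inputs rather than derived from the H-channel and edge images.

```python
def bounding_rect(mask):
    """Minimum axis-aligned rectangle over all 1-pixels, as (x, y, w, h).

    The patent computes one minimum rectangle per pre-judged region; this
    simplified sketch assumes the mask holds a single region.
    """
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def is_non_working(mask, in_hue_range, on_edge,
                   size_thr=3, coord_thr=0, hue_ratio_thr=0.5, rough_thr=0.5):
    """Apply claim 4's four simultaneous conditions to one pre-judged region."""
    x, y, w, h = bounding_rect(mask)
    n = sum(sum(row) for row in mask)   # pixels in the pre-judged area
    a1 = w * h                          # A1: rectangle area
    a2 = y                              # A2: a coordinate of the rectangle
    a3 = in_hue_range / n               # A3: share of pixels in the preset hue range
    a4 = on_edge / n                    # A4: share of pixels on edges ("roughness")
    return (a1 > size_thr and a2 > coord_thr
            and a3 < hue_ratio_thr and a4 < rough_thr)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
decision = is_non_working(mask, in_hue_range=1, on_edge=1)
```

The intuition behind A3 and A4: a region with few grass-hued pixels (low A3) and a smooth interior (low A4) is unlikely to be lawn, so it is confirmed as a real non-working area.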
5. The method for identifying a non-working area based on an image according to any one of claims 1 to 4, wherein, if it is confirmed that a real non-working area lies ahead of the robot, the method further comprises:
driving the robot to execute obstacle avoidance logic.
6. A system for identifying a non-working area based on an image, the system comprising:
an acquisition module configured to acquire an original image;
a conversion module configured to: separate an H-channel image and a V-channel image from the original image;
binarize the H-channel image to form a binarized image, wherein the binarized image comprises at least one pre-judged working area having a first pixel value and/or at least one pre-judged non-working area having a second pixel value; and
perform edge extraction on the V-channel image to form an edge image; and
an analysis module configured to: acquire, based on the binarized image, a minimum rectangular outline enclosing each pre-judged non-working area in the binarized image;
acquire, based on the binarized image, a first parameter characterizing the size of the rectangular outline, a second parameter characterizing the position of the rectangular outline, and a third parameter characterizing the valid pixel value ratio of the image;
acquire, based on the binarized image and the edge image, a fourth parameter characterizing the roughness of the image; and
determine whether a real non-working area lies ahead of the robot according to the magnitude relations between the first, second, third, and fourth parameters and preset parameter values.
7. The system for identifying a non-working area based on an image according to claim 6, wherein the conversion module is configured to:
sequentially filter and normalize the H-channel image to obtain a first H-channel preprocessed image;
perform edge extraction on the first H-channel preprocessed image to form a second H-channel preprocessed image; and
sequentially perform dilation and closing operations on the second H-channel preprocessed image to form the binarized image.
8. The system for identifying a non-working area based on an image according to claim 6, wherein the conversion module is configured to: sequentially filter and normalize the V-channel image to obtain a V-channel preprocessed image; and
perform edge extraction on the V-channel preprocessed image to form the edge image.
9. The system for identifying a non-working area based on an image according to claim 7, wherein the analysis module is configured to:
configure the first parameter A1 as at least one of: the area, diagonal length, width, or height of the rectangular outline, or the number of pixels of the pre-judged non-working area within the rectangular outline;
configure the second parameter A2 as: a coordinate parameter value of the rectangular outline;
configure the third parameter A3 as: the ratio of the number of pixels of the pre-judged non-working area of the rectangular outline whose corresponding H-channel image pixels fall within a preset chromaticity range to the total number of pixels of the pre-judged non-working area of the rectangular outline; and
configure the fourth parameter A4 as: the ratio of the number of pixels of the pre-judged non-working area of the rectangular outline that correspond to edge pixels of the pre-judged non-working area in the edge image to the total number of pixels covered by the pre-judged non-working area within the rectangular outline;
wherein determining whether a real non-working area lies ahead of the robot according to the magnitude relations between the first, second, third, and fourth parameters and the preset parameter values comprises:
confirming that a real non-working area lies ahead of the robot if the following conditions are satisfied simultaneously: A1 is greater than a preset size threshold, A2 is greater than a preset coordinate threshold, A3 is less than a preset ratio threshold, and A4 is less than a preset roughness threshold.
10. The system for identifying a non-working area based on an image according to any one of claims 6 to 9, wherein, if it is confirmed that a real non-working area lies ahead of the robot, the analysis module is further configured to: drive the robot to execute obstacle avoidance logic.
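One way to organise the acquisition / conversion / analysis modules of claim 6 in code is as three injected stages wired by a small pipeline class. This decomposition and all names are assumptions for illustration; the stage bodies here are stubs standing in for the image processing of claims 1 to 4.

```python
class NonWorkingAreaSystem:
    """Wires claim 6's three modules; each stage is injected so the sketch
    stays self-contained and each module can be tested in isolation."""

    def __init__(self, acquire, convert, analyse):
        self.acquire = acquire    # acquisition module: camera -> original image
        self.convert = convert    # conversion module: image -> (binary, edges)
        self.analyse = analyse    # analysis module: (binary, edges) -> bool

    def run(self):
        original = self.acquire()
        binary, edges = self.convert(original)
        return self.analyse(binary, edges)   # True => real non-working area ahead

# Stub stages standing in for the real image-processing steps.
system = NonWorkingAreaSystem(
    acquire=lambda: "raw-image",
    convert=lambda img: ("binary", "edges"),
    analyse=lambda b, e: b == "binary" and e == "edges")
result = system.run()
```

A real implementation would plug the H/V split and binarization into `convert` and the A1-A4 threshold test of claim 9 into `analyse`, with `run()` called once per camera frame.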
CN202110277132.9A 2021-03-15 2021-03-15 Method and system for identifying non-working area based on image Pending CN115147714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277132.9A CN115147714A (en) 2021-03-15 2021-03-15 Method and system for identifying non-working area based on image


Publications (1)

Publication Number Publication Date
CN115147714A true CN115147714A (en) 2022-10-04

Family

ID=83403964


Country Status (1)

Country Link
CN (1) CN115147714A (en)

Similar Documents

Publication Publication Date Title
CN113761970B (en) Method, system, robot and storage medium for identifying working position based on image
CN113822095B (en) Method, system, robot and storage medium for identifying working position based on image
CN111353431B (en) Automatic working system, automatic walking equipment, control method thereof and computer readable storage medium
EP4242964A1 (en) Obstacle recognition method applied to automatic traveling device and automatic traveling device
CN113805571B (en) Robot walking control method, system, robot and readable storage medium
CN113822094B (en) Method, system, robot and storage medium for identifying working position based on image
CN115147714A (en) Method and system for identifying non-working area based on image
CN114494842A (en) Method and system for identifying working area based on image and robot
CN115147713A (en) Method, system, device and medium for identifying non-working area based on image
EP4242910A1 (en) Obstacle recognition method, apparatus and device, medium, and weeding robot
EP4242909A1 (en) Obstacle recognition method and apparatus, and device, medium and weeding robot
WO2021238000A1 (en) Boundary-following working method and system of robot, robot, and readable storage medium
CN115147712A (en) Method and system for identifying non-working area based on image
EP4123406A1 (en) Automatic working system, automatic walking device and method for controlling same, and computer-readable storage medium
EP4242908A1 (en) Obstacle recognition method and apparatus, device, medium and weeding robot
CN113848872B (en) Automatic walking device, control method thereof and readable storage medium
JPH07222504A (en) Method for controlling travel of self-traveling working vehicle
CN117274949A (en) Obstacle based on camera, boundary detection method and automatic walking equipment
JP3502652B2 (en) Travel control method for autonomous traveling work vehicle
JPH07222505A (en) Method for controlling travel of self-traveling working vehicle
CN113496146A (en) Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination