CN113311830A - Automatic walking equipment and target area identification method - Google Patents


Info

Publication number
CN113311830A
Authority
CN
China
Prior art keywords
walking
cell
area
cells
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110518427.0A
Other languages
Chinese (zh)
Inventor
邵勇
傅睿卿
郭会文
吴新宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Positec Power Tools Suzhou Co Ltd
Original Assignee
Positec Power Tools Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Positec Power Tools Suzhou Co Ltd filed Critical Positec Power Tools Suzhou Co Ltd
Priority to CN202110518427.0A priority Critical patent/CN113311830A/en
Publication of CN113311830A publication Critical patent/CN113311830A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for identifying the target area in which an automatic walking device travels, characterized by comprising the following steps: S10, acquiring an image of the target area in which the automatic walking device travels; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying whether the target area corresponding to each cell is a working area according to the color information of designated pixels in the cell and the texture characteristic value of the cell, and obtaining an identification result; S60, for each cell, changing or maintaining the identification result obtained in step S30 according to the identification results of its adjacent cells.

Description

Automatic walking equipment and target area identification method
Technical Field
The invention relates to automatic walking equipment and a method for identifying a target area by the automatic walking equipment.
Background
With the continuous progress of computer technology and artificial intelligence technology, automatic walking devices similar to intelligent robots have gradually entered people's lives. Companies such as Samsung and Electrolux have developed fully automatic vacuum cleaners and brought them to market. Such a vacuum cleaner is small in size and integrates environment sensors, a self-driving system, a vacuum system, a battery, and a charging system. Without manual control, it cruises indoors automatically, returns to its docking station when its energy runs low, docks and recharges, and then continues cruising and vacuuming. Meanwhile, companies such as Husqvarna have developed similar intelligent lawn mowers, which can mow and recharge automatically in a user's lawn without user intervention. Because such a system needs to be set up only once and thereafter requires no further attention, freeing the user from tedious and time-consuming housework such as cleaning and lawn maintenance, automatic mowing systems have become very popular.
The walking area of a conventional automatic lawn mower is generally defined by laying a physical boundary, such as a wire or fence, which the mower detects. Laying the boundary wire is troublesome, time-consuming, and labor-intensive; moreover, non-grass areas may lie inside the boundary line while areas that need mowing lie outside it, so the physical-boundary approach is neither flexible nor convenient.
Solutions that identify and determine the walking area by electronic means also exist, but because walking areas are highly varied, existing methods often produce noise, and the accuracy of identifying the walking area is low. This misleads the judgment of the automatic walking device, makes it prone to leaving the walking area, and to some extent impairs its normal operation.
There is therefore a need for improvements in the prior art to allow more accurate identification of the walking area, thereby facilitating the operation of automated walking equipment.
Disclosure of Invention
In view of the above, an objective of the present invention is to provide a method for accurately identifying a target area and controlling an automatic walking device to walk according to an identification result, and an automatic walking device using the method.
To achieve the above objective, the invention adopts the following technical solution: a method for controlling the walking of an automatic walking device, characterized by comprising the following steps: S10, acquiring an image of the target area in which the automatic walking device walks; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying whether the target area corresponding to each cell is a working area according to the color information of designated pixels in the cell and the texture characteristic value of the cell, and obtaining an identification result; S40, dividing the image into a plurality of sub image blocks, each sub image block comprising a plurality of adjacent cells, and judging whether the target area corresponding to each sub image block is a walkable area according to the identification results of the cells in the sub image block, obtaining a judgment result; and S50, controlling the walking direction of the automatic walking device according to the judgment result.
Preferably, after obtaining the recognition result, the method further includes, for each cell, adjusting the recognition result of the cell according to the recognition result of the cell and its neighboring cells.
Preferably, the method further comprises the steps of: s61, selecting a cell and acquiring the identification result; s62, counting the number of adjacent cells having the same recognition result as that in the step S61; s63, calculating the proportion of the number in the step S62 in the total number of the adjacent cells; s64, if the proportion exceeds or reaches the preset value, keeping the identification result of the cell appointed in the step S61 unchanged; if the ratio is smaller than the preset value, the recognition result of the cell designated in step S61 is changed, wherein the fourth preset value is greater than or equal to 50%.
Preferably, the adjacent cells include cells laterally and longitudinally adjacent to the selected cell.
Preferably, the adjacent cells further comprise cells adjacent to the selected cell in a direction at an angle of 45 degrees to the transverse and longitudinal directions.
Preferably, the method further comprises the steps of: s66, selecting a cell, and acquiring the reliability Y1 of the cell on the recognition result, wherein the reliability Y1 is a numerical value between 0% and 100%; s67, calculating 1-Y1, and marking the result as N1; s68, obtaining the reliability Ya and Yb … of all adjacent cells of the selected cell to the identification result, wherein the reliability Ya and Yb … are a numerical value between 0% and 100%; s69, calculating 1-Ya and 1-Yb …, and marking the results as Na and Nb …; s70, obtaining a weighted sum Y2 by weighted summation of Ya and Yb …, and obtaining a weighted sum N2 by weighted summation of Nb and Nc …, wherein the weighting coefficients are the same; s71, respectively calculating results of Y1+ alpha N1 and Y2+ alpha N2 and comparing the results, wherein alpha is a coefficient; and S72, if the result of Y1+ alpha N1 is greater than or equal to the result of Y2+ alpha N2, keeping the identification result of the designated cell unchanged, and if the result of Y1+ alpha N1 is less than the result of Y2+ alpha N2, changing the identification result of the designated cell.
Preferably, step S40 further includes the steps of: S41, dividing the image into a plurality of sub image blocks and acquiring the number of cells contained in each sub image block, recorded as B; collecting the identification results of the cells in the sub image block and counting the number of cells whose identification result is the working area, recorded as A; if the ratio A:B is smaller than a third preset value, judging that the target area corresponding to the sub image block is not a walkable area; otherwise, judging that the target area corresponding to the sub image block is a walkable area.
Preferably, the method further includes continuously photographing the same target area to form multiple frames of images, and judging whether the target area corresponding to a sub image block is a walkable area according to the judgment results obtained for the same sub image block in each frame.
Preferably, the method further comprises the steps of: s81, continuously shooting the same target area to form a multi-frame image; s82, selecting a sub image block in one frame of image, and obtaining a judgment result through the step S40; s83, setting an initial parameter value, calculating the parameter value according to the judgment result obtained in the step S82, and if the judgment result is a walkable area, adding a first parameter associated with the judgment result to the initial parameter value to form a current parameter value; if the judgment result is not the walkable area, keeping the parameter value unchanged; s84, selecting the next frame of image, calculating the current parameter value according to the judgment result obtained in the step S82, and if the judgment result is a walkable area, adding a first parameter associated with the judgment result to the current parameter value to form a new current parameter value; if the judgment result is not the walkable area, keeping the current parameter value unchanged; and S85, comparing the current parameter value with the threshold value, and if the current parameter value is greater than or equal to the threshold value, determining that the target area corresponding to the sub image block is a walkable area.
Preferably, step S84 further includes: after the next frame is selected and before the current parameter value is updated, subtracting a preset second parameter from the current parameter value, wherein the second parameter is smaller than the first parameter.
Preferably, the sub image blocks include a middle part, a left part and a right part, which respectively correspond to a middle area, a left area and a right area of the target area.
To achieve the above purpose, the invention adopts another technical solution: an automatic walking device, characterized by comprising: a housing; an image acquisition device located on the housing, the image acquisition device being used for photographing a target area and generating an image; a walking module that drives the automatic walking device to walk; and a main control module connected to the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module comprises a dividing unit, an identification unit, a judgment unit, and a control unit; the dividing unit divides the image into a plurality of cells and transmits the division result to the identification unit; the identification unit identifies whether the target area corresponding to each cell is a working area and transmits the identification result to the judgment unit; the judgment unit judges whether the area corresponding to a sub image block comprising a plurality of the cells is a walkable area and transmits the judgment result to the control unit; and the control unit controls the walking direction of the walking module according to the judgment result.
Preferably, the main control module further includes a correction unit which, for each cell, adjusts the identification result of the cell according to the identification results of the cell and its adjacent cells.
Preferably, the adjacent cells include cells laterally and longitudinally adjacent to the selected cell.
Preferably, the adjacent cells further comprise cells adjacent to the selected cell in a direction at an angle of 45 degrees to the transverse and longitudinal directions.
Preferably, the judgment unit further includes a sub image block dividing unit, the sub image block dividing unit divides the image into a plurality of sub image blocks, and the judgment unit judges whether the target area corresponding to a sub image block is a walkable area according to the identification results of the cells contained in the sub image block.
Preferably, the sub image blocks include three sub image blocks of a middle portion, a left portion and a right portion.
Preferably, the main control module further includes a recording unit in which an initial parameter value is recorded; the image acquisition device continuously photographs the same target area to form multiple frames of images, the judgment unit judges the same sub image block in each frame and obtains a judgment result, and the recording unit updates the parameter value according to the judgment result; when the parameter value is greater than or equal to a threshold value, the target area corresponding to the sub image block is determined to be a walkable area.
Compared with the prior art, the invention has the following beneficial effects: the image of the target area is divided into cells, each cell is identified at the micro level, and the identification results of the cells are then combined at the macro level for comprehensive judgment, which improves the accuracy of identifying the target area and allows the automatic walking device to walk more accurately within it.
Another object of the present invention is to provide a method for accurately identifying a target area and an automatic walking device using the same.
To achieve the above purpose, the invention adopts the following technical solution: a method for identifying the target area in which an automatic walking device walks, the method comprising the steps of: S10, acquiring an image of the target area in which the automatic walking device walks; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying whether the target area corresponding to each cell is a working area according to the color information of designated pixels in the cell and the texture characteristic value of the cell, and obtaining an identification result; S60, for each cell, changing or maintaining the identification result obtained in step S30 according to the identification results of its adjacent cells.
Preferably, the S60 step further includes the steps of: s61, appointing a cell and obtaining the recognition result; s62, counting the number of adjacent cells having the same recognition result as that in the step S61; s63, calculating the proportion of the number in the step S62 to the total number of the adjacent cells; s64, if the proportion exceeds or reaches a fourth preset value, keeping the identification result of the cell appointed in the step S61 unchanged; if the ratio is smaller than a fourth preset value, changing the recognition result of the cell designated in step S61, where the fourth preset value is greater than or equal to 50%; s65, executing the steps S61-S64 for all the cells.
Preferably, the adjacent cells include cells adjacent to the cell in the transverse and longitudinal directions.
Preferably, the adjacent cells further include cells adjacent to the cell in directions at a 45-degree angle to the transverse and longitudinal directions.
Preferably, the S60 step further includes the steps of: s66, a cell is designated, and the reliability Y1 of the cell for the recognition result is obtained, wherein the reliability Y1 is a numerical value between 0% and 100%; s67, calculating 1-Y1, and marking the result as N1; s68, obtaining the reliability Ya and Yb … of all adjacent cells of the specified cell to the identification result, wherein the reliability Ya and Yb … are a numerical value between 0% and 100%; s69, calculating 1-Ya, 1-Yb …, and marking the result as Na and Nb …; s70, obtaining a weighted sum Y2 by weighted summation of Ya and Yb …, and obtaining a weighted sum N2 by weighted summation of Nb and Nc …, wherein the weighting coefficients are the same; s71, respectively calculating results of Y1+ alpha N1 and Y2+ alpha N2 and comparing the results, wherein alpha is a coefficient, if Y1+ alpha N1 is greater than or equal to Y2+ alpha N2, the identification result of the designated cell is maintained unchanged, and if the result of Y1+ alpha N1 is less than the result of Y2+ alpha N2, the identification result of the designated cell is changed; s72, executing the steps S66-S71 on all the cells until the identification results of all the cells are not changed.
To achieve the above purpose, another technical solution adopted by the present invention is: an automatic walking device, characterized by comprising: a housing; an image acquisition device located on the housing, the image acquisition device being used for photographing a target area and generating an image; a walking module that drives the automatic walking device to walk; and a main control module connected to the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module comprises a dividing unit, an identification unit, and a correction unit; the dividing unit divides the image into a plurality of cells; the identification unit identifies whether the target area corresponding to each cell is a working area and transmits the identification result to the correction unit; and the correction unit, for each cell, changes or maintains the identification result obtained by the identification unit according to the identification results of the adjacent cells.
Preferably, the cells adjacent to a cell include cells adjacent to it in the transverse and longitudinal directions.
Preferably, the cells adjacent to a cell further include cells adjacent to it in directions at a 45-degree angle to the transverse and longitudinal directions.
Compared with the prior art, the invention has the following beneficial effects: the image of the target area is divided into cells, each cell is identified at the micro level, and the identification results of a plurality of cells are then combined at the macro level for correction, which improves the accuracy of identifying the target area.
Drawings
The above objects, aspects and advantages of the present invention will become apparent from the following detailed description of specific embodiments, which is to be read in connection with the accompanying drawings.
The same reference numbers and symbols in the drawings and the description are used to indicate the same or equivalent elements.
Fig. 1 is a schematic view of an automatic walking apparatus walking on a target area according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the automatic walking apparatus in fig. 1 photographing a target area.
Fig. 3 is a schematic view of the automatic walking apparatus of fig. 1 dividing a target area.
Fig. 4 is a schematic diagram of respective partial modules of the automatic walking apparatus of fig. 1.
Fig. 5 is a flowchart illustrating a method for controlling the automatic walking device to walk according to an embodiment of the present invention.
Fig. 6 is a detailed flowchart of step S60 between step S30 and step S40 according to an embodiment of the present invention.
Fig. 7 is a detailed flowchart of step S60 between step S30 and step S40 according to another embodiment of the present invention.
FIG. 8 is a detailed flow diagram of step S40 of FIG. 5 in one embodiment.
Fig. 9 is a detailed flowchart of step S80 between step S40 and step S50 according to an embodiment of the present invention.
1. automatic walking device; 2. image acquisition device; 3. main control module;
4. walking module; 5. working module; 6. energy module;
9. drive wheel; 10. housing; 11. auxiliary wheel;
12. dividing unit; 13. color extraction unit; 14. calculation unit;
15. comparison unit; 16. storage unit; 17. texture extraction unit;
18. texture comparison unit; 19. identification unit; 20. information extraction unit;
21. information changing unit; 22. judgment unit; 23. sub image block dividing unit;
28. target area; 32. correction unit; 33. recording unit;
50. working area; 51. non-working area; 52. island
Detailed Description
The following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thereby clearly define the scope of protection of the invention.
Fig. 1 is a schematic view illustrating an automatic walking device walking in a target area according to an embodiment of the present invention. The automatic walking device 1 can walk automatically on the ground or another working surface, and can work while walking. The automatic walking device 1 may be a robotic vacuum cleaner, a robotic lawnmower, a robotic trimmer, or the like. In this embodiment, the automatic walking device is a robotic lawnmower. According to the object of the work, the ground may be divided into a working area 50 and a non-working area 51. The working area 50 is the area through which the user wants the automatic walking device to walk and perform work, and the non-working area 51 is the area through which the user does not want it to pass. In the present embodiment, since the device is a robotic lawnmower, its work is mowing. Thus, the working area 50 may be, but is not limited to, grass, and the non-working area 51 may be, but is not limited to, concrete roads, trees, ponds, fences, stakes, corners, and the like. Typically, grass grows in patches; a non-working area may surround the grass or may be surrounded by grass to form an island 52, so islands 52 are also a form of non-working area. In the present invention, no boundary line need be provided at the boundary between the non-working area 51 and the working area 50; instead, the automatic walking device 1 distinguishes the working area 50 from the non-working area 51 by their visual differences.
Referring to fig. 2 and 3, the automatic walking device 1 has a housing 10 and an image acquisition device 2 mounted on the housing 10. The image acquisition device 2 captures an image of the area in front of the automatic walking device 1. The ground area in front of the automatic walking device 1 is the target area 28 in which it will walk. The target area 28 may be a working area, a non-working area, or a mixture of the two. To walk normally within the working area, the automatic walking device 1 must identify the current target area 28; it therefore photographs the target area 28 with the image acquisition device 2 and forms an image of it. The method of controlling the automatic walking device thus includes step S10 of generating an image of the target area in which the automatic walking device 1 walks. In the present embodiment, the viewing range of the image acquisition device 2 is a fixed area, for example a fixed viewing angle of 90 to 120 degrees. In other optional embodiments, the viewing range may be movable, and a certain angular range within the viewing angle may be selected as the actual viewing range; for example, a 90-degree range in the middle of a 120-degree viewing angle may be selected. The image includes information about the target area, such as its topography, color distribution, texture, and so on.
Referring to fig. 4, in addition to the image acquisition device 2, the automatic walking device 1 further includes a main control module 3, a walking module 4, a working module 5, and an energy module 6. The main control module 3 is electrically connected with the walking module 4, the working module 5, the energy module 6, and the image acquisition device 2, and controls the operation of the automatic walking device 1.
The walking module 4 comprises a wheel set and a walking motor driving the wheel set. The wheel set can be arranged in many ways. Typically it comprises a drive wheel 9 driven by the walking motor and an auxiliary wheel 11 that helps support the housing 10; the number of drive wheels 9 may be 1, 2, or more. As shown in fig. 2, the direction of movement of the automatic walking device 1 is defined as the front side, the opposite side as the rear side, and the two sides adjoining the front and rear as the left and right sides respectively. In the present embodiment, the automatic walking device 1 has 2 drive wheels 9: a left wheel 91 and a right wheel 92, arranged symmetrically about the central axis of the automatic walking device 1. The left and right wheels 91, 92 are preferably located at the rear of the housing 10 and the auxiliary wheel 11 at the front, although alternative arrangements are possible in other embodiments.
In the present embodiment, the left wheel 91 and the right wheel 92 are each coupled to a drive motor so that differential output can be used to control steering, allowing the device to turn left or right. The left wheel 91 and the right wheel 92 can also be driven at equal speeds, so that the device moves straight forward or backward. A drive motor may be connected directly to a drive wheel, but a transmission, such as a planetary gear train, may also be provided between the drive motor and the drive wheel 9. In other embodiments there may be 2 drive wheels and 1 drive motor; in that case the drive motor drives the left wheel 91 via a first transmission and the right wheel 92 via a second transmission, i.e., one and the same motor drives the left wheel 91 and the right wheel 92 through different transmissions.
The work module 5 is used to perform a specific work. In this embodiment, the working module 5 is embodied as a cutting module, comprising a cutting member (not shown) for cutting grass and a cutting motor (not shown) for driving the cutting member.
The energy module 6 supplies energy for the operation of the automatic walking device 1. Its energy source may be gasoline, a battery pack, or the like; in this embodiment the energy module 6 comprises a rechargeable battery pack arranged within the housing 10. During operation, the battery pack releases electric power to keep the automatic walking device 1 running. When not in operation, the battery may be connected to an external power source to recharge. In particular, as a more user-friendly design, when the automatic walking device 1 detects that the battery is low, it finds the charging station (not shown) by itself to recharge.
As shown in fig. 3, the image acquisition device 2 obtains an image of the target area 28 and transmits it to the main control module 3. The main control module 3 includes a dividing unit 12, which divides the image into a plurality of cells. Together the cells make up the whole image, each cell occupying a portion of it, so each cell contains the identification information of part of the image. The cells are substantially uniform in size and form a matrix array extending in the transverse and longitudinal directions: about 20 cells per row transversely, and about 20 per column longitudinally. In other embodiments the numbers of cells in the two directions may differ. Each cell has at least one adjacent cell. A cell in the middle of the array has 4 adjacent cells, above, below, left, and right of it; in other words, those 4 cells are adjacent to it in the transverse or longitudinal direction. Of course, adjacency is not limited to these 4 directions. In another embodiment a cell has 8 adjacent cells, adding the upper-left, upper-right, lower-left, and lower-right neighbors; in other words, cells may also be adjacent in directions at a 45-degree angle to the transverse and longitudinal directions. A cell at the edge of the array may not have 4 neighbors, but it has at least one. The method of controlling the automatic walking device thus further includes step S20 of dividing the image into a number of cells, each cell being adjacent to at least one other cell.
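To make the cell structure concrete, the following Python sketch (not part of the patent text; the grid size, data layout, and function names are illustrative assumptions) divides an image into a roughly 20 x 20 matrix of cells and enumerates each cell's 4- or 8-neighborhood:

```python
import numpy as np

def divide_into_cells(image: np.ndarray, rows: int = 20, cols: int = 20):
    """Split an H x W x 3 image into a rows x cols grid of cells.

    Returns a dict mapping (row, col) -> the cell's pixel block."""
    h, w = image.shape[:2]
    ch, cw = h // rows, w // cols
    return {(r, c): image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(rows) for c in range(cols)}

def neighbors(r, c, rows=20, cols=20, diagonal=False):
    """Cells adjacent to (r, c): the 4-neighborhood, or the
    8-neighborhood (including 45-degree diagonals) if diagonal=True."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonal:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < rows and 0 <= c + dc < cols]
```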
After the dividing unit 12 finishes dividing the cells, identification of whether the target area corresponding to each cell is a walking area begins. The specific process is as follows: the main control module 3 reads the identification information contained in each cell. In this embodiment the identification information comprises color information and texture information; in other embodiments it may be color information together with other types of information. Since each cell is part of the image, and the image includes information about the target area, each cell necessarily contains the information, including the color information, of the corresponding part of the target area. Reading the identification information helps judge whether the target area corresponding to the cell is a working area or a non-working area. Since grass, the working area, is green, while roads and soil, the non-working areas, are not, a cell whose color information is recognized as green can be taken to correspond to the walking area, and a cell whose color information is not green to a non-walking area. Of course, to further improve accuracy, note that some non-walking areas are also green; for example, some man-made objects have surfaces painted green. In such cases both the walking area and the non-walking area are green and cannot easily be distinguished by color information alone, so identification of texture information must be incorporated. A green non-walking area usually has a regular texture, while grass in the walking area, although also green, does not grow regularly, and its texture is therefore irregular. If a cell's color information is identified as green and its texture is irregular, the cell can be determined to correspond to the walking area; if the color is not green or the texture is regular, the cell is determined to correspond to a non-walking area. Of course, in other embodiments, the walking area can be distinguished from the non-walking area by identifying other information, which is not described in detail here.
For this purpose, the main control module 3 also has a color extraction unit 13, a calculation unit 14, a comparison unit 15, and a storage unit 16. The main control module 3 extracts the color information of a cell, compares it with preset information, and identifies whether the cell is a walking area according to the comparison result. The specific method is as follows. Each cell actually comprises a plurality of pixel units, and the color displayed by each pixel unit is unique. The color extraction unit 13 extracts the color of each pixel unit in the cell, specifically its three primary color (RGB) components. The preset information is information set in advance to serve as a reference for comparison; in this embodiment it is a stored range of values for the three primary color components of a predetermined color, which in this embodiment is green. The three primary color components of a pixel are compared with those of the predetermined color: if each component falls within the corresponding value range, the pixel's color is judged to be the predetermined color; otherwise it is judged to be a non-predetermined color. In another embodiment, the storage unit 16 holds a preset hue value (Hue) range for the predetermined color; after the three primary color components of a pixel are extracted, the RGB components are converted into HSV (Hue, Saturation, Value) values, and the hue value is compared with the preset hue range. If it falls within the range, the pixel's color is determined to be the predetermined color; otherwise it is determined to be a non-predetermined color.
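A minimal sketch of the per-pixel color test under assumed green ranges (the patent leaves the concrete RGB and hue thresholds open):

```python
import colorsys

# Illustrative thresholds only; the patent does not fix concrete ranges.
GREEN_RGB_RANGE = {"r": (0, 120), "g": (80, 255), "b": (0, 120)}
GREEN_HUE_RANGE = (0.20, 0.45)  # hue as a 0..1 fraction (about 72-162 degrees)

def is_predetermined_color_rgb(r, g, b):
    """True when every RGB component falls within its preset range."""
    return all(lo <= v <= hi for v, (lo, hi) in
               zip((r, g, b), (GREEN_RGB_RANGE["r"],
                               GREEN_RGB_RANGE["g"],
                               GREEN_RGB_RANGE["b"])))

def is_predetermined_color_hsv(r, g, b):
    """Variant: convert RGB to HSV and test the hue against a preset range."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return GREEN_HUE_RANGE[0] <= h <= GREEN_HUE_RANGE[1]
```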
The calculation unit 14 then calculates the ratio (hereinafter, simply referred to as a "ratio") of the number of pixels having a predetermined color to the total number of pixels in one cell. The comparison unit 15 compares the ratio with a first preset value, and if the ratio exceeds or reaches the first preset value, the color display of the cell is determined to be the predetermined color. The first preset value may be 50%, 60% or other values. In addition, the first preset value may be stored in the storage unit 16.
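Continuing the sketch, the ratio computed by the calculation unit 14 and its comparison with the first preset value might look like this (the threshold is an assumption):

```python
FIRST_PRESET = 0.5  # the text suggests 50%, 60%, or other values

def color_ratio(cell, color_test=is_predetermined_color_rgb):
    """Fraction of pixels in a cell whose color passes the preset test."""
    h, w = cell.shape[:2]
    hits = sum(bool(color_test(*cell[y, x, :3]))
               for y in range(h) for x in range(w))
    return hits / float(h * w)

def cell_shows_predetermined_color(cell):
    return color_ratio(cell) >= FIRST_PRESET
```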
Whether the cell belongs to a working area or a non-working area can then be identified by combining other information of the cell, which in this embodiment is its texture information. The main control module 3 further includes a texture extraction unit 17 and a texture comparison unit 18. The texture extraction unit 17 extracts a texture characteristic value of the cell. The dispersion of at least one parameter over all pixels in a cell represents the degree of variation of that parameter. If the target area is green paint, the dispersion of the parameter in the image is small, possibly even 0. Because grass has an irregular texture, the dispersion of the parameter over all pixels of a grass cell will be greater than or equal to a preset dispersion, reflecting the irregularity of the cell's texture. In this embodiment, therefore, the texture characteristic value is a parameter dispersion, such as a color dispersion, gray-scale dispersion, or brightness dispersion.
The texture comparison unit 18 compares the texture characteristic value of the cell with a second preset value to determine whether it reaches the second preset value. In this embodiment, the second preset value is a preset dispersion. The texture comparison unit 18 may exist separately or may be integrated into the comparison unit 15, and the second preset value may be stored in the storage unit 16 in advance.
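One plausible texture characteristic under these definitions is the standard deviation of the gray level within the cell; the threshold below is an assumed value:

```python
SECOND_PRESET = 12.0  # assumed preset dispersion threshold

def texture_feature(cell):
    """Texture characteristic value as a dispersion: here, the standard
    deviation of a simple gray level over all pixels in the cell."""
    gray = cell[..., :3].mean(axis=2)
    return float(gray.std())

def cell_texture_is_irregular(cell):
    return texture_feature(cell) >= SECOND_PRESET
```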
The main control module 3 further comprises an identification unit 19. In one embodiment the color extraction unit 13, the calculation unit 14, the comparison unit 15, and the storage unit 16 form part of the identification unit 19 or are integrated into it; in another embodiment they are components arranged alongside the identification unit 19. When the identification unit 19 finds that the proportion of pixels of the predetermined color in the cell reaches or exceeds the first preset value and the texture characteristic value of the cell reaches or exceeds the second preset value, the target area corresponding to the cell is judged to be a walking area; if the proportion does not reach the first preset value or the texture characteristic value does not reach the second preset value, the target area corresponding to the cell is judged to be a non-walking area. The method for controlling the automatic walking device therefore further includes step S30: reading the identification information contained in each cell and identifying the cell, so as to obtain an identification result of whether the target area corresponding to the cell is the working area.
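Putting the two tests together, step S30 for a single cell reduces to a conjunction (a sketch built on the assumed helpers above):

```python
def identify_cell(cell):
    """Step S30: a cell maps to the working area only when both the
    color-ratio test and the texture-dispersion test pass."""
    return cell_shows_predetermined_color(cell) and cell_texture_is_irregular(cell)
```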
The identification unit 19 of the main control module 3 identifies every cell in the image, obtaining identification results for all cells. In a preferred embodiment, the main control module 3 further comprises a correction unit 32, which corrects the cell identification results based on a Markov random field model. In this embodiment the control method therefore further includes step S60, in which the identification results are corrected by a smoothing process. This is because the identification result obtained in step S30 carries a certain error under real conditions, that is, abnormal identification results occur. Correction can fix such abnormal results and thereby improve identification accuracy. Specifically, every cell in the image has at least one adjacent cell, and correction is achieved by weighing the cell's own identification result against the identification results of its neighbors. The correction unit 32 includes an information extraction unit 20 and an information changing unit 21.
In one embodiment, the correction method is as follows. Step S60 includes steps S61, S62, S63, and S64. In step S61, for each cell, the information extraction unit 20 extracts the identification results of all cells adjacent to it. In step S62, the calculation unit 14 counts the number of adjacent cells whose identification result is the same as that of the cell, and the proportion of that number to the total number of adjacent cells. For example, if the cell's identification result is the working area, the calculation unit 14 counts how many adjacent cells are also identified as the working area and computes the proportion of that count to the total number of adjacent cells. In step S63, the comparison unit 15 compares the proportion with a fourth preset value (usually not less than 50%, e.g. 50% or 75%). If the proportion is greater than or equal to the fourth preset value, adjacent cells with the same result form the majority of all adjacent cells, so the information changing unit 21 keeps the cell's identification result unchanged. If the proportion is smaller than the fourth preset value, the information changing unit 21 changes the cell's identification result to the other result; for example, a cell originally identified as a working area becomes a non-working area after the change. As a concrete illustration, suppose a cell is originally identified as the working area. If 3 of its 4 adjacent cells are also identified as the working area, the proportion (3/4 = 75%) exceeds the fourth preset value (assume 50%), so the cell's result is considered consistent with its neighbors and remains the working area. If instead only 1 of the 4 adjacent cells is identified as the working area, the proportion (1/4 = 25%) is below the fourth preset value; the cell's result is inconsistent with its neighbors and may stem from an error, so it is corrected to the non-working area. It should again be emphasized that adjacency here is not limited to the 4 directions up, down, left, and right, but may extend to 8 directions including the diagonals, and that the cell's original identification result may equally be a non-working area. The final step S64 applies this method to all cells, i.e., steps S61-S63 are performed on every cell so that the identification results of the whole image are corrected.
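A compact sketch of this majority-vote smoothing, reusing the assumed `neighbors` helper from above (`results` maps cell coordinates to True for the working area):

```python
FOURTH_PRESET = 0.5  # e.g. 50% or 75%

def smooth_results(results, rows=20, cols=20, diagonal=False):
    """Steps S61-S64: flip a cell's result when too few neighbors agree."""
    corrected = dict(results)
    for (r, c), label in results.items():
        nbrs = neighbors(r, c, rows, cols, diagonal)
        same = sum(1 for n in nbrs if results[n] == label)
        if same / len(nbrs) < FOURTH_PRESET:
            corrected[(r, c)] = not label  # change to the other result
    return corrected
```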
In another embodiment, step S60 includes steps S66 through S72. Step S66 first acquires the reliability of the cell's identification result, usually a value between 0 and 100% (other forms of value are possible). In step S67 the reliability is denoted Y1 and the unreliability N1, where N1 = 1-Y1; Y1 may be called the similar reliability and N1 the dissimilar reliability, and both may be stored in the storage unit 16. Step S68 then acquires the reliabilities of the cells adjacent to the cell, likewise values between 0 and 100%. If the cell has 8 adjacent cells, step S69, analogous to step S67, yields 8 similar reliabilities, denoted Ya, Yb, Yc …, and 8 dissimilar reliabilities, denoted Na, Nb, Nc …. Step S70 then forms a weighted sum Y2 of the 8 similar reliabilities; in this embodiment the weighting coefficients are equal, preferably 1/8, although they may take different values. The 8 dissimilar reliabilities are likewise weighted and summed to obtain N2; their weighting coefficients may be the same and may match those used for the similar reliabilities. Step S71 then compares Y1+αN1 with Y2+αN2 and acts accordingly; α is a weighting coefficient here, which may equal or differ from the earlier weights, and the comparison may be performed in the comparison unit 15 or elsewhere. If Y1+αN1 is greater than or equal to Y2+αN2, the information changing unit 21 keeps the cell's identification result unchanged; if Y1+αN1 is less than Y2+αN2, the information changing unit 21 changes it. Step S72 then performs this process on all cells in the image, each cell participating in the iterative loop until no identification result changes any more.
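A loose sketch of this reliability-weighted correction under stated assumptions: equal neighbor weights, an assumed α, a capped iteration count as a termination guard, and raw neighbor reliabilities entering Y2 and N2 directly (the patent does not spell out how neighbors with differing labels contribute):

```python
ALPHA = 0.5  # assumed coefficient

def confidence_smooth(labels, conf, rows=20, cols=20, max_iters=100):
    """Steps S66-S72: iterate until no cell's result changes.

    `labels` maps (r, c) -> bool; `conf` maps (r, c) -> reliability in [0, 1]."""
    for _ in range(max_iters):
        changed = False
        for (r, c), label in list(labels.items()):
            y1, n1 = conf[(r, c)], 1.0 - conf[(r, c)]
            nbrs = neighbors(r, c, rows, cols, diagonal=True)
            w = 1.0 / len(nbrs)                     # equal weighting coefficients
            y2 = sum(w * conf[n] for n in nbrs)
            n2 = sum(w * (1.0 - conf[n]) for n in nbrs)
            if y1 + ALPHA * n1 < y2 + ALPHA * n2:
                labels[(r, c)] = not label          # change the result
                changed = True
        if not changed:
            break
    return labels
```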
The method for controlling the automatic walking device further includes step S40, which judges whether the target area corresponding to a sub image block comprising a plurality of cells is a walking area. The automatic walking device 1 includes a judgment unit 22 for performing this step. The judgment unit 22 comprises a sub image block dividing unit 23, which divides the image into a number of sub image blocks. In one embodiment, the division works as follows. Step S40 includes steps S41, S42, and S43. First, in step S41, the sub image block dividing unit 23 divides the image into a number of sub image blocks according to the walking directions of the automatic walking device, each sub image block corresponding to a different walking direction. In one embodiment, the sub image block dividing unit 23 divides the image into three sub image blocks, a middle portion, a left portion, and a right portion, corresponding to sub-regions of the target area. As shown in fig. 3, the middle portion corresponds to a middle area a centered in front of the automatic walking device 1 and as wide as the device itself; the left portion corresponds to a left area b in front of the device, to the left of the middle area a; and the right portion corresponds to a right area c in front of the device, to the right of the middle area a. Each of the three sub image blocks includes a plurality of cells. In another embodiment, the sub image block dividing unit 23 may divide the image into 5 different sub image blocks, for example front, left front, right front, left, and right. Since each sub image block includes a plurality of cells, the judgment unit 22 judges whether the target area corresponding to a sub image block is a walking area or a non-walking area according to the identification results of all the cells in it. Specifically, assume that the three rows of cells at the front end of the image, 60 cells in total, form the middle sub image block. In this embodiment, the information extraction unit 20 extracts the identification results of all cells in the middle sub image block, and the calculation unit 14 counts the number of cells identified as the working area, recorded as A. (In other embodiments, the number of cells identified as the non-working area may be counted instead.) The comparison unit 15 compares this count with a third preset value. When the number A, or the proportion of A to all the cells in the sub image block, is greater than or equal to the third preset value, the judgment unit 22 judges the sub image block to be a walking area. It may instead be arranged that the sub image block is judged to be a walking area when the number of cells identified as the non-walking area is less than the third preset value. The third preset value in this embodiment is pre-stored in the storage unit 16 and may be 30, 40, 50, and so on.
In other embodiments, the automatic walking device 1 may instead compare the proportion of cells identified as the walking area (or the non-walking area) to all cells of the sub image block against a third preset value expressed as a ratio; in that case the third preset value is greater than or equal to 50%, for example 50%, 60%, or 90%.
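A sketch of the per-block judgment, covering both the count form and the ratio form of the third preset value (all thresholds assumed):

```python
THIRD_PRESET_COUNT = 40   # e.g. 30, 40, 50 cells ...
THIRD_PRESET_RATIO = 0.6  # ... or 50%, 60%, 90% as a ratio

def block_is_walkable(block_cells, labels, use_ratio=False):
    """Step S40: enough working-area cells make the block walkable.

    `block_cells` lists the (r, c) coordinates of one sub image block."""
    a = sum(1 for cell in block_cells if labels[cell])
    if use_ratio:
        return a / len(block_cells) >= THIRD_PRESET_RATIO
    return a >= THIRD_PRESET_COUNT
```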
When the target area corresponding to a sub image block has been judged to be a walking area or a non-walking area, the automatic walking device 1 controls itself, through step S50, to move forward, move backward, turn left, or turn right according to the judgment result. Based on the judgment of the judgment unit 22, the automatic walking device 1 performs a specific response. The responses of the walking module 4 include: forward (F), backward (B), left turn (L), right turn (R), and unchanged (N). In the embodiment dividing the image into left, middle, and right sub image blocks, each sub image block is judged either a walking area or a non-walking area, giving eight cases in total (a code mapping of all eight cases to actions is sketched after this list): 1. left, middle, and right are all walking areas; 2. left and middle are walking areas, right is a non-walking area; 3. left and right are walking areas, middle is a non-walking area; 4. left is a walking area, middle and right are non-walking areas; 5. left is a non-walking area, middle and right are walking areas; 6. left and right are non-walking areas, middle is a walking area; 7. left and middle are non-walking areas, right is a walking area; 8. left, middle, and right are all non-walking areas.
In case 1, the main control module 3 makes the walking module 4 execute the action of no change (N);
in case 2, the main control module 3 makes the walking module 4 execute the action of left-turning and advancing (LF);
in case 3, the main control module 3 makes the walking module 4 perform the actions of backward left turn and forward (BLF);
in case 4, the main control module 3 makes the walking module 4 perform the actions of backward left turn and forward (BLF);
in case 5, the main control module 3 makes the walking module 4 perform the action of turning right and advancing (RF);
in case 6, the main control module 3 makes the walking module 4 execute the actions of backward turning right and forward (BRF);
in case 7, the main control module 3 makes the walking module 4 execute the actions of backward turning right and forward (BRF);
in case 8, the main control module 3 causes the walk module 4 to perform either reverse right turn and forward (BRF) or reverse left turn and forward (BLF).
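The eight cases above collapse naturally into a lookup table; the following sketch (action codes as in the text, tuple encoding assumed) makes that explicit:

```python
# (left, middle, right) judged walkable? -> action of the walking module
ACTIONS = {
    (True,  True,  True ): "N",    # case 1: no change
    (True,  True,  False): "LF",   # case 2: left turn, forward
    (True,  False, True ): "BLF",  # case 3: backward, left turn, forward
    (True,  False, False): "BLF",  # case 4
    (False, True,  True ): "RF",   # case 5: right turn, forward
    (False, True,  False): "BRF",  # case 6: backward, right turn, forward
    (False, False, True ): "BRF",  # case 7
    (False, False, False): "BRF",  # case 8 (the text also allows "BLF")
}

def choose_action(left: bool, middle: bool, right: bool) -> str:
    return ACTIONS[(left, middle, right)]
```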
To explain further, when the current target area is judged to be a walking area, the automatic walking device 1 may continue its original walking strategy, for example keeping its current walking state; when the current target area is judged to be a non-walking area, the automatic walking device 1 changes its walking direction, preferably moving away from the corresponding sub image block. Since the image contains several sub image blocks, the automatic walking device 1 must judge each of them as a walking or non-walking area and then adopt the corresponding strategy. In a preferred embodiment, the automatic walking device may judge the sub image blocks simultaneously. For example, with middle, left, and right sub image blocks: if all three are detected as walking areas, the device keeps moving forward; if all three are non-walking areas, the device turns 180 degrees and moves back; if the middle and left sub image blocks are non-walking areas and the right is a walking area, the device moves away from the middle and left, i.e., toward the right rear; it may back up first and then turn right, or turn right first and then back up, among other specific maneuvers.
Of course, in a preferred embodiment, the sub image block dividing unit 23 of the automatic walking device 1 may divide the sub image blocks several times and then make a comprehensive judgment, the areas corresponding to the sub image blocks differing between divisions. Considering the judgment results of different areas together avoids errors in strategy caused by an inaccurate judgment of a single area, improving the walking accuracy of the automatic walking device 1. Specifically, taking the middle sub image block as an example: in one identification pass the 60 cells of the three front rows of the image serve as the middle sub image block, while in another pass the 80 cells of the four front rows serve as the middle sub image block. The third preset values used in the two judgments also differ; for the second, the value may be, but is not limited to, 60. The two identifications are combined into a new judgment criterion: for example, the middle portion is identified as a walking area only when at least 40 of the 60 three-row cells and at least 60 of the 80 four-row cells are identified as the walking area. If the two conditions cannot be met simultaneously, the middle portion is determined to be a non-walking area. The left and right portions may of course be judged in the same way.
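A small sketch of this double-division check for the middle block, with the row counts and thresholds taken from the example above (grid layout assumed, front rows taken as rows 0-3):

```python
def middle_is_walkable(labels, cols=20):
    """Combine two divisions of the middle sub image block (illustrative)."""
    three_rows = [(r, c) for r in range(3) for c in range(cols)]  # 60 cells
    four_rows = [(r, c) for r in range(4) for c in range(cols)]   # 80 cells
    a3 = sum(1 for cell in three_rows if labels[cell])
    a4 = sum(1 for cell in four_rows if labels[cell])
    return a3 >= 40 and a4 >= 60
```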
In another preferred embodiment, a single image may still misrepresent the target area; for example, an object sweeping quickly past at a given moment may cast a shadow over the target area and distort the judgment of the automatic walking device. Therefore, in this embodiment, the method further includes a step S80 between the above step S40 and step S50. In step S80, comprehensive filtering is performed over the sub image blocks of multiple images to obtain a final determination of whether a sub image block is a walking area. The target area can be photographed several times within a certain period to form multiple frames of images, and the judgment information contained in each frame is then filtered comprehensively to obtain the final judgment result.
Step S80 includes at least steps S81, S82 and S84. In a preferred embodiment, step S80 further includes a step S83 located between S82 and S84. Specifically, in step S81, the image acquisition device 2 shoots the same target area multiple times to form multiple frames of images, respectively called the first frame image, the second frame image, …, and the Nth frame image. The automatic walking device 1 further includes a recording unit 33, and in step S82 the recording unit 33 updates a weight value according to the judgment results on the sub image blocks. Specifically, when the judging unit 22 determines that the first frame image is a walking area, the recording unit 33 adds a fifth preset value to an initial weight value. For convenience of description the initial weight value is labeled 0, although it may be set to other values; the fifth preset value may be a preset fixed constant or a varying function. In the present embodiment, the fifth preset value may be, but is not limited to, 3. Thus, when the recognition result of the first frame image is a walking area, the recording unit sets the corresponding weight value to 3. The recognition result of the second frame image is then processed: if it is also a walking area, the recording unit 33 adds the fifth preset value to the current weight value, which then becomes 6; if it is not a walking area, the recording unit 33 leaves the current weight value unchanged. The recognition result of the third frame image is processed in the same way: if it is also a walking area, the current weight value becomes 9. This continues until the Nth frame image. In step S84, the comparing unit 15 compares the current weight value with a seventh preset value; when the current weight value is greater than or equal to the seventh preset value, the judgment result is confirmed, i.e. the current target area is a walking area. The seventh preset value may be set to 8, for example. By comprehensively considering the recognition results of multiple frames in this way, the adverse effect of a possibly erroneous result in a single frame is avoided. Further, the identification of each frame may be decomposed into the identification of each sub image block of that frame. For example, each frame may be decomposed into a left, a middle and a right sub image block, and the recording unit records the three sub image blocks separately, each corresponding to its own weight value.
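The weight-value filtering of steps S81 to S84 admits the following minimal Python sketch, assuming boolean per-frame recognition results; it is an illustration, not the claimed implementation:

    def filter_frames(frame_results, fifth_preset=3, seventh_preset=8, sixth_preset=0):
        # Steps S81-S84 (with optional S83): accumulate a weight value over
        # per-frame results (True = walking area) and confirm the target
        # area as a walking area once the weight value reaches the seventh
        # preset value. sixth_preset=0 omits the decay of step S83.
        weight = 0
        for i, is_walking in enumerate(frame_results):
            if i > 0:
                # Step S83: subtract the sixth preset value at each frame
                # switch, never dropping below the minimum value 0.
                weight = max(0, weight - sixth_preset)
            if is_walking:
                weight += fifth_preset  # step S82: reward a walking-area result
            if weight >= seventh_preset:
                return True             # step S84: judgment confirmed
        return False

With the defaults above, three consecutive walking-area frames yield weights 3, 6 and 9, so the area is confirmed on the third frame, consistent with the example.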
In addition, in a preferred embodiment, step S83 provides that, at each switch between frames, the recording unit 33 subtracts a sixth preset value from the current weight value. This prolongs the time needed for the current weight value to reach or exceed the seventh preset value, so that more frames are comprehensively considered and the accuracy is further improved. In this embodiment, the sixth preset value may be, but is not limited to, 1. For example, if the recognition result of the first frame image is a walking area, the current weight value becomes 3; when the recognition result of the second frame image is a non-walking area, the current weight value becomes 2; when the recognition result of the third frame image is a walking area, the current weight value becomes 4; and when the recognition result of the fourth frame image is a walking area, the current weight value becomes 6. The current target area is determined to be a walking area once the weight value reaches or exceeds the seventh preset value; if the weight value never reaches the seventh preset value, the current target area is determined to be a non-walking area. Of course, a person skilled in the art can conceive of variations in which the conditions are interchanged: the current target area is determined to be a non-walking area when the weight value reaches or exceeds the seventh preset value, and a walking area when it never does. Furthermore, the calculation rule of the weight value may be refined; for example, a minimum value may be set below which the weight value is never reduced, e.g. once the weight value drops to 0 it is not decreased further.
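Assuming the filter_frames sketch above and an additional fifth walking-area frame (not stated in the example), the decayed trace can be reproduced as follows:

    # Fifth preset 3, sixth preset 1, seventh preset 8.
    trace = [True, False, True, True, True]
    print(filter_frames(trace, fifth_preset=3, seventh_preset=8, sixth_preset=1))
    # The weight evolves 3 -> 2 -> 4 -> 6 -> 8, so True is printed:
    # the target area is confirmed as a walking area on the fifth frame.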
The invention is not limited to the specific embodiment structures illustrated; structures based on the inventive concept all fall within the scope of the invention.

Claims (15)

1. A method for controlling the walking of automatic walking equipment, characterized in that the method comprises the following steps:
s10, acquiring an image of the walking target area of the automatic walking equipment;
s40, dividing the image into a plurality of sub image blocks, wherein each sub image block comprises a plurality of cells, judging whether a target area corresponding to the sub image block is a walkable area according to the identification result of the cells in the sub image block, and obtaining a judgment result;
and S50, controlling the walking direction of the automatic walking equipment according to the judgment result.
2. The method of claim 1, wherein: after the identification result of each cell is obtained, for each cell, the identification result of the cell is adjusted according to the identification results of the cell and its adjacent cells.
3. The method of claim 2, wherein: the method further comprises the steps of:
S61, selecting a cell and acquiring its identification result;
S62, counting the number of adjacent cells having the same identification result as the cell selected in step S61;
S63, calculating the proportion of the number obtained in step S62 to the total number of adjacent cells;
S64, if the proportion reaches or exceeds a fourth preset value, keeping the identification result of the cell selected in step S61 unchanged; if the proportion is smaller than the fourth preset value, changing the identification result of the cell selected in step S61, wherein the fourth preset value is greater than or equal to 50%.
4. The method of claim 2, wherein: the method further comprises the steps of:
S66, selecting a cell and acquiring the reliability Y1 of its identification result, wherein the reliability Y1 is a value between 0% and 100%;
S67, calculating 1 - Y1 and marking the result as N1;
S68, acquiring the reliabilities Ya, Yb, … of the identification results of all adjacent cells of the selected cell, each being a value between 0% and 100%;
S69, calculating 1 - Ya, 1 - Yb, …, and marking the results as Na, Nb, …;
S70, obtaining a weighted sum Y2 by weighted summation of Ya, Yb, …, and obtaining a weighted sum N2 by weighted summation of Na, Nb, …, wherein the weighting coefficients are the same;
S71, respectively calculating the results of Y1 + α·N1 and Y2 + α·N2 and comparing them, wherein α is a coefficient;
and S72, if the result of Y1 + α·N1 is greater than or equal to the result of Y2 + α·N2, keeping the identification result of the selected cell unchanged; if the result of Y1 + α·N1 is smaller than the result of Y2 + α·N2, changing the identification result of the selected cell.
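Again for illustration only, steps S66 to S72 can be sketched as follows, assuming uniform weighting coefficients, reliabilities expressed as fractions in [0, 1], and an arbitrary illustrative value of α:

    def keep_cell_result(y1, neighbor_ys, alpha=0.5):
        # Steps S66-S72: compare Y1 + α·N1 against Y2 + α·N2, where
        # N = 1 - Y and Y2, N2 are weighted sums over the neighbors.
        # Returns True when the selected cell's result should be kept.
        n1 = 1.0 - y1
        w = 1.0 / len(neighbor_ys)                    # identical coefficients
        y2 = sum(w * y for y in neighbor_ys)          # weighted sum of Ya, Yb, ...
        n2 = sum(w * (1.0 - y) for y in neighbor_ys)  # weighted sum of Na, Nb, ...
        return y1 + alpha * n1 >= y2 + alpha * n2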
5. The method of claim 1, wherein: step S40 further includes the following steps:
S41, dividing the image into a plurality of sub image blocks and acquiring the number of cells contained in each sub image block, marked as B, wherein each sub image block corresponds to a different walking direction;
S42, collecting the identification results of the cells in the sub image block, counting the number of cells whose identification result is a non-working area, and marking the number as A;
S43, if the ratio A:B is smaller than a third preset value, judging that the target area corresponding to the sub image block is a walkable area; otherwise, judging that the target area corresponding to the sub image block is not a walkable area.
6. The method of claim 1, wherein: step S50 further includes the following steps:
when the current target area is judged to be a walking area, controlling the automatic walking equipment to continue executing the original walking strategy;
and when the current target area is judged to be a non-walking area, controlling the automatic walking equipment to walk in a direction away from the sub image block corresponding to the current target area.
7. The method of claim 1, wherein: the method further comprises continuously shooting the same target area to form multiple frames of images, and judging whether the target area corresponding to a sub image block is a walkable area according to the judgment results for the same sub image block in each frame of image, so as to obtain a final judgment result.
8. The method of claim 1, wherein: the method further comprises: dividing the image into sub image blocks a plurality of times and performing a comprehensive judgment, wherein the areas corresponding to the sub image blocks divided each time are different.
9. The method of claim 8, wherein: the image is divided each time into three sub image blocks corresponding to a left area, a middle area and a right area, and the sub image blocks obtained in different divisions that correspond to the same azimuth area contain different numbers of cells, so that the areas corresponding to the sub image blocks divided each time are different;
the method further comprises: judging, for each division of the image, whether the area corresponding to the sub image block is a walkable area, obtaining a plurality of judgment results, and then performing a comprehensive judgment according to the plurality of judgment results.
10. The method of claim 1, wherein: the sub image blocks comprise a middle part, a left part and a right part, respectively corresponding to a middle area, a left area and a right area of the target area; the automatic walking equipment judges the middle, left and right sub image blocks simultaneously, and if all three sub image blocks are detected to be walking areas, the automatic walking equipment is controlled to keep moving forward;
if all three sub image blocks are detected to be non-walking areas, the automatic walking equipment is controlled to turn 180 degrees and move back;
and if the middle and left sub image blocks are detected to be non-walking areas while the right part is a walking area, the automatic walking equipment is controlled to move in a direction away from the middle and left parts, for example by moving back first and then turning right, or by turning right first and then moving back.
11. An automatic walking device, characterized in that: the automatic walking device comprises a housing, an image acquisition device located on the housing and used for shooting a target area and generating an image, a walking module used for driving the automatic walking device to walk, and a main control module connected with the image acquisition device and the walking module to control the operation of the automatic walking device; the main control module comprises a judging unit, used for judging whether the target area corresponding to a sub image block is a walkable area according to the identification results of the cells contained in the sub image block and obtaining a judgment result, and a control unit, used for controlling the walking direction of the walking module according to the judgment result.
12. The automated walking device of claim 11, wherein: each cell has at least one adjacent cell; the main control module further comprises a correction unit, and the correction unit adjusts the identification result of each cell according to the identification results of the cell and its adjacent cells.
13. The automated walking device of claim 11, wherein: the main control module further comprises a dividing unit and an identification unit, wherein the dividing unit is used for dividing the image into a plurality of cells, and each cell is provided with at least one adjacent cell; the identification unit is used for identifying whether the target area corresponding to the cell is a working area or not and obtaining an identification result.
14. The automated walking device of claim 13, wherein: the adjacent cells include cells laterally and longitudinally adjacent to the selected cell.
15. The automated walking device of claim 11, wherein: the sub image block dividing unit is further used for dividing the image into sub image blocks a plurality of times and performing a comprehensive judgment, wherein the areas corresponding to the sub image blocks divided each time are different.
CN202110518427.0A 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method Pending CN113311830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110518427.0A CN113311830A (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610389564.8A CN107463167B (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method
CN202110518427.0A CN113311830A (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610389564.8A Division CN107463167B (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method

Publications (1)

Publication Number Publication Date
CN113311830A true CN113311830A (en) 2021-08-27

Family

ID=60545520

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610389564.8A Active CN107463167B (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method
CN202110518427.0A Pending CN113311830A (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201610389564.8A Active CN107463167B (en) 2016-06-03 2016-06-03 Automatic walking equipment and target area identification method

Country Status (1)

Country Link
CN (2) CN107463167B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051785A1 (en) * 2022-09-07 2024-03-14 苏州宝时得电动工具有限公司 Self-moving device, method for controlling self-moving device and mowing control apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852662B (en) * 2018-08-21 2024-05-24 北京京东尚科信息技术有限公司 Flow control method and device
CN109947111B (en) * 2019-04-04 2022-09-23 肖卫国 Automatic carrying trolley movement control method and device and automatic carrying trolley
CN113495553A (en) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
CN112163631A (en) * 2020-10-14 2021-01-01 山东黄金矿业(莱州)有限公司三山岛金矿 Gold ore mineral analysis method based on video analysis for orepass

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101543054A (en) * 2007-06-28 2009-09-23 松下电器产业株式会社 Image processing device, image processing method, and program
CN101572804A (en) * 2009-03-30 2009-11-04 浙江大学 Multi-camera intelligent control method and device
CN102713779A (en) * 2009-11-06 2012-10-03 进展机器人有限公司 Methods and systems for complete coverage of a surface by an autonomous robot
CN103679740A (en) * 2013-12-30 2014-03-26 中国科学院自动化研究所 ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN104111653A (en) * 2013-04-22 2014-10-22 苏州宝时得电动工具有限公司 Automatic walking equipment and working region judgment method thereof
CN104111651A (en) * 2013-04-22 2014-10-22 苏州宝时得电动工具有限公司 Automatic walking equipment and method for automatic walking equipment to return to stop station
CN104345734A (en) * 2013-08-07 2015-02-11 苏州宝时得电动工具有限公司 Automatic working system, automatic walking equipment and control method thereof
CN104778432A (en) * 2014-01-10 2015-07-15 携程计算机技术(上海)有限公司 Image recognition method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011259332A (en) * 2010-06-11 2011-12-22 Sony Corp Image processing device and method
CN103034866B (en) * 2011-09-29 2016-02-10 无锡物联网产业研究院 A kind of target identification method, Apparatus and system
CN105512689A (en) * 2014-09-23 2016-04-20 苏州宝时得电动工具有限公司 Lawn identification method based on images, and lawn maintenance robot

Also Published As

Publication number Publication date
CN107463167A (en) 2017-12-12
CN107463167B (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN107463167B (en) Automatic walking equipment and target area identification method
CN109063575B (en) Intelligent mower autonomous and orderly mowing method based on monocular vision
CN110243372B (en) Intelligent agricultural machinery navigation system and method based on machine vision
Åstrand et al. An agricultural mobile robot with vision-based perception for mechanical weed control
EP3199009B1 (en) Self-moving robot
WO2021169193A1 (en) Automatic working system, automatic locomotion device and control method therefor, and computer readable storage medium
US8185275B2 (en) System for vehicular guidance with respect to harvested crop
CN107463166A (en) Automatic running device and its control traveling method
CN111324122B (en) Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
CN105785986A (en) Automatic working equipment
CN104111653A (en) Automatic walking equipment and working region judgment method thereof
WO2021169192A1 (en) Automatic working system, automatic walking device and control method therefor, and computer-readable storage medium
WO2022021630A1 (en) Autonomous walking device and control method and system therefor, and readable storage medium
WO2021139397A1 (en) Method for controlling self-moving device
WO2017206950A1 (en) Automatic walking device and method for controlling the walking thereof
WO2021042487A1 (en) Automatic working system, automatic travelling device and control method therefor, and computer readable storage medium
EP4123406A1 (en) Automatic working system, automatic walking device and method for controlling same, and computer-readable storage medium
US20220151144A1 (en) Autonomous machine navigation in lowlight conditions
CN113848872B (en) Automatic walking device, control method thereof and readable storage medium
WO2021042486A1 (en) Automatic working system, automatic walking device and control method therefor, and computer readable storage medium
WO2021184663A1 (en) Automatic working system, automatic walking device and control method therefor, and computer-readable storage medium
WO2021184664A1 (en) Automatic working system, automatic walking device and control method therefor, and computer-readable storage medium
CN207992810U (en) A kind of field automatic travelling device for detecting plant growth condition
CN116806526A (en) Mowing robot based on multi-view vision and service execution method and system thereof
CN117274949A (en) Obstacle based on camera, boundary detection method and automatic walking equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210827