WO2017206950A1 - Automatic walking device and method of controlling its walking - Google Patents

Automatic walking device and method of controlling its walking

Info

Publication number
WO2017206950A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cell
sub
walking
area
Prior art date
Application number
PCT/CN2017/087021
Other languages
English (en)
French (fr)
Inventor
邵勇
傅睿卿
郭会文
吴新宇
Original Assignee
苏州宝时得电动工具有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610389387.3A external-priority patent/CN107463166A/zh
Priority claimed from CN201610389564.8A external-priority patent/CN107463167B/zh
Application filed by 苏州宝时得电动工具有限公司
Publication of WO2017206950A1 publication Critical patent/WO2017206950A1/zh

Links

Images

Classifications

    • A — HUMAN NECESSITIES
    • A01 — AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D — HARVESTING; MOWING
    • A01D 34/00 — Mowers; Mowing apparatus of harvesters

Definitions

  • the invention relates to an automatic walking device and a method of controlling the walking of the automatic walking device.
  • the walking area of the existing automatic lawn mower is generally set by physical boundary lines, such as wires or fences, and the automatic lawn mower detects the physical boundary line to determine the walking area.
  • the process of boundary wiring is cumbersome, time-consuming, and laborious; moreover, non-grass areas may still exist inside the boundary line, and areas that need to be cut may exist outside it.
  • the method of using physical boundary lines is inflexible and inconvenient.
  • one of the objects of the present invention is to provide a method for accurately identifying a target area and controlling the automatic walking device to walk according to the recognition result, and an automatic walking device applying the same.
  • the technical solution adopted by the present invention is: a method for controlling the walking of an automatic walking device, comprising the steps of: S10, acquiring an image of the walking target area of the automatic walking device; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying, according to the color information of specified pixels in the cell and the texture feature value of the cell, whether the target area corresponding to the cell is a work area, and obtaining a recognition result; S40, dividing the image into a plurality of sub-image blocks, each sub-image block comprising a plurality of adjacent cells, determining, according to the recognition results of the cells within the sub-image block, whether the target area corresponding to the sub-image block is a walkable area, and obtaining a determination result; S50, controlling the walking direction of the automatic walking device according to the determination result.
  • the method further comprises, after the recognition result is obtained, adjusting, for each cell, the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
  • the method further comprises the steps of: S61, selecting a cell and obtaining its recognition result; S62, counting the number of adjacent cells having the same recognition result as that in step S61; S63, calculating the ratio of the number obtained in step S62 to the total number of adjacent cells; S64, if the ratio exceeds or reaches a preset value, keeping the recognition result of the cell selected in step S61 unchanged; if the ratio is less than the preset value, changing the recognition result of the cell selected in step S61, wherein the preset value, referred to below as the fourth preset value, is greater than or equal to 50%.
  • the adjacent cells comprise cells adjacent to the selected cell in the lateral and longitudinal directions.
  • the adjacent cells further comprise cells adjacent to the selected cell in directions at a 45-degree angle to the lateral and longitudinal directions.
  • the method further comprises the steps of: S66, selecting a cell and obtaining the reliability Y1 of its recognition result, the reliability Y1 being a value between 0 and 100%; S67, calculating 1-Y1 and marking the result as N1; S68, obtaining the reliabilities Ya, Yb, ... of the recognition results of all adjacent cells of the selected cell, the reliabilities Ya, Yb, ... being values between 0 and 100%; S69, calculating 1-Ya, 1-Yb, ... and marking the results as Na, Nb, ...; S70, weighting and summing Ya, Yb, ... to obtain the weighted sum Y2, and weighting and summing Na, Nb, ... to obtain the weighted sum N2, wherein the weighting coefficients are all the same; S71, respectively calculating Y1+αN1 and Y2+αN2 and comparing their sizes, wherein α is a coefficient; S72, if Y1+αN1 is greater than or equal to Y2+αN2, keeping the recognition result of the selected cell unchanged; if Y1+αN1 is less than Y2+αN2, changing the recognition result of the selected cell.
  • the step S40 further includes the following steps: S41, dividing the image into a plurality of sub-image blocks and obtaining the number of cells contained in each sub-image block, marked as B; S42, collecting the recognition results of the cells in the sub-image block and counting the number of cells whose recognition result is the work area, marked as A; S43, if A:B is less than a third preset value, determining that the target area corresponding to the sub-image block is not a walkable area; otherwise, determining that the target area corresponding to the sub-image block is a walkable area.
  • the method further comprises continuously photographing the same target area to form multiple frames of images, and determining, according to the determination results of the same sub-image block in each frame of image, whether the target area corresponding to the sub-image block is a walkable area, and obtaining the determination result.
  • the method further comprises the steps of: S81, continuously photographing the same target area to form multiple frames of images; S82, selecting a sub-image block in one of the frames and obtaining its determination result through step S40; S83, setting an initial parameter value and operating on it according to the determination result obtained in step S82: if the determination result is a walkable area, adding a first parameter associated with the determination result to the initial parameter value to form the current parameter value; if the determination result is not a walkable area, keeping the parameter value unchanged; S84, selecting the next frame image and operating on the current parameter value according to the determination result obtained in step S82: if the determination result is a walkable area, adding the first parameter associated with the determination result to the current parameter value to form a new current parameter value; if the determination result is not a walkable area, keeping the current parameter value unchanged; S85, comparing the current parameter value with a threshold; if the current parameter value is greater than or equal to the threshold, determining that the target area corresponding to the sub-image block is a walkable area.
  • the step S84 further includes: after the next frame image is selected and before the current parameter value is operated on, subtracting a preset second parameter from the current parameter value, the second parameter being smaller than the first parameter.
  • the sub-image blocks include three sub-image blocks of a middle portion, a left portion, and a right portion, corresponding respectively to the middle area, the left area, and the right area of the target area.
  • an automatic walking device comprising: a housing; an image acquisition device on the housing for photographing a target area and generating an image; a walking module driving the automatic walking device; and a main control module connecting the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module comprises a dividing unit, an identification unit, a judgment unit, and a control unit; the dividing unit divides the image into a plurality of cells and transmits the division result to the identification unit; the identification unit identifies whether the target area corresponding to each cell is a work area and transmits the recognition result to the judgment unit; the judgment unit judges whether the area corresponding to a sub-image block containing a plurality of cells is a walkable area and transmits the judgment result to the control unit; and the control unit controls the walking direction of the walking module according to the judgment result.
  • the main control module further includes a correction unit which, for each cell, adjusts the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
  • the adjacent cells comprise cells adjacent to the selected cell in the lateral and longitudinal directions.
  • the adjacent cells further comprise cells adjacent to the selected cell in directions at a 45-degree angle to the lateral and longitudinal directions.
  • the determining unit further includes a sub-image block dividing unit that divides the image into a plurality of sub-image blocks, and the determining unit judges, according to the recognition results of the cells contained in a sub-image block, whether the target area corresponding to that sub-image block is a walkable area.
  • the sub-image blocks include three sub-image blocks of a middle portion, a left portion, and a right portion.
  • the main control module further includes a recording unit recorded with an initial parameter value; the image capturing device continuously photographs the same target area to form multiple frames of images, the determining unit judges the same sub-image block in each frame image and obtains a determination result, the recording unit operates on the parameter value according to the determination result, and when the parameter value is greater than or equal to a threshold, the target area corresponding to the sub-image block is determined to be the walkable area.
  • the invention has the beneficial effects that the image of the target area is divided into cells, each cell is recognized microscopically, and the recognition results of multiple cells are combined macroscopically for comprehensive discrimination, thereby improving the accuracy of identifying the target area and helping the automatic walking device walk more accurately in the target area.
  • Another object of the present invention is to provide a method for accurately identifying a target area and an automatic walking apparatus to which the method is applied.
  • a technical solution adopted by the present invention is: a method for identifying a target area in which an automatic walking device walks, characterized in that the identification method comprises the following steps: S10, acquiring an image of the walking target area of the automatic walking device; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying, according to the color information of specified pixels in the cell and the texture feature value of the cell, whether the target area corresponding to the cell is a work area, and obtaining a recognition result; S60, for each cell, changing or maintaining the recognition result obtained in step S30 according to the recognition results of its adjacent cells.
  • the step S60 further comprises the steps of: S61, designating a cell and obtaining its recognition result; S62, counting the number of adjacent cells having the same recognition result as that in step S61; S63, calculating the ratio of the number obtained in step S62 to the total number of adjacent cells; S64, if the ratio exceeds or reaches the fourth preset value, keeping the recognition result of the cell designated in step S61 unchanged; if the ratio is less than the fourth preset value, changing the recognition result of the cell designated in step S61, wherein the fourth preset value is greater than or equal to 50%; S65, performing the above steps S61 to S64 on all the cells.
  • the adjacent cells comprise cells adjacent to the cell in the lateral and longitudinal directions.
  • the adjacent cells further comprise cells adjacent to the cell in directions at a 45-degree angle to the lateral and longitudinal directions.
  • the step S60 further comprises the steps of: S66, designating a cell and obtaining the reliability Y1 of its recognition result, the reliability Y1 being a value between 0 and 100%; S67, calculating 1-Y1 and marking the result as N1; S68, obtaining the reliabilities Ya, Yb, ... of the recognition results of all adjacent cells of the designated cell, the reliabilities Ya, Yb, ... being values between 0 and 100%; S69, calculating 1-Ya, 1-Yb, ... and marking the results as Na, Nb, ...; S70, weighting and summing Ya, Yb, ... to obtain the weighted sum Y2, and weighting and summing Na, Nb, ... to obtain the weighted sum N2, wherein the weighting coefficients are all the same; S71, respectively calculating Y1+αN1 and Y2+αN2 and comparing their sizes, wherein α is a coefficient; if Y1+αN1 is greater than or equal to Y2+αN2, the recognition result of the designated cell is maintained unchanged, and if Y1+αN1 is less than Y2+αN2, the recognition result of the designated cell is changed; S72, performing the above steps S66 to S71 on all the cells until the recognition results of all cells no longer change.
  • an automatic walking device comprising: a housing; an image acquisition device on the housing for photographing a target area and generating an image; a walking module driving the automatic walking device; and a main control module connecting the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module includes a dividing unit, an identification unit, and a correction unit; the dividing unit divides the image into a plurality of cells, the identification unit identifies whether the target area corresponding to each cell is a work area and transmits the recognition result to the correction unit, and the correction unit, for each cell, changes or maintains the recognition result obtained by the identification unit according to the recognition results of the adjacent cells.
  • the cells adjacent to the cell include cells adjacent to it in the lateral and longitudinal directions.
  • the cells adjacent to the cell further comprise cells adjacent to it in directions at a 45-degree angle to the lateral and longitudinal directions.
  • the invention has the beneficial effects that the image of the target area is divided into cells, each cell is recognized microscopically, and the recognition results of multiple cells are combined macroscopically for correction, thereby improving the accuracy of identifying the target area.
  • FIG. 1 is a schematic view of an automatic walking apparatus walking in a target area according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of the automatic walking device of FIG. 1 photographing the target area.
  • FIG. 3 is a schematic diagram of the automatic walking device of FIG. 1 dividing the target area.
  • FIG. 4 is a schematic diagram of various parts of the automatic walking apparatus of FIG. 1.
  • FIG. 5 is a schematic flow chart of a method for controlling the walking of an automatic walking device according to an embodiment of the present invention.
  • FIG. 6 is a detailed flow chart of step S60 between step S30 and step S40 in an embodiment of the present invention.
  • FIG. 7 is a detailed flow chart of step S60 between step S30 and step S40 in another embodiment of the present invention.
  • FIG. 8 is a detailed flow chart of step S40 of FIG. 5 in one embodiment.
  • FIG. 9 is a detailed flow chart of step S80 between step S40 and step S50 in an embodiment of the present invention.
  • FIG. 1 is a schematic diagram showing the walking of an automatic walking device in a target area according to an embodiment of the present invention.
  • the automatic walking device 1 can automatically walk on the ground or other work surface, and can also work while walking.
  • the automatic walking device 1 may be an automatic vacuum cleaner, an automatic lawn mower, an automatic trimmer, or the like.
  • the automatic walking device is an automatic lawn mower.
  • the ground can be divided into a work area 50 and a non-work area 51 depending on the object of the work.
  • the work area 50 refers to an area where the user wants the automatic walking equipment to walk and work
  • the non-work area 51 refers to an area where the user does not want the automatic walking equipment to pass.
  • since the automatic traveling device is an automatic lawn mower, its operation is to perform mowing.
  • the walking area 50 can be, but is not limited to, a grassland
  • the non-working area 51 can be, but is not limited to, a cement road, a large tree, a pond, a fence, a stake, a corner, and the like.
  • generally, grass grows in contiguous patches, and a non-walking area can be located around the grass or be surrounded by grass to form an island 52, so the island 52 is also a form of non-walking area.
  • the boundary between the non-working area 51 and the working area 50 may not be provided with a boundary line, and the autonomous traveling apparatus 1 recognizes the visual difference between the working area 50 and the non-working area 51.
  • the automatic walking device 1 has a housing 10 and an image capture device 2 mounted on the housing 10.
  • the image pickup device 2 captures an image of the area in front of the automatic traveling device 1.
  • the ground area located in front of the automatic traveling equipment 1 is the target area 28 in which the automatic traveling equipment travels.
  • the target area 28 may be a work area, a non-work area, or a collection of a walking area and a non-walking area.
  • the automatic walking apparatus 1 must recognize the current target area 28 in order to be able to perform the normal walking in the walking area. Therefore, the autonomous walking apparatus 1 can take an image of the target area 28 and form an image with respect to the target area 28 by the image pickup device 2.
  • the method of controlling the autonomous walking apparatus therefore includes the step S10 of generating an image regarding the walking target area of the autonomous walking apparatus 1.
  • the viewing range of the image capture device 2 is a fixed area, such as a fixed viewing angle range of 90 degrees to 120 degrees.
  • the framing range can also be adjustable, and a certain angular range within the viewing angle can be selected as the actual viewing range; for example, the 90-degree range located in the middle of a 120-degree viewing angle is selected as the actual viewing range.
  • the image contains information of the target area, such as the terrain fluctuation of the target area, the color distribution, the texture, and the like.
  • the automatic walking device 1 further includes a main control module 3, a walking module 4, a working module 5, and an energy module 6.
  • the main control module 3 is electrically connected to the walking module 4, the working module 5, the energy module 6 and the image capturing device 2, respectively, and functions to control the operation of the automatic walking device 1.
  • the walking module 4 includes a wheel set and a travel motor for driving the wheel set.
  • the wheel set generally includes a drive wheel 9 driven by a travel motor and an auxiliary wheel 11 that assists the support housing 10, and the number of drive wheels 9 may be one, two or more.
  • the moving direction of the automatic traveling device 1 is the front side, the side opposite to the front side is the rear side, and the two sides adjacent to the front and rear sides are the left and right sides, respectively.
  • the left wheel 91 and the right wheel 92 are symmetrically arranged with respect to the center axis of the automatic traveling device 1.
  • the left wheel 91 and the right wheel 92 are preferably located at the rear of the housing 10, with the auxiliary wheel 11 at the front, although in other embodiments their positions may be swapped.
  • the left wheel 91 and the right wheel 92 are each coupled with a drive motor to achieve differential output to control steering, thereby achieving the purpose of turning left or right.
  • the left wheel 91 and the right wheel 92 can also be output at a constant speed to achieve the purpose of advancing or retreating.
  • the drive motor can be directly coupled to the drive wheel, but a transmission can also be provided between the drive motor and the drive wheel 9, such as a planetary gear train as is common in the art.
  • alternatively, two drive wheels and one drive motor may be provided.
  • the drive motor drives the left wheel 91 through the first transmission and the right wheel 92 through the second transmission. That is, the same motor drives the left wheel 91 and the right wheel 92 through different transmissions.
  • the work module 5 is used to perform a specific work.
  • the working module 5 is specifically a cutting module, and includes a cutting member (not shown) for cutting grass and a cutting motor (not shown) for driving the cutting member.
  • the energy module 6 is used to energize the operation of the autonomous walking device 1.
  • the energy source of the energy module 6 may be gasoline, a battery pack or the like.
  • the energy module 6 includes a rechargeable battery pack disposed within the housing 10. At work, the battery pack releases electrical energy to keep the automatic walking device 1 in operation.
  • the battery can be connected to an external power source to supplement the power.
  • the automatic walking device 1 will automatically seek a charging station (not shown) to replenish its power.
  • the image acquisition device 2 obtains an image about the target area 28 and transmits it to the main control module 3.
  • the main control module 3 includes a dividing unit 12.
  • the dividing unit 12 is used to divide the image into a plurality of cells. All the cells together compose the image, with each cell occupying a portion of the entire image; each cell therefore contains the identification information of that part of the image.
  • the size of each cell is basically the same.
  • the plurality of cells constitute a matrix array.
  • the matrix array extends in the lateral and longitudinal directions. In the lateral direction, about 20 cells are arranged in a row; in the longitudinal direction, about 20 cells are arranged in a column.
  • the number of cells arranged in the horizontal and vertical directions may be inconsistent.
  • each cell has at least one cell adjacent to it.
  • for a cell located in the middle region of the array, there are four cells adjacent to it, above, below, to the left, and to the right; in other words, those four cells are adjacent to that cell in the lateral or longitudinal direction respectively.
  • the meaning of adjacent is not limited to four directions of up, down, left, and right.
  • in another embodiment, a cell has eight cells adjacent to it in eight directions: up, down, left, right, upper left, upper right, lower left, and lower right; in other words, besides being adjacent in the lateral and longitudinal directions, cells may also be adjacent in directions at a 45-degree angle to the lateral and longitudinal directions.
  • a cell located at the edge of the array may not have four cells adjacent to it, but at least one will be adjacent to it.
  • the method of controlling the autonomous walking apparatus therefore further comprises the step S20 of dividing the image into a plurality of cells, each cell being adjacent to at least one other cell.
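  • as a rough illustration of steps S10 and S20, the Python sketch below (not part of the patent) divides an image into an approximately 20 × 20 grid and computes each cell's 4- or 8-neighborhood; the helper names `grid_cells` and `neighbors` are hypothetical.

```python
import numpy as np

def grid_cells(image, rows=20, cols=20):
    """Split an H x W x 3 image into a rows x cols grid of cells (step S20).

    Returns a dict mapping (r, c) -> pixel block; every cell occupies one
    part of the image and carries the identification information
    (colors, texture) of that part.
    """
    h, w = image.shape[:2]
    cells = {}
    for r in range(rows):
        for c in range(cols):
            cells[(r, c)] = image[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols]
    return cells

def neighbors(r, c, rows=20, cols=20, diagonal=True):
    """4-neighborhood (lateral/longitudinal) or 8-neighborhood (adding the
    45-degree directions); cells at the array edge get fewer neighbors."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonal:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]
```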
  • the main control module 3 first reads the identification information contained in each cell.
  • the identification information included in the cell includes color information and texture information.
  • the information contained in the cell may be color information as well as other types of information. Since the cell is part of the image, and the image includes the information of the target area, the cell necessarily contains the information of the corresponding part of the target area, including its color information. Reading this identification information helps determine whether the target area corresponding to the cell is a work area or a non-work area.
  • since the grass of the work area is green while the roads and soil of the non-work area are not, if the color information of a cell is recognized as green, the cell is considered to correspond to the walking area; if the color information is recognized as not green, the cell is considered to correspond to a non-walking area. To further improve accuracy, note that in some cases a non-walking area is also green, for example some artificially treated objects are painted with green paint; in such cases the colors of the walking area and the non-walking area are both green, and it is not easy to distinguish them from color information alone, so recognition of texture information must also be added.
  • where a non-walking area is also green, it usually has a regular texture, whereas the grass of the walking area, although also green, does not grow so regularly, so its texture is irregular. Accordingly, if the color information of a cell is recognized as green and its texture is irregular, the cell can be determined to correspond to the walking area; if the color is not green or the texture is regular, the cell can be considered to correspond to a non-walking area. In other embodiments, other information may of course be recognized to distinguish the walking area from the non-walking area, which is not detailed here.
  • the main control module 3 further includes a color extraction unit 13, a calculation unit 14, a comparison unit 15, and a storage unit 16.
  • the main control module 3 extracts the color information of a cell, compares the color information with preset information, and identifies whether the cell corresponds to a walking area according to the comparison result.
  • the specific method is as follows: each cell actually contains a large number of pixel units, and the color displayed by each pixel unit is unique. The function of the color extraction unit 13 is therefore to extract the color of each pixel unit in the cell, specifically the three primary color (RGB) components.
  • the preset information refers to the preset information that serves as a reference comparison object.
  • the preset information refers to a numerical range in which the three primary color components of the predetermined color are stored.
  • the predetermined color here means green. The three primary color components of a pixel are compared with those of the predetermined color: if the pixel's components respectively fall within the numerical ranges of the predetermined color's components, the pixel's color is determined to be the predetermined color; if they do not fall within those ranges, the pixel's color is determined to be a non-predetermined color.
  • in another embodiment, the storage unit 16 holds a preset hue (Hue) range of the predetermined color; after the three primary color components of a pixel are extracted, the obtained RGB components are further converted into HSV (hue, saturation, value) values, and the hue value is compared against the preset hue range: if it lies within the range, the pixel's color is determined to be the predetermined color; otherwise, a non-predetermined color.
  • the calculation unit 14 calculates the ratio of the number of pixels having a predetermined color to the total number of pixels in one cell (hereinafter referred to as the ratio).
  • the comparing unit 15 compares the ratio with a first preset value. If the ratio exceeds or reaches the first preset value, it is determined that the color display of the cell is a predetermined color.
  • the first preset value can be 50%, 60% or other values. Further, the first preset value may be stored in the storage unit 16.
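  • a minimal sketch of this per-pixel color test and per-cell ratio check follows. The green hue window is an assumption, and the 50% first preset value is just one of the example values the text allows; the function names are hypothetical.

```python
import colorsys

# Assumed "green" hue window (fractions of the hue circle, ~70-170 degrees);
# the patent only says a predetermined hue range is stored in storage unit 16.
HUE_LO, HUE_HI = 70 / 360, 170 / 360
FIRST_PRESET = 0.5  # ratio threshold; the text suggests 50%, 60%, or other values

def is_predetermined_color(rgb):
    """RGB components (0-255) -> HSV; the pixel counts as the predetermined
    color when its hue falls inside the preset hue range."""
    h, _s, _v = colorsys.rgb_to_hsv(*(x / 255 for x in rgb))
    return HUE_LO <= h <= HUE_HI

def cell_shows_predetermined_color(pixels):
    """Role of the calculation unit 14 and comparison unit 15: the share of
    predetermined-color pixels in the cell must reach the first preset value."""
    hits = sum(is_predetermined_color(p) for p in pixels)
    return hits / len(pixels) >= FIRST_PRESET
```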
  • the main control module 3 further includes a texture extraction unit 17 and a texture comparison unit 18.
  • the texture extracting unit 17 extracts the texture feature value of the cell.
  • the dispersion of at least one parameter over all pixels in a cell reflects the degree of variation among the values of that parameter. If the target area is green paint, the dispersion of a parameter in the image is small, even zero. Because the texture of grass is irregular, the dispersion of the differential values of a parameter over all pixels of a cell will be greater than or equal to a preset dispersion, reflecting the irregularity of the cell's texture. In this embodiment, the texture feature value is therefore a parameter dispersion, such as color dispersion, gray-scale dispersion, or brightness dispersion.
  • the texture comparison unit 18 compares the texture feature value of the cell with a second preset value to determine whether the texture feature value reaches the second preset value.
  • the second preset value is a preset dispersion.
  • the texture comparison unit 18 may exist independently or may be integrated into the comparison unit 15.
  • the second preset value may also be stored in the storage unit 16 in advance.
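  • a sketch of one possible texture feature value of this kind, taking the dispersion to be the standard deviation of horizontal gray-level differences within the cell; the numeric second preset value is an assumption, since the patent gives no figure.

```python
import numpy as np

SECOND_PRESET = 20.0  # assumed preset dispersion; no number is given in the text

def texture_feature_value(cell):
    """Dispersion of the differential values of one parameter (here the gray
    level) over all pixels of a cell: near zero for uniform paint, large for
    irregularly grown grass."""
    gray = cell.mean(axis=2) if cell.ndim == 3 else cell.astype(float)
    return float(np.std(np.diff(gray, axis=1)))  # horizontal pixel differences

def texture_is_irregular(cell):
    return texture_feature_value(cell) >= SECOND_PRESET
```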
  • the main control module 3 also includes an identification unit 19.
  • the color extraction unit 13, the calculation unit 14, the comparison unit 15, and the storage unit 16 may constitute part of the identification unit 19 in one embodiment, or integrated into the identification unit 19 to form a whole. It may also be a unit component juxtaposed with the identification unit 19 in another embodiment.
  • when the recognition unit 19 recognizes that the proportion of pixels of the predetermined color in a cell reaches or exceeds the first preset value and the texture feature value of the cell reaches or exceeds the second preset value, it determines that the target area corresponding to the cell is a walking area; if the proportion does not reach the first preset value or the texture feature value does not reach the second preset value, it determines that the target area corresponding to the cell is a non-walking area. Therefore, the method for controlling the automatic walking device further includes step S30: reading and recognizing the identification information contained in each cell, thereby obtaining the recognition result of whether the target area corresponding to the cell is the work area.
  • the identification unit 19 of the main control module 3 separately identifies all the cells in the image, thereby obtaining the recognition results of all the cells.
  • the main control module 3 further includes a correction unit 32 that corrects the recognition results of the cells based on a Markov random field model. In this embodiment, the control method therefore further includes step S60: correcting abnormal recognition results among the cells by means of smoothing. This is because, under actual working conditions, the recognition results obtained in step S30 contain a certain amount of error, that is, abnormal recognition results arise.
  • the correction process can correct such abnormal recognition results, thereby improving recognition accuracy. Specifically, every cell in the image necessarily has at least one adjacent cell, and the correction is achieved by jointly considering the recognition results of the adjacent cells and the recognition result of the cell itself.
  • the correction unit 32 includes an information extraction unit 20 and an information change unit 21.
  • Step S60 includes steps S61, S62, S63, and S64.
  • step S61 means that, for each cell, the information extracting unit 20 extracts the recognition results of all cells adjacent to that cell; step S62 means that the calculating unit 14 counts the number of adjacent cells having the same recognition result as the cell and calculates the proportion of that number in the total number of adjacent cells. For example, if the recognition result of the cell is the work area, the calculating unit 14 counts how many of its adjacent cells are also recognized as the work area and calculates the proportion of that number in the total number of adjacent cells.
  • step S63 compares the proportion with a fourth preset value. If the proportion is greater than or equal to the fourth preset value (which is usually not less than 50%, such as 50% or 75%), the adjacent cells sharing the recognition result account for most of all adjacent cells, so the information changing unit 21 keeps the recognition result of the cell unchanged. If the proportion is less than the fourth preset value, the information changing unit 21 changes the recognition result of the cell to the other recognition result; for example, a cell originally recognized as the work area is changed to the non-work area. Conversely, if a cell was originally recognized as a non-work area while the recognition results of most of its adjacent cells do not match it, the cell's recognition result may have been caused by an error, so it is corrected to the work area.
  • the adjacent positions here are not limited to the four directions up, down, left, and right, and may extend to the eight directions including upper left, upper right, lower left, and lower right.
  • likewise, the original recognition result of the cell is not limited to the work area; the same procedure applies when it is the non-work area.
  • the final step S64 means that the above steps S61 to S63 are performed on all the cells, that is, the correction of the recognition results of the entire image is completed.
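  • steps S61–S64 above amount to a single majority-vote smoothing pass; a sketch follows, reusing the earlier hypothetical `neighbors` helper, with True/False standing for the work/non-work recognition results.

```python
FOURTH_PRESET = 0.5  # greater than or equal to 50% per the text

def majority_smooth(labels, neighbors_fn):
    """One pass of steps S61-S64 over a dict (row, col) -> bool
    (True = work area)."""
    corrected = {}
    for rc, label in labels.items():
        nbr_labels = [labels[n] for n in neighbors_fn(*rc) if n in labels]
        if not nbr_labels:                 # isolated cell: nothing to vote with
            corrected[rc] = label
            continue
        same = sum(1 for n in nbr_labels if n == label)
        # keep the result when enough neighbors agree, otherwise flip it
        corrected[rc] = label if same / len(nbr_labels) >= FOURTH_PRESET else not label
    return corrected
```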
  • step S60 includes steps S66, S67, S68, S69, S70, S71, and S72.
  • Step S66 first acquires the reliability of the recognition result of the cell. Reliability is usually a value between 0 and 100%. Of course, the reliability can also be other forms of values.
  • when the cell has 8 cells adjacent to it, 8 same-class reliabilities and 8 different-class reliabilities are obtained through steps S68 and S69. Specifically, the reliabilities of the eight adjacent cells are recorded as the same-class reliabilities Ya, Yb, Yc, ..., and the different-class reliabilities are recorded as Na, Nb, Nc, .... Then, in step S70, the eight same-class reliabilities are weighted and summed to obtain Y2; in this embodiment the weighting coefficients are equal in size, preferably all 1/8, though they may also take different values. In the same way, the eight different-class reliabilities are weighted and summed to obtain N2; the weighting coefficients of the eight different-class reliabilities may be the same and may coincide with those of the same-class reliabilities.
  • α here is a coefficient, and it may be the same as or different from the weight values of the previous steps.
  • the comparison process can be carried out in the comparison unit 15, or in other components.
  • as for the result of the comparison: if Y1+αN1 is greater than or equal to Y2+αN2, the information changing unit 21 keeps the recognition result of the cell unchanged; if Y1+αN1 is less than Y2+αN2, the information changing unit 21 changes the recognition result of the cell. The above process is then performed on all cells in the image through step S72, with every cell participating in the iterative loop, until the recognition results of all cells no longer change.
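  • a sketch of the iterative correction of steps S66–S72 under stated assumptions: the patent does not spell out how a neighbor's reliability relates to the cell's own label, so this version lets a disagreeing neighbor argue for a flip with its own reliability (and an agreeing one with the complement of its reliability), and the value of α is invented.

```python
ALPHA = 0.5  # the coefficient alpha of step S71; its value is an assumption

def reliability_smooth(labels, conf, neighbors_fn, alpha=ALPHA, max_iters=100):
    """Iterate steps S66-S72 until no recognition result changes.

    labels: dict (row, col) -> bool; conf: dict (row, col) -> reliability
    in [0, 1] for the cell's own recognition result.
    """
    labels = dict(labels)
    for _ in range(max_iters):      # safety bound; normally converges sooner
        changed = False
        for rc, label in labels.items():
            y1, n1 = conf[rc], 1 - conf[rc]
            nbrs = [n for n in neighbors_fn(*rc) if n in labels]
            if not nbrs:
                continue
            # assumed semantics: aggregate neighbor evidence for flipping
            flip = [conf[n] if labels[n] != label else 1 - conf[n] for n in nbrs]
            y2 = sum(flip) / len(flip)   # equal weighting coefficients (e.g. 1/8)
            n2 = 1 - y2
            if y1 + alpha * n1 < y2 + alpha * n2:   # S71/S72: outweighed -> change
                labels[rc] = not label
                changed = True
        if not changed:
            break
    return labels
```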
  • the method for controlling the automatic walking device further includes a step S40 for determining whether the target region corresponding to the sub-image block including the plurality of cells is a walking region.
  • the automatic walking device 1 includes a judging unit 22 for performing this step.
  • the judging unit 22 includes a sub-picture block dividing unit 23 for dividing the image into a plurality of sub-image blocks.
  • the specific division is as follows: step S40 includes steps S41, S42, and S43. First, in step S41, the sub-image block dividing unit 23 divides the image into a plurality of sub-image blocks according to the traveling directions of the automatic walking device, each sub-image block corresponding to a different walking direction.
  • the sub-picture block dividing unit 23 divides the image into three sub-image blocks of the middle portion, the left portion, and the right portion, respectively corresponding to the sub-regions in the target region.
  • the middle portion corresponds to the middle area a directly in front of the automatic walking device 1, roughly as wide as the device itself; the left portion corresponds to the area in front of the automatic walking device 1 to the left of the middle area a; and the right portion corresponds to the right area c in front of the automatic walking device 1, to the right of the middle area a.
  • the three sub-image blocks each contain a plurality of cells.
  • the sub-picture block dividing unit 23 may further divide the image into five different sub-image blocks such as the front side, the left front side, the left side, the right front side, and the right side. Since each sub-image block includes a plurality of cells, the judging unit 22 judges whether the target region corresponding to the sub-image block is a walking region or a non-walking region by the recognition result of all the cells in the sub-image block. Specifically, it is assumed that a total of 60 cells in three rows of cells located at the front end of the image are used as the middle sub-image block.
  • the information extracting unit 20 of the automatic walking device 1 extracts the recognition results of all cells in the middle sub-image block, and the calculating unit 14 counts the number of cells whose recognition result is the work area, marking the number as A.
  • the number of cells whose recognition result is a non-working area can also be counted.
  • the comparison unit 15 compares the number A of cells whose recognition result is the walking area with the third preset value. When A, or the proportion of A among all cells of the sub-image block, is greater than or equal to the third preset value, the determining unit 22 determines that the sub-image block is a walking area.
  • the third preset value in this embodiment is pre-stored in the storage unit 16, and may be a value of 30, 40, 50 or the like.
  • the automatic walking device 1 may also take the proportion of cells recognized as the walking area (or non-walking area) among all cells of the sub-image block as the parameter, compared against another third preset value; in that embodiment the third preset value is greater than or equal to 50%, and may be 50%, 60%, 90%, and so on.
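  • steps S41–S43 reduce to a counting threshold per sub-image block; in the sketch below the middle/left/right geometry is a hypothetical equal split into column bands, since the text leaves the exact block shapes open.

```python
def block_is_walkable(labels, block_cells, third_preset=0.5):
    """Steps S41-S43: B = number of cells in the sub-image block,
    A = number recognized as the work area; the block is walkable when
    A:B reaches the third preset value (an absolute count such as 40
    can be used instead, per the text)."""
    b = len(block_cells)
    a = sum(1 for rc in block_cells if labels[rc])
    return a / b >= third_preset

def middle_left_right(rows=20, cols=20):
    """Hypothetical middle/left/right split into three equal column bands;
    the patent does not fix the exact geometry of the blocks."""
    third = cols // 3
    left = [(r, c) for r in range(rows) for c in range(third)]
    middle = [(r, c) for r in range(rows) for c in range(third, 2 * third)]
    right = [(r, c) for r in range(rows) for c in range(2 * third, cols)]
    return left, middle, right
```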
  • in step S50, the automatic walking device 1 is controlled, according to the determination result, to advance, retreat, turn left, or turn right.
  • the autonomous walking apparatus 1 performs a specific response action.
  • the action of the walking module 4 to control the automatic walking device 1 to respond includes: forward (F), backward (B), left (L), right (R), and no change (N).
  • since the recognition result of each sub-image block can be either a walking area or a non-walking area, there are eight different situations: 1. the left, middle, and right are all walking areas; 2. the left and middle are walking areas and the right is a non-walking area; 3. the left and right are walking areas and the middle is a non-walking area; 4. the left is a walking area and the middle and right are non-walking areas; 5. the left is a non-walking area and the middle and right are walking areas; 6. the left and right are non-walking areas and the middle is a walking area; 7. the left and middle are non-walking areas and the right is a walking area; 8. the left, middle, and right are all non-walking areas.
  • for situation 1, the main control module 3 causes the walking module 4 to perform the action of no change (N); for situation 2, the action of turning left and advancing (LF); for situation 3, the action of reversing, turning left, and advancing (BLF); for situation 4, the action of reversing, turning left, and advancing (BLF); for situation 5, the action of turning right and advancing (RF); for situation 6, the action of reversing, turning right, and advancing (BRF); for situation 7, the action of reversing, turning right, and advancing (BRF); and for situation 8, the action of reversing, turning right, and advancing (BRF) or reversing, turning left, and advancing (BLF).
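  • read as pairing the eight situations, in order, with the eight listed responses, the decision logic becomes a small lookup table; the pairing follows the order of the lists above and is an interpretation, not something the text states explicitly.

```python
# Hypothetical encoding: keys are (left, middle, right) walkability flags.
ACTIONS = {
    (True,  True,  True):  "N",    # 1: no change
    (True,  True,  False): "LF",   # 2: turn left, advance
    (True,  False, True):  "BLF",  # 3: reverse, turn left, advance
    (True,  False, False): "BLF",  # 4
    (False, True,  True):  "RF",   # 5: turn right, advance
    (False, True,  False): "BRF",  # 6: reverse, turn right, advance
    (False, False, True):  "BRF",  # 7
    (False, False, False): "BRF",  # 8: BRF or BLF; one option hard-coded here
}

def walking_action(left_ok, middle_ok, right_ok):
    return ACTIONS[(left_ok, middle_ok, right_ok)]
```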
  • when the current target area is determined to be a walking area, the automatic walking device 1 may continue to execute its original walking strategy, for example maintaining its original walking state; when the current target area is determined to be a non-walking area, the automatic walking device 1 changes its walking direction, and furthermore may preferentially walk in a direction away from that sub-image block. Since the image has a plurality of sub-image blocks, the automatic walking device 1 needs to identify the walking or non-walking area for each of them and adopt a corresponding strategy; in a preferred embodiment, the device can identify the plurality of sub-image blocks simultaneously.
  • for example, if all three sub-image blocks are detected to be walking areas, the automatic walking device keeps moving forward; if all three sub-image blocks are detected to be non-walking areas, the automatic walking device turns 180 degrees and moves back; if the middle and left sub-image blocks are detected to be non-walking areas while the right part is a walking area, the automatic walking device moves toward the right, which can be done by turning right directly, or by backing up first and then turning right, among many other specific ways.
  • the judging process of the automatic walking device 1 may further include dividing the sub-image blocks a plurality of times through the sub-image block dividing unit 23 and then performing a comprehensive judgment.
  • the area corresponding to each divided sub-image block may be different.
  • the judgment results of different regions are comprehensively considered, and the error of the strategy formulation caused by the inaccuracy of the judgment result of the single region is avoided, and the accuracy of the walking of the automatic walking device 1 is improved.
  • a total of 60 cells in the three rows of cells at the front end of the image are taken as the middle sub-image block.
  • in one pass of recognition, the object of the sub-image block judgment is those 60 cells; in another pass, a total of 80 cells in the four rows at the front end of the image are used as the middle sub-image block, and the object of the judgment is those 80 cells.
  • the third preset value used in the two judgments is also different, and can be, but is not limited to, 60. The two recognitions are then combined as a new judgment basis: for example, when the judgment condition is that 40 of the 60 cells constituting the three rows are recognized as the walking area and 60 of the 80 cells constituting the four rows are recognized as the walking area, the middle can be considered a walking area; if the two judgment conditions cannot be satisfied at the same time, the middle portion is determined to be a non-walking area. The same approach can of course be used for the left and right parts.
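  • a sketch of this double-division check under the 40-of-60 and 60-of-80 example; which rows of the cell grid form the "front end" of the image is an assumption here.

```python
def middle_walkable_two_divisions(labels, cols=20, front_rows=range(17, 20)):
    """Two divisions of the middle sub-image block combined with AND:
    40 of the 60 cells in three front rows, and 60 of the 80 cells in
    four front rows, must be recognized as the walking area."""
    rows3 = [(r, c) for r in front_rows for c in range(cols)]        # 60 cells
    rows4 = [(r, c) for r in range(min(front_rows) - 1, 20)
             for c in range(cols)]                                   # 80 cells
    return (sum(labels[rc] for rc in rows3) >= 40 and
            sum(labels[rc] for rc in rows4) >= 60)
```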
  • the method for controlling the automatic walking device further includes a step S80 set between the above steps S40 and S50.
  • in step S80, comprehensive filtering is performed on the judgments of the sub-image blocks across the plurality of images to obtain the final determination result of whether the sub-image block corresponds to a walkable area.
  • the target area can be photographed multiple times within a certain period of time to form a multi-frame image. Then, the judgment information included in each frame image is comprehensively filtered to obtain a final judgment result.
  • Step S80 includes at least steps S81, S82 and step S84.
  • step S80 also includes a step S83 between S82 and S84.
  • the specific method is as follows:
  • in step S81, the image acquisition device 2 captures multiple frames of images of the same target area; the frames are referred to as the first frame image, the second frame image, ..., and the Nth frame image.
  • the automatic walking device 1 further includes a recording unit 33 that operates on a weight value according to the determination result of the sub-image block obtained in step S82. Specifically, when the determining unit 22 determines that the sub-image block in the first frame image is a walking area, the recording unit 33 adds a fifth preset value to the initial weight value.
  • for convenience of description, the initial weight value can be marked as 0, though it can of course be marked as another value.
  • the fifth preset value may be a preset fixed constant or a varying function; in this embodiment the fifth preset value may be, but is not limited to, 3.
  • the recording unit causes the corresponding weight value to become 3.
  • the recognition result of the second frame image is then processed. If the recognition result of the second frame image is also the walking area, the recording unit 33 of the autonomous walking apparatus 1 adds a fifth preset value to the current weight value. At this time, the corresponding weight value becomes 6. If the recognition result of the second frame image is not the walking area, the recording unit 33 does not change the current weight value.
  • the recognition result of the third frame image is processed. If the recognition result of the third frame image is also the walking area, the current weight value becomes 9. This continues until the Nth frame of the image. Further, through step S84, the comparing unit 15 further compares the current weight value with a seventh preset value. When the current weight value is greater than or equal to the seventh preset value, it is determined that the determination result is correct, that is, the current target area is indeed the walking area.
  • the seventh preset value can be set to 8, for example. In this way, the recognition result of the multi-frame image is comprehensively considered, thereby avoiding the adverse effects caused by the erroneous result of the possible existence of the single-frame image.
  • the judgment of each frame of image can be decomposed into the judgment of each sub-image block of that image.
  • each frame image can be decomposed into the middle, left, and right sub-image blocks, and the recording unit records the three sub-image blocks separately, maintaining three corresponding weight values.
  • step S83 further includes: during the switch from one frame image to the next, the recording unit 33 also subtracts a sixth preset value from the current weight value, so that by the time the current weight value reaches or exceeds the seventh preset value, the images of more frames have been taken into account, further improving accuracy.
  • the sixth preset value may be, but is not limited to, 1. For example, if the image recognition result of the first frame is the walking area, the current weight value becomes 3. When the second frame image recognition result is a non-walking area, the current weight value becomes 2. When the image recognition result of the third frame is the walking area, the current weight value becomes 4. When the image recognition result of the fourth frame is the walking area, the current weight value becomes 6.
  • when the current weight value reaches or exceeds the seventh preset value, the current target area is determined to be the walking area; if the weight value never reaches the seventh preset value, the current target area is determined to be a non-walking area.
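  • steps S81–S85, together with the per-frame decrement and the floor-at-minimum refinement described below, amount to a small temporal filter; the sketch uses the preset values 3, 1, and 8 suggested in the text and reproduces its worked example.

```python
FIFTH_PRESET = 3    # added when a frame's judgment is "walkable area"
SIXTH_PRESET = 1    # subtracted at each switch to the next frame
SEVENTH_PRESET = 8  # threshold for the final determination

def temporal_filter(frame_judgments):
    """Accumulate a weight value over the per-frame judgments of one
    sub-image block and report whether it ever reaches the threshold."""
    weight = 0  # initial weight value, marked as 0 for convenience
    for i, walkable in enumerate(frame_judgments):
        if i > 0:
            weight = max(0, weight - SIXTH_PRESET)  # refined rule: floor at 0
        if walkable:
            weight += FIFTH_PRESET
        if weight >= SEVENTH_PRESET:
            return True   # the target area is determined to be walkable
    return False

# Worked example from the text: walkable, non-walkable, walkable, walkable
# gives weights 3, 2, 4, 6 and never reaches the threshold of 8.
assert temporal_filter([True, False, True, True]) is False
```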
  • in other embodiments, the parameter operations may instead be tied to the non-walking judgment; that is, the identification conditions of the walking area and the non-walking area are interchanged.
  • the calculation rule of the weight value can be further refined; for example, it can be set so that once the weight value has decreased to a minimum value under any condition, for example to 0, it does not continue to decrease.
  • the present invention is not limited to the specific embodiment structures described; structures based on the inventive concept all fall within the scope of protection of the present invention.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A method for controlling the walking of an automatic walking device (1), characterized by comprising the following steps: S10, acquiring an image of the walking target area of the automatic walking device (1); S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying, according to the color information of specified pixels in the cell and the texture feature value of the cell, whether the target area corresponding to the cell is a work area, and obtaining a recognition result; S40, dividing the image into a plurality of sub-image blocks, each sub-image block comprising a plurality of adjacent cells, determining, according to the recognition results of the cells within the sub-image block, whether the target area corresponding to the sub-image block is a walkable area, and obtaining a determination result; S50, controlling the walking direction of the automatic walking device (1) according to the determination result.

Description

Automatic Walking Device and Method of Controlling Its Walking
Technical Field
The present invention relates to an automatic walking device and to a method of controlling the walking of the automatic walking device.
Background Art
With continual advances in computer technology and artificial intelligence, automatic walking devices similar to intelligent robots have begun to enter people's lives. Companies such as Samsung and Electrolux have developed fully automatic vacuum cleaners and brought them to market. These fully automatic vacuum cleaners are usually compact and integrate environmental sensors, a self-drive system, a vacuum system, a battery, and a charging system; without manual control, they can cruise indoors on their own, automatically return to a docking station when energy runs low, dock and recharge, and then continue cruising and vacuuming. Meanwhile, companies such as Husqvarna have developed similar intelligent lawn mowers, which can automatically mow and recharge in the user's lawn without user intervention. Because such an automatic mowing system, once set up, requires no further management effort and frees users from dull, time-consuming, and laborious household chores such as cleaning and lawn maintenance, it has been extremely well received.
The walking area of existing automatic lawn mowers is generally set by laying physical boundary lines, such as wires or fences, and the mower detects the physical boundary line to determine the walking area. The process of laying the boundary is cumbersome, time-consuming, and laborious; moreover, non-grass areas may still exist inside the boundary line, and areas needing mowing may exist outside it. The physical boundary line approach is inflexible and inconvenient.
As for solutions that identify and determine the walking area by electronic means, owing to the diversity of walking areas, existing methods tend to produce noise, and the accuracy of identifying the walking area is very low; this affects the judgment of the automatic walking device, which can easily leave the walking area, impairing its normal operation.
It is therefore necessary to improve the existing technical means so that the walking area can be identified more accurately, facilitating the work of the automatic walking device.
Summary of the Invention
In view of this, one object of the present invention is to provide a method that accurately identifies a target area and controls an automatic walking device to walk accordingly based on the recognition result, and an automatic walking device applying the method.
To achieve the above object, the technical solution adopted by the present invention is: a method for controlling the walking of an automatic walking device, characterized by comprising the following steps: S10, acquiring an image of the walking target area of the automatic walking device; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying, according to the color information of specified pixels in the cell and the texture feature value of the cell, whether the target area corresponding to the cell is a work area, and obtaining a recognition result; S40, dividing the image into a plurality of sub-image blocks, each sub-image block comprising a plurality of adjacent cells, determining, according to the recognition results of the cells within the sub-image block, whether the target area corresponding to the sub-image block is a walkable area, and obtaining a determination result; S50, controlling the walking direction of the automatic walking device according to the determination result.
Preferably, the method further comprises, after the recognition result is obtained, adjusting, for each cell, the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
Preferably, the method further comprises the following steps: S61, selecting a cell and obtaining its recognition result; S62, counting the number of adjacent cells having the same recognition result as that in step S61; S63, calculating the ratio of the number obtained in step S62 to the total number of adjacent cells; S64, if the ratio exceeds or reaches a preset value, keeping the recognition result of the cell selected in step S61 unchanged; if the ratio is less than the preset value, changing the recognition result of the cell selected in step S61, wherein the preset value, referred to below as the fourth preset value, is greater than or equal to 50%.
Preferably, the adjacent cells include cells adjacent to the selected cell in the lateral and longitudinal directions.
Preferably, the adjacent cells further include cells adjacent to the selected cell in directions at a 45-degree angle to the lateral and longitudinal directions.
Preferably, the method further comprises the following steps: S66, selecting a cell and obtaining the reliability Y1 of its recognition result, the reliability Y1 being a value between 0 and 100%; S67, calculating 1-Y1 and marking the result as N1; S68, obtaining the reliabilities Ya, Yb, ... of the recognition results of all adjacent cells of the selected cell, the reliabilities Ya, Yb, ... being values between 0 and 100%; S69, calculating 1-Ya, 1-Yb, ... and marking the results as Na, Nb, ...; S70, weighting and summing Ya, Yb, ... to obtain the weighted sum Y2, and weighting and summing Na, Nb, ... to obtain the weighted sum N2, the weighting coefficients being all the same; S71, respectively calculating Y1+αN1 and Y2+αN2 and comparing their sizes, wherein α is a coefficient; S72, if Y1+αN1 is greater than or equal to Y2+αN2, keeping the recognition result of the selected cell unchanged; if Y1+αN1 is less than Y2+αN2, changing the recognition result of the selected cell.
Preferably, step S40 further comprises the following steps: S41, dividing the image into a plurality of sub-image blocks and obtaining the number of cells contained in each sub-image block, marked as B; S42, collecting the recognition results of the cells in the sub-image block and counting the number of cells whose recognition result is the work area, marked as A; S43, if A:B is less than a third preset value, determining that the target area corresponding to the sub-image block is not a walkable area; otherwise, determining that the target area corresponding to the sub-image block is a walkable area.
Preferably, the method further comprises continuously photographing the same target area to form multiple frames of images, and determining, according to the determination results of the same sub-image block in each frame, whether the target area corresponding to the sub-image block is a walkable area, and obtaining the determination result.
Preferably, the method further comprises the following steps: S81, continuously photographing the same target area to form multiple frames of images; S82, selecting a sub-image block in one of the frames and obtaining its determination result through step S40; S83, setting an initial parameter value and operating on it according to the determination result obtained in step S82: if the determination result is a walkable area, adding a first parameter associated with the determination result to the initial parameter value to form the current parameter value; if the determination result is not a walkable area, keeping the parameter value unchanged; S84, selecting the next frame image and operating on the current parameter value according to the determination result obtained in step S82: if the determination result is a walkable area, adding the first parameter associated with the determination result to the current parameter value to form a new current parameter value; if the determination result is not a walkable area, keeping the current parameter value unchanged; S85, comparing the current parameter value with a threshold; if the current parameter value is greater than or equal to the threshold, determining that the target area corresponding to the sub-image block is a walkable area.
Preferably, step S84 further comprises: after the next frame image is selected and before the current parameter value is operated on, subtracting a preset second parameter from the current parameter value, the second parameter being smaller than the first parameter.
Preferably, the sub-image blocks include three sub-image blocks of a middle portion, a left portion, and a right portion, corresponding respectively to the middle area, the left area, and the right area of the target area.
To achieve the above object, another technical solution adopted by the present invention is: an automatic walking device, characterized by comprising a housing; an image acquisition device on the housing for photographing a target area and generating an image; a walking module driving the automatic walking device; and a main control module connecting the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module includes a dividing unit, an identification unit, a judgment unit, and a control unit; the dividing unit divides the image into a plurality of cells and passes the division result to the identification unit; the identification unit identifies whether the target area corresponding to each cell is a work area and passes the recognition result to the judgment unit; the judgment unit judges whether the area corresponding to a sub-image block containing a plurality of cells is a walkable area and passes the judgment result to the control unit; and the control unit controls the walking direction of the walking module according to the judgment result.
Preferably, the main control module further includes a correction unit which, for each cell, adjusts the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
Preferably, the adjacent cells include cells adjacent to the selected cell in the lateral and longitudinal directions.
Preferably, the adjacent cells further include cells adjacent to the selected cell in directions at a 45-degree angle to the lateral and longitudinal directions.
Preferably, the judgment unit further includes a sub-image block dividing unit which divides the image into a plurality of sub-image blocks, and the judgment unit judges, according to the recognition results of the cells contained in a sub-image block, whether the corresponding sub-image block is a walkable area.
Preferably, the sub-image blocks include three sub-image blocks of a middle portion, a left portion, and a right portion.
Preferably, the main control module further includes a recording unit recorded with an initial parameter value; the image acquisition device continuously photographs the same target area to form multiple frames of images, the judgment unit judges the same sub-image block in each frame and obtains a judgment result, the recording unit operates on the parameter value according to the judgment result, and when the parameter value is greater than or equal to a threshold, the target area corresponding to the sub-image block is determined to be a walkable area.
Compared with the prior art, the beneficial effects of the present invention are: the image of the target area is divided into cells, each cell is recognized microscopically, and the recognition results of multiple cells are combined macroscopically for comprehensive discrimination, thereby improving the accuracy of identifying the target area and helping the automatic walking device walk more accurately in the target area.
Another object of the present invention is to provide a method of accurately identifying a target area and an automatic walking device applying the method.
To achieve the above object, one technical solution adopted by the present invention is: a method of identifying the target area in which an automatic walking device walks, characterized in that the identification method comprises the following steps: S10, acquiring an image of the walking target area of the automatic walking device; S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell; S30, identifying, according to the color information of specified pixels in the cell and the texture feature value of the cell, whether the target area corresponding to the cell is a work area, and obtaining a recognition result; S60, for each cell, changing or maintaining the recognition result obtained in step S30 according to the recognition results of its adjacent cells.
Preferably, step S60 further comprises the following steps: S61, designating a cell and obtaining its recognition result; S62, counting the number of adjacent cells having the same recognition result as that in step S61; S63, calculating the ratio of the number obtained in step S62 to the total number of adjacent cells; S64, if the ratio exceeds or reaches a fourth preset value, keeping the recognition result of the cell designated in step S61 unchanged; if the ratio is less than the fourth preset value, changing the recognition result of the cell designated in step S61, wherein the fourth preset value is greater than or equal to 50%; S65, performing the above steps S61 to S64 on all cells.
Preferably, the adjacent cells include cells adjacent to the cell in the lateral and longitudinal directions.
Preferably, the adjacent cells further include cells adjacent to the cell in directions at a 45-degree angle to the lateral and longitudinal directions.
Preferably, step S60 further comprises the following steps: S66, designating a cell and obtaining the reliability Y1 of its recognition result, the reliability Y1 being a value between 0 and 100%; S67, calculating 1-Y1 and marking the result as N1; S68, obtaining the reliabilities Ya, Yb, ... of the recognition results of all adjacent cells of the designated cell, the reliabilities Ya, Yb, ... being values between 0 and 100%; S69, calculating 1-Ya, 1-Yb, ... and marking the results as Na, Nb, ...; S70, weighting and summing Ya, Yb, ... to obtain the weighted sum Y2, and weighting and summing Na, Nb, ... to obtain the weighted sum N2, the weighting coefficients being all the same; S71, respectively calculating Y1+αN1 and Y2+αN2 and comparing their sizes, wherein α is a coefficient; if Y1+αN1 is greater than or equal to Y2+αN2, keeping the recognition result of the designated cell unchanged; if Y1+αN1 is less than Y2+αN2, changing the recognition result of the designated cell; S72, performing the above steps S66 to S71 on all cells until the recognition results of all cells no longer change.
To achieve the above object, a further technical solution adopted by the present invention is: an automatic walking device, characterized by comprising a housing; an image acquisition device on the housing for photographing a target area and generating an image; a walking module driving the automatic walking device; and a main control module connecting the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module includes a dividing unit, an identification unit, and a correction unit; the dividing unit divides the image into a plurality of cells; the identification unit identifies whether the target area corresponding to each cell is a work area and passes the recognition result to the correction unit; and the correction unit, for each cell, changes or maintains the recognition result obtained by the identification unit according to the recognition results of the adjacent cells.
Preferably, the cells adjacent to the cell include cells adjacent to it in the lateral and longitudinal directions.
Preferably, the cells adjacent to the cell further include cells adjacent to it in directions at a 45-degree angle to the lateral and longitudinal directions.
Compared with the prior art, the beneficial effects of the present invention are: the image of the target area is divided into cells, each cell is recognized microscopically, and the recognition results of multiple cells are combined macroscopically for correction, thereby improving the accuracy of identifying the target area.
Brief Description of the Drawings
The above objects, technical solutions, and beneficial effects of the present invention can be clearly understood from the following detailed description of specific embodiments capable of realizing the present invention, taken together with the accompanying drawings.
The same reference numerals and symbols in the drawings and the description represent the same or equivalent elements.
FIG. 1 is a schematic diagram of an automatic walking device according to an embodiment of the present invention walking in a target area.
FIG. 2 is a schematic diagram of the automatic walking device of FIG. 1 photographing the target area.
FIG. 3 is a schematic diagram of the automatic walking device of FIG. 1 dividing the target area.
FIG. 4 is a schematic diagram of the modules of the automatic walking device of FIG. 1.
FIG. 5 is a schematic flowchart of a method of controlling the walking of an automatic walking device according to an embodiment of the present invention.
FIG. 6 is a detailed flowchart of step S60, located between steps S30 and S40, in one embodiment of the present invention.
FIG. 7 is a detailed flowchart of step S60, located between steps S30 and S40, in another embodiment of the present invention.
FIG. 8 is a detailed flowchart of one embodiment of step S40 of FIG. 5.
FIG. 9 is a detailed flowchart of step S80, located between steps S40 and S50, in an embodiment of the present invention.
1. automatic walking device    2. image acquisition device    3. main control module
4. walking module              5. working module              6. energy module
9. driving wheel               10. housing                    11. auxiliary wheel
12. dividing unit              13. color extraction unit      14. calculation unit
15. comparison unit            16. storage unit               17. texture extraction unit
18. texture comparison unit    19. identification unit        20. information extraction unit
21. information changing unit  22. judgment unit              23. sub-image block dividing unit
28. target area                32. correction unit            33. recording unit
50. work area                  51. non-work area              52. island
Detailed Description of the Embodiments
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of protection of the invention can be defined more clearly.
图1所示为本发明一实施例的自动行走设备在目标区域行走的示意图。自动行走设备1可以在地面或其他工作表面上自动地行走,在行走的同时也可以进行工作。自动行走设备1可以为自动吸尘器、自动割草机、自动修剪机等。在本实施例中,自动行走设备为自动割草机。根据工作的对象不同,地面可以划分为工作区域50和非工作区域51。工作区域50是指用户想让自动行走设备行走经过并进行工作的区域,而非工作区域51是指用户不想让自动行走设备经过的区域。在本实施例中,由于自动行走设备为自动割草机,其工作为执行割草。因此行走区域50可以但不限定为草地,而非工作区域51可以但不限定为水泥路、大树、池塘、栅栏、木桩、墙角等。通常情况下,草地都是成片成块形成的,而非行走区域可以位于草地的周围,也可以被草地包围而形成孤岛52,所以孤岛52也是一种非行走区域的表现形式。在本发明中,非工作区域51和工作区域50的交界处可以不设置边界线,自动行走设备1利用工作区域50和非工作区域51在视觉上的差异进行识别。
结合图2和图3,自动行走设备1具有壳体10及安装在壳体10上的图像采集装置2。图像采集装置2拍摄自动行走设备1前方区域的图像。位于自动行走设备1前方的地面区域即为自动行走设备行走的目标区域28。目标区域28有可能是工作区域,也可能是非工作区域,也可能是行走区域和非行走区域的集合。而自动行走设备1为了能够执行在行走区域正常行走的目的,必须对当前的目标区域28进行识别。因此自动行走设备1利用图像采集装置2能够对该目标区域28拍摄并形成关于目标区域28的图像。因此控制自动行走设备的方法包括了步骤S10,即生成关于自动行走设备1行走目标区域的图像。在本实施例中,图像采集装置2的取景范围为一固定区域,如固定的视角范围90度至120度。在其他可选实施例中取景范围也可以为活动的,可选取视角范围内一 定角度范围作为实际取景范围,如选取视角范围120度内位于中部的90度范围作为实际取景范围。该图像中包含目标区域的信息,例如目标区域的地形起伏情况、颜色分布情况、纹理情况等。
请参照图4,除了图像采集装置2外,自动行走设备1还包括主控模块3、行走模块4、工作模块5及能量模块6。主控模块3分别与行走模块4、工作模块5、能量模块6以及图像采集装置2均电性相连,起到控制自动行走设备1工作的作用。
The walking module 4 includes a wheel set and a walking motor for driving the wheel set, and the wheel set can be arranged in various ways. Usually the wheel set includes a driving wheel 9 driven by the walking motor and an auxiliary wheel 11 assisting in supporting the housing 10; the number of driving wheels 9 may be one, two or more. As shown in FIG. 2, the moving direction of the automatic walking device 1 is taken as the front side, the side opposite to the front side is the rear side, and the two sides adjacent to the front and rear sides are the left and right sides respectively. In this embodiment, the automatic walking device 1 has two driving wheels 9, namely a left wheel 91 on the left side and a right wheel 92 on the right side, arranged symmetrically with respect to the central axis of the automatic walking device 1. The left wheel 91 and the right wheel 92 are preferably located at the rear of the housing 10 and the auxiliary wheel 11 at the front, although this arrangement may be reversed in other embodiments.
In this embodiment, the left wheel 91 and the right wheel 92 are each coupled with a driving motor, so that differential output can be used to control steering and the device can turn left or right; the left wheel 91 and the right wheel 92 can also output at equal speed, so that the device moves forward or backward. The driving motor may be directly connected with the driving wheel, but a transmission device, such as a planetary gear train common in the art, may also be arranged between the driving motor and the driving wheel 9. In other embodiments, two driving wheels and one driving motor may also be provided; in this case the driving motor drives the left wheel 91 through a first transmission device and the right wheel 92 through a second transmission device, that is, the same motor drives the left wheel 91 and the right wheel 92 through different transmission devices.
The working module 5 is used to perform specific work. In this embodiment, the working module 5 is specifically a cutting module, and includes a cutting component (not shown) for mowing and a cutting motor (not shown) driving the cutting component.
The energy module 6 is used to provide energy for the operation of the automatic walking device 1. The energy source of the energy module 6 may be gasoline, a battery pack or the like; in this embodiment the energy module 6 includes a rechargeable battery pack arranged inside the housing 10. During work, the battery pack releases electric energy to keep the automatic walking device 1 operating; when not working, the battery can be connected to an external power supply to replenish electric energy. In particular, as a more user-friendly design, when the battery power is detected to be insufficient, the automatic walking device 1 will look for a charging docking station (not shown) by itself to replenish electric energy.
As shown in FIG. 3, after the image acquisition device 2 obtains the image of the target area 28, it passes the image to the main control module 3. The main control module 3 includes a dividing unit 12, which divides the image into a plurality of cells. All the cells together compose the whole image, and each cell occupies a part of it, so each cell contains the recognition information of that part of the image. The cells are substantially equal in size. In addition, the plurality of cells form a matrix array extending in the transverse and longitudinal directions respectively: in the transverse direction, about 20 cells are arranged in a row, and in the longitudinal direction, about 20 cells are arranged in a column. In different embodiments, the numbers of cells arranged transversely and longitudinally may differ. Every cell has at least one cell adjacent to it. A cell located in the middle area of the array has four adjacent cells, located above, below, to the left and to the right; in other words, those four cells are adjacent to it in the transverse or longitudinal direction. Adjacency is of course not limited to these four directions: in another embodiment, a cell has eight adjacent cells in the directions of up, down, left, right, upper-left, upper-right, lower-left and lower-right; in other words, in addition to being adjacent in the transverse and longitudinal directions, cells may also be adjacent in directions at a 45-degree angle to the transverse and longitudinal directions. A cell located in the edge area of the array may not have four adjacent cells, but it has at least one. The method for controlling the automatic walking device therefore further includes step S20, namely dividing the image into a plurality of cells, each cell being adjacent to at least one other cell.
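For illustration only, the cell division of step S20 can be sketched in a few lines of Python; the function name, the use of NumPy and the fixed 20 x 20 grid are assumptions made for this sketch rather than details fixed by the embodiment:

    import numpy as np

    def divide_into_cells(image: np.ndarray, rows: int = 20, cols: int = 20):
        # Split an H x W x 3 image into a rows x cols matrix array of cells;
        # trailing pixels that do not fill a whole cell are simply ignored here.
        h, w = image.shape[:2]
        cell_h, cell_w = h // rows, w // cols
        return [
            [image[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
             for c in range(cols)]
            for r in range(rows)
        ]  # result[r][c] is the sub-image of one cell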
After the dividing unit 12 finishes dividing the cells, recognition of whether the target area corresponding to each cell is a working area begins. The specific process is as follows: the main control module 3 first reads the recognition information contained in each cell. In this embodiment, the recognition information contained in a cell includes color information and texture information; in other embodiments it may be color information together with other types of information. Since a cell is a part of the image, and the image contains the information of the target area, the cell necessarily contains the information of the corresponding target area, including of course its color information. Reading this recognition information helps to judge whether the target area corresponding to the cell is a working area or a non-working area. Since the lawn serving as the working area is green, while roads and soil serving as the non-working area are not, a cell whose color information is recognized as green can be considered to correspond to a working area, and a cell whose color information is not green can be considered to correspond to a non-working area. To further improve accuracy, however, note that some non-working areas are also green, for example artificially treated object surfaces painted green; in such cases both areas are green and cannot easily be distinguished from color information alone. Recognition of texture information therefore also needs to be added. A green non-working area usually has a regular texture, whereas the lawn of the working area, although also green, grows irregularly, so its texture is irregular. Accordingly, if the color information of the cell is recognized as green and its texture is irregular, the cell can be determined to correspond to a working area; if the color is not green or the texture is regular, the cell can be determined to correspond to a non-working area. In other embodiments, other information can of course also be recognized to distinguish working areas from non-working areas, which will not be detailed here.
For this purpose, the main control module 3 further includes a color extraction unit 13, a calculation unit 14, a comparison unit 15 and a storage unit 16. The main control module 3 extracts the color information of a cell, compares the color information with preset information, and recognizes whether the cell corresponds to a working area according to the comparison result. The specific method is as follows: each cell actually contains many pixel units, and the color displayed by each pixel unit is single. The function of the color extraction unit 13 is therefore to extract the color of each pixel unit in the cell; in particular, the three primary color (RGB) components are extracted. The preset information is preset information serving as a reference object for comparison; in this embodiment it is a stored range of values of the three primary color components of a predetermined color, and the predetermined color is green. The three primary color components of a pixel are compared with those of the predetermined color: if the components of a pixel respectively fall within the value ranges of the components of the predetermined color, the color of the pixel is judged to be the predetermined color; if they do not fall within the ranges, the color of the pixel is judged not to be the predetermined color. In another embodiment, the storage unit 16 stores a preset hue value range of the predetermined color; after the three primary color components of a pixel are extracted, the obtained RGB components are further converted into HSV (Hue, Saturation, Value) values, and whether the hue value lies within the preset hue value range is checked: if so, the color of the pixel is judged to be the predetermined color, otherwise it is judged not to be.
The calculation unit 14 then calculates the ratio of the number of pixels having the predetermined color to the total number of pixels in the cell (hereinafter the ratio). The comparison unit 15 then compares the ratio with a first preset value; if the ratio exceeds or reaches the first preset value, the color of the cell is determined to be the predetermined color. The first preset value may be 50%, 60% or another value, and may be stored in the storage unit 16.
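A minimal sketch of this color test, assuming an RGB value range for the predetermined color green and a first preset value of 50% (both the bounds and the threshold are illustrative, not values given in the text):

    import numpy as np

    GREEN_LOW = np.array([0, 100, 0])       # assumed lower RGB bounds for green
    GREEN_HIGH = np.array([100, 255, 100])  # assumed upper RGB bounds for green

    def cell_has_predetermined_color(cell: np.ndarray, first_preset: float = 0.5) -> bool:
        # A pixel counts as green when each RGB component falls in its range;
        # the cell counts as green when the share of such pixels reaches the
        # first preset value.
        in_range = np.all((cell >= GREEN_LOW) & (cell <= GREEN_HIGH), axis=-1)
        return in_range.mean() >= first_preset

The HSV variant described above would instead convert each pixel to HSV and test whether its hue falls within the preset hue range.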
Combined with some other information of the cell, whether the cell belongs to the working area or the non-working area can then be recognized; in this embodiment this refers to the texture information of the cell. The main control module 3 further includes a texture extraction unit 17 and a texture comparison unit 18. The texture extraction unit 17 extracts the texture feature value of the cell. The dispersion of at least one parameter over all pixels of a cell reflects the degree of difference between the values taken by that parameter. If the target area is green paint, the dispersion of a parameter in its image is very small, even zero. Since the texture of a lawn is irregular, the dispersion of the difference values of a parameter over all pixels of the cell will be greater than or equal to a preset dispersion, reflecting the irregularity of the texture of the cell. Therefore, in this embodiment, the texture feature value is a parameter dispersion, such as color dispersion, gray-scale dispersion or brightness dispersion.
The texture comparison unit 18 compares the texture feature value of the cell with a second preset value to judge whether the texture feature value reaches the second preset value. In this embodiment, the second preset value is the preset dispersion. The texture comparison unit 18 may exist independently or may be integrated into the comparison unit 15, and the second preset value may also be pre-stored in the storage unit 16.
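The texture test can be sketched similarly. The text only specifies a parameter dispersion compared against a preset dispersion, so the choice of grey-level differences and the thresholds below are assumptions for illustration:

    import numpy as np

    def texture_feature_value(cell: np.ndarray) -> float:
        # One plausible dispersion: the standard deviation of horizontal
        # grey-level difference values inside the cell. A uniform surface
        # such as green paint yields a value near zero.
        grey = cell.mean(axis=-1)        # naive grey conversion (assumption)
        diffs = np.diff(grey, axis=1)    # difference values between neighbouring pixels
        return float(diffs.std())        # dispersion of the difference values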
The main control module 3 further includes a recognition unit 19. In one embodiment, the color extraction unit 13, the calculation unit 14, the comparison unit 15 and the storage unit 16 may constitute a part of the recognition unit 19, that is, be integrated into the recognition unit 19 as a whole; in another embodiment they may be unit components arranged in parallel with the recognition unit 19. When the recognition unit 19 recognizes that the ratio of pixels having the predetermined color in the cell reaches or exceeds the first preset value and that the texture feature value of the cell reaches or exceeds the second preset value, the target area corresponding to the cell is judged to be a working area; if the ratio does not reach the first preset value or the texture feature value does not reach the second preset value, the target area corresponding to the cell is judged to be a non-working area. The method for controlling the automatic walking device therefore further includes step S30, namely reading and recognizing the recognition information contained in each cell, so as to obtain a recognition result of whether the target area corresponding to the cell is a working area.
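Putting the two tests together, step S30 for one cell reduces to a conjunction of the two threshold checks; the preset values below are again placeholders rather than values fixed by the embodiment:

    def cell_is_working_area(green_ratio: float, texture_value: float,
                             first_preset: float = 0.5,
                             second_preset: float = 5.0) -> bool:
        # Step S30: the cell counts as working area only when the green-pixel
        # ratio reaches the first preset value AND the texture dispersion
        # reaches the second preset value.
        return green_ratio >= first_preset and texture_value >= second_preset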
The recognition unit 19 of the main control module 3 recognizes every cell in the image separately, so as to obtain the recognition results of all cells. In a preferred embodiment, the main control module 3 further includes a correction unit 32, which corrects the recognition results of the cells based on a Markov random field model. In this embodiment the control method therefore further includes step S60, namely correcting abnormal recognition results among the cells by means of smoothing. This is because, under actual working conditions, the recognition results obtained through step S30 carry a certain error, that is, abnormal recognition results are produced. The correction process can correct such abnormal results and thereby improve the accuracy of recognition. Specifically, for every cell in the image there necessarily exist adjacent cells, and correction can be achieved by considering the recognition results of the adjacent cells together with the recognition result of the cell itself. The correction unit 32 includes an information extraction unit 20 and an information changing unit 21.
In one embodiment, the correction method is as follows: step S60 includes steps S61, S62, S63 and S64. In step S61, for each cell, the information extraction unit 20 extracts the recognition results of all cells adjacent to that cell. In step S62, the calculation unit 14 counts the number of adjacent cells having the same recognition result as the cell, and the proportion of this number in the total number of adjacent cells; for example, if the recognition result of the cell is working area, the calculation unit 14 counts the number of adjacent cells whose recognition result is also working area and calculates the proportion of this number among all adjacent cells. In step S63, the comparison unit 15 compares this proportion with a fourth preset value (usually the fourth preset value is not less than 50%, and may be 50%, 75%, etc.). If the proportion is greater than or equal to the fourth preset value, the adjacent cells sharing the cell's recognition result constitute the majority of all adjacent cells, so the information changing unit 21 keeps the recognition result of the cell unchanged; if the proportion is less than the fourth preset value, the information changing unit 21 changes the recognition result of the cell to the other result, for example from working area to non-working area. As an example of the whole process: suppose the original recognition result of a cell is working area, and three of its four adjacent cells are also recognized as working area. The proportion (3/4 = 75%) is greater than the fourth preset value (assuming it is 50%), so the recognition result of the cell is considered consistent with those of its neighbors and remains unchanged, still working area. If instead only one of the four adjacent cells is recognized as working area, the proportion (1/4 = 25%) is less than the fourth preset value; the recognition result of the cell is then considered inconsistent with those of its adjacent cells and probably caused by error, so it is corrected to non-working area. It is again emphasized that adjacency here is not limited to the four directions of up, down, left and right, and may also cover the eight directions including upper-left, upper-right, lower-left and lower-right; likewise the original recognition result of the cell is not limited to working area and may also be non-working area. Finally, step S64 applies this method to all cells to complete the correction of the results of the whole image, that is, the above steps S61 to S63 are performed on all cells and the recognition results of all cells are corrected.
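Steps S61 to S63 for a single cell can be sketched as follows, using the 4-neighbourhood (the 8-neighbourhood variant simply adds the diagonal offsets); `results` is assumed to be a 2-D list of booleans with True meaning working area:

    def smooth_cell(results, r, c, fourth_preset=0.5):
        rows, cols = len(results), len(results[0])
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # transverse/longitudinal neighbours
        neighbours = [(r + dr, c + dc) for dr, dc in offsets
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        same = sum(results[i][j] == results[r][c] for i, j in neighbours)
        if same / len(neighbours) >= fourth_preset:
            return results[r][c]        # majority agrees: keep the result
        return not results[r][c]        # minority: flip to the other result

Step S64 then applies this function to every cell of the image.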
In another embodiment, step S60 includes steps S66, S67, S68, S69, S70, S71 and S72. Step S66 first obtains the reliability of the recognition result of a cell; the reliability is usually a value between 0 and 100%, although it may also take other forms. In step S67 the reliability is denoted Y1 and the unreliability N1, where N1 = 1 - Y1; Y1 can also be called the similar reliability and N1 the dissimilar reliability, and both can be stored in the storage unit 16. Then, through step S68, the reliabilities of the cells adjacent to the cell are obtained, each usually a value between 0 and 100%. If the cell has eight adjacent cells, eight similar reliabilities and eight dissimilar reliabilities are obtained through step S69 in a manner similar to step S67; specifically, the reliabilities of the eight adjacent cells are denoted as the similar reliabilities Ya, Yb, Yc, ..., and the dissimilar reliabilities as Na, Nb, Nc, .... Then, through step S70, the eight similar reliabilities are weighted and summed to obtain Y2; in this embodiment the weighting coefficients are equal, preferably all 1/8, although mutually different coefficients may also be used. Similarly, the eight dissimilar reliabilities are weighted and summed to obtain N2; their weighting coefficients may be equal to one another and may be consistent with those of the similar reliabilities. Then, through step S71, Y1+αN1 and Y2+αN2 are compared and corresponding measures are taken. Here α is a weighting coefficient, which may or may not be equal to the weights used in the previous steps, and the comparison may be performed in the comparison unit 15 or in another component. As for the comparison result, if Y1+αN1 is greater than or equal to Y2+αN2, the information changing unit 21 keeps the recognition result of the cell unchanged; if Y1+αN1 is less than Y2+αN2, the information changing unit 21 changes the recognition result of the cell. Finally, through step S72, the above process is performed on all cells in the image; every cell participates in the iterative loop until the recognition results of all cells no longer change.
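A sketch of one iteration of steps S66 to S71 for a single cell, assuming the equal weighting coefficients of this embodiment; the value of α is a free choice:

    def keep_result(y1: float, neighbour_ys: list, alpha: float = 0.5) -> bool:
        # y1: the cell's reliability for its own result; neighbour_ys: the
        # adjacent cells' reliabilities. Returns True when the result is kept.
        n1 = 1 - y1
        y2 = sum(neighbour_ys) / len(neighbour_ys)            # weighted sum, equal weights
        n2 = sum(1 - y for y in neighbour_ys) / len(neighbour_ys)
        # Note: under these equal weights, alpha = 1 makes both sides equal 1,
        # so a coefficient different from 1 is assumed here.
        return y1 + alpha * n1 >= y2 + alpha * n2

Step S72 would repeat this over all cells until no recognition result changes.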
The method for controlling the automatic walking device therefore further includes step S40, which is used to judge whether the target area corresponding to a sub-image block containing a plurality of cells is a walkable area. The automatic walking device 1 includes a judging unit 22 for performing this step. The judging unit 22 includes a sub-image block dividing unit 23 for dividing the image into a plurality of sub-image blocks. In one embodiment, the specific dividing method is as follows: step S40 includes steps S41, S42 and S43. First, through step S41, the sub-image block dividing unit 23 selectively divides the image into a plurality of sub-image blocks according to the walking direction of the automatic walking device, each sub-image block corresponding to a different walking direction. In one embodiment, the sub-image block dividing unit 23 divides the image into three sub-image blocks, a middle portion, a left portion and a right portion, corresponding to sub-areas of the target area respectively. As shown in FIG. 3, the middle portion corresponds to a middle area a located directly in front of the automatic walking device 1 and of the same width as the device; the left portion corresponds to a left area b in front of the automatic walking device 1 and to the left of the middle area a; the right portion corresponds to a right area c in front of the automatic walking device 1 and to the right of the middle area a. Each of the three sub-image blocks contains a plurality of cells. In another embodiment, the sub-image block dividing unit 23 may also divide the image into five different sub-image blocks: directly ahead, left front, left, right front and right. Since each sub-image block contains a number of cells, the judging unit 22 judges, from the recognition results of all cells in a sub-image block, whether the target area corresponding to the sub-image block is a walkable area or a non-walkable area. Specifically, suppose the three rows of cells at the front of the image, 60 cells in total, form the middle sub-image block. In this embodiment, the information extraction unit 20 of the automatic walking device 1 extracts the recognition results of all cells in the middle sub-image block, and the calculation unit 14 calculates the number of cells whose recognition result is working area and marks this number as A; in other embodiments the number of cells whose recognition result is non-working area may of course be counted instead. The comparison unit 15 compares the number of cells recognized as working area with a third preset value. When the number A, or the proportion of A among all cells of the sub-image block, is greater than or equal to the third preset value, the judging unit 22 can determine that the sub-image block is a walkable area. It can also be arranged that the sub-image block is judged to be a walkable area when the number of cells recognized as non-working area is less than a third preset value. The third preset value in this embodiment is pre-stored in the storage unit 16 and may be a value such as 30, 40 or 50. In other embodiments, the automatic walking device 1 may also use as the parameter the proportion of cells recognized as working or non-working area among all cells of the sub-image block, and compare it with another third preset value; in that case the third preset value is greater than or equal to 50%, and may be 50%, 60%, 90%, etc.
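Steps S41 to S43 for one sub-image block then amount to a simple count against the third preset value; the ratio form with a 50% threshold is used below purely for illustration:

    def block_is_walkable(cell_results, third_preset=0.5):
        # cell_results: booleans for the cells of one sub-image block
        # (True = recognized as working area).
        a = sum(cell_results)       # the number marked A in the text
        b = len(cell_results)       # all cells contained in the block
        return a / b >= third_preset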
After judging whether the target area corresponding to each sub-image block is a walkable area or a non-walkable area, the automatic walking device 1 controls, through step S50 and according to the judgment results, the automatic walking device to move forward, move backward, turn left or turn right. According to the judgment results of the judging unit 22, the automatic walking device 1 performs specific response actions. The response actions of the automatic walking device 1 controlled by the walking module 4 include: forward (F), backward (B), left turn (L), right turn (R) and no change (N). In the embodiment in which the image is divided into left, middle and right sub-image blocks, since the judgment result of each sub-image block is either walkable area or non-walkable area, there are eight different situations in total: 1. left, middle and right are all walkable areas; 2. left and middle are walkable areas, right is a non-walkable area; 3. left and right are walkable areas, middle is a non-walkable area; 4. left is a walkable area, middle and right are non-walkable areas; 5. left is a non-walkable area, middle and right are walkable areas; 6. left and right are non-walkable areas, middle is a walkable area; 7. left and middle are non-walkable areas, right is a walkable area; 8. left, middle and right are all non-walkable areas.
In situation 1, the main control module 3 makes the walking module 4 perform the no-change (N) action;
In situation 2, the main control module 3 makes the walking module 4 turn left and move forward (LF);
In situation 3, the main control module 3 makes the walking module 4 move backward, turn left and move forward (BLF);
In situation 4, the main control module 3 makes the walking module 4 move backward, turn left and move forward (BLF);
In situation 5, the main control module 3 makes the walking module 4 turn right and move forward (RF);
In situation 6, the main control module 3 makes the walking module 4 move backward, turn right and move forward (BRF);
In situation 7, the main control module 3 makes the walking module 4 move backward, turn right and move forward (BRF);
In situation 8, the main control module 3 makes the walking module 4 perform one of two actions: move backward, turn right and move forward (BRF), or move backward, turn left and move forward (BLF). A compact lookup of these eight situations is sketched below.
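The eight situations map directly onto a small lookup table; the sketch below encodes each triple (left, middle, right walkable?) with the action named above, arbitrarily picking BRF for situation 8 where the text allows either BRF or BLF:

    ACTION_TABLE = {
        (True,  True,  True):  "N",    # 1: keep going
        (True,  True,  False): "LF",   # 2: turn left, forward
        (True,  False, True):  "BLF",  # 3: back, left, forward
        (True,  False, False): "BLF",  # 4: back, left, forward
        (False, True,  True):  "RF",   # 5: turn right, forward
        (False, True,  False): "BRF",  # 6: back, right, forward
        (False, False, True):  "BRF",  # 7: back, right, forward
        (False, False, False): "BRF",  # 8: BRF or BLF; BRF chosen here
    }

    def choose_action(left: bool, middle: bool, right: bool) -> str:
        return ACTION_TABLE[(left, middle, right)]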
To explain further: when the current target area is judged to be a walkable area, the automatic walking device 1 may continue executing its existing walking strategy, for example keeping its original walking state; when the current target area is determined to be a non-walkable area, the automatic walking device 1 changes its walking direction, and may further selectively walk in a direction away from that sub-image block. Since the image contains multiple sub-image blocks, the automatic walking device 1 needs to determine, for each of them, whether it is a walkable or non-walkable area, and then adopt the corresponding strategy. In a preferred embodiment, the automatic walking device can make these determinations for the multiple sub-image blocks simultaneously. For example, for the middle, left and right sub-image blocks: if all three are detected to be walkable areas, the automatic walking device keeps moving forward; if all three are detected to be non-walkable areas, the automatic walking device turns 180 degrees and moves back the way it came; if the middle and left sub-image blocks are both detected to be non-walkable areas while the right portion is a walkable area, the automatic walking device moves in a direction away from the middle and left portions, that is, toward the rear right, which can be achieved in various specific ways, such as first backing up and then turning right, or first turning right and then backing up.
In a preferred embodiment, the sub-image block dividing unit 23 of the automatic walking device 1 may also divide the sub-image blocks several times and then perform a comprehensive judgment, where the areas covered by the sub-image blocks of each division may differ. In this way the judgment results for different areas are considered together, avoiding strategy errors caused by an inaccurate judgment of a single area and improving the accuracy of the walking of the automatic walking device 1. Specifically, again taking the three rows of cells at the front of the image, 60 cells in total, as the middle sub-image block: in one judgment the object is these 60 cells, while in another recognition pass the four rows of cells at the front of the image, 80 cells in total, may be taken as the middle sub-image block, so that the object of that judgment is these 80 cells. The third preset values used in the two judgments also differ; following the example below, they may be, but are not limited to, 40 and 60 cells respectively. The two recognitions are combined into a new judgment basis: for example, when 40 of the 60 cells forming the three rows are recognized as working area, and 60 of the 80 cells forming the four rows are recognized as working area, the middle portion can be determined to be a walkable area; if the two conditions cannot be satisfied at the same time, the middle portion is determined to be a non-walkable area. The left and right portions can of course be judged in the same way.
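The combined two-pass check of this example reduces to a conjunction; the 40-of-60 and 60-of-80 thresholds come from the example itself, while the function and argument names are illustrative:

    def middle_is_walkable(results_three_rows, results_four_rows):
        # Both conditions must hold: 40 of the 60 three-row cells AND
        # 60 of the 80 four-row cells recognized as working area
        # (each argument is a list of booleans, True = working area).
        return sum(results_three_rows) >= 40 and sum(results_four_rows) >= 60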
In another preferred embodiment, a single image may still capture the information of the target area with distortion; for example, an object sweeping past quickly at a certain moment can cast a shadow on the target area and thus disturb the judgment of that target area by the automatic walking device. In this embodiment the automatic walking device therefore further performs step S80, arranged between the above step S40 and step S50. Through step S80, the sub-image blocks in multiple images are comprehensively filtered to obtain the final judgment result of whether the sub-image block is a walkable area. The target area can be photographed several times within a certain time period to form multiple frames of images, and the judgment information contained in each frame is then comprehensively filtered to obtain the final judgment result.
Step S80 includes at least steps S81, S82 and S84; in a preferred embodiment it further includes step S83 between S82 and S84. The specific method is as follows: through step S81, the image acquisition device 2 photographs the same target area several times to form multiple frames of images, referred to as the first frame, the second frame, ..., the N-th frame. The automatic walking device 1 further includes a recording unit 33; through step S82, the recording unit 33 processes a weight value according to the judgment result for a sub-image block. Specifically, when the judging unit 22 judges the sub-image block in the first frame to be a walkable area, the recording unit 33 adds a fifth preset value to the initial weight value. For convenience of explanation the initial weight value can be marked as 0, although it can also be marked as another value. The fifth preset value may be a preset fixed constant or a varying function; in this embodiment it may be, but is not limited to, 3. When the recognition result of the first frame is walkable area, the recording unit changes the corresponding weight value to 3. The recognition result of the second frame is then processed: if it is also walkable area, the recording unit 33 adds the fifth preset value to the current weight value again, making it 6; if it is not walkable area, the recording unit 33 leaves the current weight value unchanged. The recognition result of the third frame is then processed: if it is also walkable area, the current weight value becomes 9; and so on up to the N-th frame. In addition, through step S84, the comparison unit 15 compares the current weight value with a seventh preset value. When the current weight value is greater than or equal to the seventh preset value, the judgment result is considered correct, that is, the current target area really is a walkable area. The seventh preset value may for example be set to 8. In this way the recognition results of multiple frames are considered together, avoiding the adverse effect of a possibly erroneous result from a single frame. Further, the recognition of each frame can be decomposed into the recognition of each of its sub-image blocks; for example, each frame can be decomposed into left, middle and right sub-image blocks, and the recording unit can keep records for the three sub-image blocks separately, corresponding to three weight values.
In addition, in a preferred embodiment, step S83 provides that during each switch from frame to frame, the recording unit 33 subtracts a sixth preset value from the current weight value, so that it takes longer for the current weight value to reach or exceed the seventh preset value and more frames are taken into comprehensive consideration, further improving the accuracy. In this embodiment the sixth preset value may be, but is not limited to, 1. For example, if the recognition result of the first frame is walkable area, the current weight value becomes 3; when the recognition result of the second frame is non-walkable area, the current weight value becomes 2; when the recognition result of the third frame is walkable area, the current weight value becomes 4; when the recognition result of the fourth frame is walkable area, the current weight value becomes 6. This continues until the weight value reaches or exceeds the seventh preset value, at which point the current target area is determined to be a walkable area; if the weight value never reaches the seventh preset value, the current target area is determined to be a non-walkable area. Those skilled in the art can of course conceive of variations in which the current target area is determined to be a non-walkable area when the weight value reaches or exceeds the seventh preset value, and a walkable area when it never does, that is, swapping the determination conditions for walkable and non-walkable areas. The calculation rule for the weight value can also be refined further; for example, it can be arranged that once the weight value has been reduced to a minimum value under any condition, for example 0, it does not continue to decrease.
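Steps S81, S82 and S84, with the optional decay of step S83, can be sketched for one sub-image block as follows; the constants 3, 1 and 8 are the example values for the fifth, sixth and seventh preset values given above:

    def walkable_over_frames(frame_judgments, fifth=3, sixth=1, seventh=8):
        # frame_judgments: per-frame booleans (True = block judged walkable).
        weight = 0
        for idx, walkable in enumerate(frame_judgments):
            if idx > 0:
                weight = max(0, weight - sixth)  # S83: decay on each frame switch, floored at 0
            if walkable:
                weight += fifth                  # S82: add the fifth preset value
            if weight >= seventh:
                return True                      # S84: weight reached the seventh preset value
        return False

Running this on the example sequence above (walkable, non-walkable, walkable, walkable) yields weights 3, 2, 4, 6, matching the text.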
The present invention is not limited to the specific embodiment structures described above; all structures based on the concept of the present invention belong to the protection scope of the present invention.

Claims (18)

  1. A method for controlling the walking of an automatic walking device, characterized in that the method comprises the following steps:
    S10, acquiring an image of the walking target area of the automatic walking device;
    S20, dividing the image into a plurality of cells, each cell having at least one adjacent cell;
    S30, recognizing, according to color information of specified pixels in a cell and a texture feature value of the cell, whether the target area corresponding to the cell is a working area, and obtaining a recognition result;
    S40, dividing the image into a plurality of sub-image blocks, each sub-image block comprising a plurality of adjacent cells, judging, according to the recognition results of the cells in a sub-image block, whether the target area corresponding to the sub-image block is a walkable area, and obtaining a judgment result;
    S50, controlling the walking direction of the automatic walking device according to the judgment result.
  2. The method according to claim 1, characterized in that the method further comprises, after the recognition results are obtained, adjusting, for each cell, the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
  3. The method according to claim 2, characterized in that the method further comprises the following steps:
    S61, selecting a cell and obtaining its recognition result;
    S62, counting the number of adjacent cells having the same recognition result as that in step S61;
    S63, calculating the ratio of the number obtained in step S62 to the total number of adjacent cells;
    S64, if the ratio exceeds or reaches a preset value, maintaining the recognition result of the cell selected in step S61 unchanged; if the ratio is less than the preset value, changing the recognition result of the cell selected in step S61, wherein the preset value is greater than or equal to 50%.
  4. The method according to claim 3, characterized in that the adjacent cells include cells adjacent to the selected cell in the transverse and longitudinal directions.
  5. The method according to claim 4, characterized in that the adjacent cells further include cells adjacent to the selected cell in directions at a 45-degree angle to the transverse and longitudinal directions.
  6. The method according to claim 2, characterized in that the method further comprises the following steps:
    S66, selecting a cell and obtaining the reliability Y1 of the cell for its recognition result, the reliability Y1 being a value between 0 and 100%;
    S67, calculating 1-Y1 and marking the result as N1;
    S68, obtaining the reliabilities Ya, Yb, ... of all adjacent cells of the selected cell for their recognition results, the reliabilities Ya, Yb, ... being values between 0 and 100%;
    S69, calculating 1-Ya, 1-Yb, ... and marking the results as Na, Nb, ...;
    S70, weighting and summing Ya, Yb, ... to obtain a weighted sum Y2, and weighting and summing Na, Nb, ... to obtain a weighted sum N2, wherein the weighting coefficients are all the same;
    S71, calculating Y1+αN1 and Y2+αN2 respectively and comparing them, where α is a coefficient;
    S72, if Y1+αN1 is greater than or equal to Y2+αN2, maintaining the recognition result of the selected cell unchanged, and if Y1+αN1 is less than Y2+αN2, changing the recognition result of the selected cell.
  7. The method according to claim 1, characterized in that step S40 further comprises the following steps:
    S41, dividing the image into a plurality of sub-image blocks, obtaining the number of cells contained in each sub-image block, and marking it as B;
    S42, collecting the recognition results of the cells in a sub-image block, counting the number of cells whose recognition result is working area, and marking it as A;
    S43, if A:B is less than a third preset value, judging that the target area corresponding to the sub-image block is not a walkable area; otherwise, judging that the target area corresponding to the sub-image block is a walkable area.
  8. The method according to claim 1, characterized in that the method further comprises continuously photographing the same target area to form multiple frames of images, judging, according to the judgment results of the same sub-image block in each frame of image, whether the target area corresponding to the sub-image block is a walkable area, and obtaining a judgment result.
  9. The method according to claim 8, characterized in that the method further comprises the following steps:
    S81, continuously photographing the same target area to form multiple frames of images;
    S82, selecting a sub-image block in one of the frames and obtaining a judgment result through step S40;
    S83, setting an initial parameter value and operating on the parameter value according to the judgment result obtained in step S82: if the judgment result is walkable area, adding a first parameter associated with the judgment result to the initial parameter value to obtain the current parameter value; if the judgment result is not walkable area, keeping the parameter value unchanged;
    S84, selecting the next frame of image and operating on the current parameter value according to the judgment result obtained through step S82: if the judgment result is walkable area, adding the first parameter associated with the judgment result to the current parameter value to obtain a new current parameter value; if the judgment result is not walkable area, keeping the current parameter value unchanged;
    S85, comparing the current parameter value with a threshold; if the current parameter value is greater than or equal to the threshold, determining that the target area corresponding to the sub-image block is a walkable area.
  10. The method according to claim 9, characterized in that step S84 further comprises: after the next frame of image is selected and before the current parameter value is operated on, subtracting a preset second parameter from the current parameter value, the second parameter being smaller than the first parameter.
  11. The method according to claim 1, characterized in that the sub-image blocks include three sub-image blocks, a middle portion, a left portion and a right portion, corresponding to the middle area, the left area and the right area of the target area respectively.
  12. An automatic walking device, characterized by comprising a housing, an image acquisition device located on the housing and used to photograph a target area and generate an image, a walking module driving the automatic walking device to walk, and a main control module connected with the image acquisition device and the walking module to control the operation of the automatic walking device, wherein the main control module includes a dividing unit, a recognition unit, a judging unit and a control unit; the dividing unit divides the image into a plurality of cells and passes the division result to the recognition unit, the recognition unit recognizes whether the target area corresponding to a cell is a working area and passes the recognition result to the judging unit, the judging unit judges whether the area corresponding to a sub-image block containing a plurality of cells is a walkable area and passes the judgment result to the control unit, and the control unit controls the walking direction of the walking module according to the judgment result.
  13. The automatic walking device according to claim 12, characterized in that the main control module further includes a correction unit which, for each cell, adjusts the recognition result of the cell according to the recognition results of the cell and its adjacent cells.
  14. The automatic walking device according to claim 13, characterized in that the adjacent cells include cells adjacent to the selected cell in the transverse and longitudinal directions.
  15. The automatic walking device according to claim 14, characterized in that the adjacent cells further include cells adjacent to the selected cell in directions at a 45-degree angle to the transverse and longitudinal directions.
  16. The automatic walking device according to claim 12, characterized in that the judging unit further includes a sub-image block dividing unit which divides the image into a plurality of sub-image blocks, and the judging unit judges, according to the recognition results of the cells contained in a sub-image block, whether the corresponding sub-image block is a walkable area.
  17. The automatic walking device according to claim 16, characterized in that the sub-image blocks include three sub-image blocks: a middle portion, a left portion and a right portion.
  18. The automatic walking device according to claim 12, characterized in that the main control module further includes a recording unit in which an initial parameter value is recorded; the image acquisition device continuously photographs the same target area to form multiple frames of images, the judging unit judges the same sub-image block in each frame of image and obtains a judgment result, the recording unit operates on the parameter value according to the judgment results, and when the parameter value is greater than or equal to a threshold, the target area corresponding to the sub-image block is determined to be a walkable area.
PCT/CN2017/087021 2016-06-03 2017-06-02 Automatic walking device and method for controlling walking thereof WO2017206950A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610389564.8 2016-06-03
CN201610389387.3A CN107463166A (zh) 2016-06-03 2016-06-03 Automatic walking device and method for controlling walking thereof
CN201610389564.8A CN107463167B (zh) 2016-06-03 2016-06-03 Automatic walking device and target area recognition method
CN201610389387.3 2016-06-03

Publications (1)

Publication Number Publication Date
WO2017206950A1 true WO2017206950A1 (zh) 2017-12-07

Family

ID=60478549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087021 WO2017206950A1 (zh) 2016-06-03 2017-06-02 自动行走设备及其控制行走方法

Country Status (1)

Country Link
WO (1) WO2017206950A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021243895A1 (zh) * 2020-06-02 2021-12-09 苏州科瓴精密机械科技有限公司 Method and system for recognizing working position based on image, robot, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110166701A1 (en) * 2010-01-06 2011-07-07 Russell Thacher Adaptive scheduling of a service robot
CN102662400A (zh) * 2012-05-10 2012-09-12 慈溪思达电子科技有限公司 Path planning algorithm of mowing robot
US20140166047A1 (en) * 2012-12-05 2014-06-19 Vorwerk & Co. Interholding Gmbh Traveling cleaning appliance and method for operating such an appliance
CN103901890A (zh) * 2014-04-09 2014-07-02 中国科学院深圳先进技术研究院 Outdoor automatic walking device based on family yard, and control system and method thereof
CN104111651A (zh) * 2013-04-22 2014-10-22 苏州宝时得电动工具有限公司 Automatic walking device and method for returning to docking station thereof


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17805895

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17805895

Country of ref document: EP

Kind code of ref document: A1