WO2014173290A1 - Automatic walking device and working area judging method thereof - Google Patents

Automatic walking device and working area judging method thereof

Info

Publication number
WO2014173290A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
sub
image
preset
boundary
Prior art date
Application number
PCT/CN2014/075954
Other languages
English (en)
French (fr)
Inventor
田角峰
刘瑜
刘芳世
Original Assignee
苏州宝时得电动工具有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201310141126.6A external-priority patent/CN104111460B/zh
Priority claimed from CN201310140775.4A external-priority patent/CN104111652A/zh
Priority claimed from CN201310140286.9A external-priority patent/CN104111651A/zh
Priority claimed from CN201310140824.4A external-priority patent/CN104111653A/zh
Application filed by 苏州宝时得电动工具有限公司 filed Critical 苏州宝时得电动工具有限公司
Publication of WO2014173290A1 publication Critical patent/WO2014173290A1/zh

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the invention relates to an automatic walking device and a working area determining method thereof.
  • the working area of the existing automatic lawn mower is generally set by physical boundary lines, such as wires or fences, and the automatic mower detects physical boundary lines to determine the working area.
  • the process of laying the boundary wire is cumbersome, time-consuming, and laborious; moreover, there may be non-grass areas inside the boundary line, or areas outside the boundary line that need to be cut.
  • the method of using physical boundary lines is inflexible and inconvenient.
  • the present invention provides an automatic walking device whose working system is simple to set up and user-friendly, whose working area recognition is flexible and convenient, and which is low in cost and easy to install initially.
  • An automatic walking device comprising: a housing; a walking module; an image collecting device mounted on the housing; and a main control module connecting the image collecting device and the walking module to control the operation of the automatic walking device.
  • the image collecting device captures a target area to form an image; the main control module divides the image into several sub-image blocks, each sub-image block corresponding to a sub-area of the target area;
  • the main control module extracts the colors of the respective pixels of at least one sub-image block;
  • the main control module calculates the proportion of a predetermined color in the sub-image block and compares it with a first preset value;
  • the main control module extracts a texture feature value of the sub-image block and compares it with a second preset value; when the proportion of the predetermined color in a sub-image block of the image reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module determines that the sub-area corresponding to the sub-image block is the working area.
  • the main control module includes a sub-area dividing unit, a color extracting unit, a proportion calculating unit, a ratio comparing unit, a texture extracting unit, a texture comparing unit, a work area identifying unit, and a storage unit
  • the storage unit stores the first preset value and the second preset value
  • the sub-area dividing unit divides the image into the sub-image blocks corresponding to the sub-areas of the target area
  • the color extracting unit extracts the colors of each pixel of the at least one sub-image block
  • the proportion calculating unit divides the number of pixels of the predetermined color by the total number of pixels to calculate the proportion of the predetermined color in the sub-image block
  • the ratio comparing unit compares the proportion of the predetermined color in the sub-image block with the first preset value
  • the texture extracting unit extracts a texture feature value of the sub-image block
  • the texture comparison unit compares the texture feature value of the sub-image block with a second preset value
  • the storage unit stores a numerical range for each color component of the predetermined color, and if the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the color extracting unit determines that the color of the pixel is the predetermined color.
  • the color components are the three primary color (RGB) components.
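As a concrete illustration of the color test described in the items above, the following is a minimal sketch assuming green as the predetermined color; the RGB ranges, the first preset value, and all function names are assumptions for illustration, not values from the patent.

```python
import numpy as np

# Assumed numerical ranges for the three primary color components of "green";
# illustrative values only, not the patent's.
GREEN_LOW = np.array([0, 100, 0])
GREEN_HIGH = np.array([120, 255, 120])
FIRST_PRESET_VALUE = 0.8  # assumed proportion threshold

def predetermined_color_proportion(sub_block_rgb: np.ndarray) -> float:
    """Fraction of pixels whose R, G, B components all fall in the preset ranges.

    sub_block_rgb: H x W x 3 uint8 array for one sub-image block.
    """
    in_range = np.all(
        (sub_block_rgb >= GREEN_LOW) & (sub_block_rgb <= GREEN_HIGH), axis=-1
    )
    return float(in_range.mean())

def color_test(sub_block_rgb: np.ndarray) -> bool:
    """True when the proportion reaches or exceeds the first preset value."""
    return predetermined_color_proportion(sub_block_rgb) >= FIRST_PRESET_VALUE
```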
  • the texture feature value is a parameter dispersion degree
  • the second preset value is a preset dispersion degree
  • the storage unit stores a preset dispersion degree and a preset difference value
  • the texture extracting unit calculates, for a sub-image block, the gradient difference of at least one parameter between each two adjacent pixels, determines whether each gradient difference is greater than the preset difference value, and calculates the parameter dispersion of all gradient differences in the sub-image block that are greater than the preset difference value; the texture comparing unit compares the parameter dispersion with the preset dispersion.
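The texture test in the preceding items can be sketched as follows, using grayscale as the analyzed parameter; both thresholds are assumed values, not the patent's.

```python
import numpy as np

PRESET_DIFFERENCE = 8       # assumed preset difference value
PRESET_DISPERSION = 12.0    # assumed second preset value (preset dispersion)

def texture_test(sub_block_gray: np.ndarray) -> bool:
    """Gradient differences of adjacent pixels are computed, those above the
    preset difference are kept, and their dispersion (standard deviation
    here) is compared with the preset dispersion. Irregular, grass-like
    texture yields True."""
    grad = np.abs(np.diff(sub_block_gray.astype(np.int32), axis=1))
    significant = grad[grad > PRESET_DIFFERENCE]
    if significant.size == 0:
        return False  # perfectly uniform block: regular texture
    return float(np.std(significant)) >= PRESET_DISPERSION
```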
  • the main control module further includes a steering control unit; the sub-area dividing unit divides the image into three sub-image blocks of a middle portion, a left portion, and a right portion, respectively corresponding to an intermediate area, a left area, and a right area of the target area; the intermediate area is located at the front center of the automatic walking device, and the left and right areas are respectively located on the left and right sides of the intermediate area along the traveling direction of the automatic walking device; when the work area identifying unit determines that the intermediate area is a non-working area, the steering control unit changes the traveling direction of the automatic walking device until the intermediate area is determined to be the working area.
  • the sub-area dividing unit divides the image into three sub-image blocks of a middle portion, a left portion, and a right portion, respectively corresponding to an intermediate area, a left area, and a right area of the target area; the intermediate area is located at the front center of the automatic walking device, and the left and right areas are respectively located on the left and right sides of the intermediate area along the traveling direction of the automatic walking device.
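A sketch of the three-way division and the steering rule above; classify_block stands in for the combined color-and-texture judgment, and go_straight and turn are hypothetical drive callbacks.

```python
import numpy as np

def split_three(predetermined_block: np.ndarray):
    """Split the predetermined image block into left, middle, right parts."""
    w = predetermined_block.shape[1] // 3
    return (predetermined_block[:, :w],
            predetermined_block[:, w:2 * w],
            predetermined_block[:, 2 * w:])

def steering_step(predetermined_block, classify_block, go_straight, turn):
    """Keep heading while the middle sub-area is working area; else turn."""
    _, middle, _ = split_three(predetermined_block)
    if classify_block(middle):   # True = identified as working area
        go_straight()
    else:
        turn()  # change direction until the middle area is working area
```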
  • the target area is located directly in front of the automatic walking device, and the width of the target area is greater than the width of the automatic walking device.
  • the image capturing device has a viewing angle ranging from 90 degrees to 120 degrees.
  • the automatic walking device is an automatic lawn mower, and the predetermined color is green.
  • a shielding plate is disposed above the image collecting device, and the shielding plate extends outward from a top of the image collecting device.
  • the image collecting device collects an image of the area in front of the housing and transmits it to the main control module; the front area includes at least a predetermined area of the ground in front of the housing, the width of the predetermined area being greater than the width of the housing; the main control module analyzes the predetermined image block of the image corresponding to the predetermined area to monitor whether a boundary exists in the predetermined area; when one sub-area is a non-working area and an adjacent sub-area is the working area, the main control module judges that the boundary is located in that sub-area; when a boundary is monitored, the automatic walking device is moved to the boundary position and walks along the boundary.
  • the main control module controls the walking module to keep the housing in the working area and the boundary is on a specific side of the housing.
  • the image collecting device collects an image and transmits it to the main control module; the main control module divides the predetermined image block of the image into three sub-image blocks of a middle portion, a right portion, and a left portion, respectively corresponding to an intermediate area directly in front of the automatic walking device and equal to it in width, a right area on the right side of the intermediate area, and a left area on the left side of the intermediate area; the main control module controls the walking module to adjust the position of the automatic walking device so that the intermediate area corresponding to the middle portion is identified as the working area while the left or right area corresponding to the left or right portion is identified as a non-working area containing the boundary, thereby keeping the housing in the working area with the boundary on a specific side of the housing.
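The boundary rule just described — the boundary lies in a non-working sub-area adjacent to a working one — might be coded like this; labels is assumed to list the sub-area judgments from left to right.

```python
def boundary_sub_areas(labels: list) -> list:
    """Indices of sub-areas judged to contain the boundary.

    labels[i] is True when sub-area i was identified as working area.
    """
    return [
        i
        for i, working in enumerate(labels)
        if not working
        and ((i > 0 and labels[i - 1]) or (i + 1 < len(labels) and labels[i + 1]))
    ]

# e.g. [True, True, False] -> [2]: the boundary lies in the right sub-area.
```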
  • the main control module further includes a boundary recognition unit, which determines whether the boundary currently being followed leads to the docking station; if not, the main control module controls the walking module so that the automatic walking device leaves the boundary currently being followed.
  • the invention also provides a working area judging method for an automatic walking device, the automatic walking device comprising a housing, a walking module, an image collecting device mounted on the housing, and a main control module connecting the image collecting device and the walking module to control the operation of the automatic walking device.
  • the working area judging method includes the following steps: the image collecting device captures a target area to form an image; the main control module divides the image into several sub-image blocks, each sub-image block corresponding to a sub-area of the target area; the main control module extracts the colors of the respective pixels of at least one sub-image block; the main control module calculates the proportion of a predetermined color in the sub-image block and compares it with a first preset value; the main control module extracts a texture feature value of the sub-image block and compares it with a second preset value; if the proportion of the predetermined color in a sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module determines that the corresponding sub-area is the working area; otherwise,
  • the main control module determines that the sub-region corresponding to the sub-image block is a non-working area.
  • the main control module stores a numerical range for each color component of the predetermined color; the main control module extracts the color components of each pixel of a sub-image block, and if the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the main control module determines that the color of the pixel is the predetermined color.
  • the color component is a three primary color component.
  • the texture feature value is a parameter dispersion degree
  • the second preset value is a preset dispersion degree
  • the main control module stores the preset dispersion and a preset difference value
  • the main control module calculates, for a sub-image block, the gradient difference of at least one parameter between each two adjacent pixels, determines whether each gradient difference is greater than the preset difference value, calculates the parameter dispersion of all gradient differences in the sub-image block that are greater than the preset difference value, and determines whether the parameter dispersion reaches the preset dispersion.
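Putting the method's steps together, a compact end-to-end sketch could look like the following; every threshold and the green color range are invented for illustration, and grayscale serves as the texture parameter.

```python
import numpy as np

# All thresholds and color ranges below are illustrative assumptions.
GREEN_LOW, GREEN_HIGH = np.array([0, 100, 0]), np.array([120, 255, 120])
FIRST_PRESET = 0.8        # color proportion threshold
PRESET_DIFF = 8           # minimum gradient step counted as texture
SECOND_PRESET = 12.0      # preset dispersion (texture threshold)

def judge_sub_block(rgb: np.ndarray) -> str:
    """Apply the color test, then the texture test, to one sub-image block."""
    in_color = np.all((rgb >= GREEN_LOW) & (rgb <= GREEN_HIGH), axis=-1)
    proportion = float(in_color.mean())
    gray = rgb.mean(axis=-1)                  # grayscale as the parameter
    grad = np.abs(np.diff(gray, axis=1))      # adjacent-pixel differences
    kept = grad[grad > PRESET_DIFF]
    dispersion = float(np.std(kept)) if kept.size else 0.0
    if proportion >= FIRST_PRESET and dispersion >= SECOND_PRESET:
        return "working area"
    return "non-working area"

def judge_image(image_rgb: np.ndarray, n_blocks: int = 3) -> list:
    """Divide the image into vertical sub-image blocks and judge each one."""
    w = image_rgb.shape[1] // n_blocks
    return [judge_sub_block(image_rgb[:, i * w:(i + 1) * w])
            for i in range(n_blocks)]
```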
  • the image captured by the image collecting device includes three sub-image blocks of a middle portion, a left portion, and a right portion, respectively corresponding to an intermediate area, a left area, and a right area of the target area, the intermediate area being located at the front center of the automatic walking device.
  • the left and right areas are respectively located on the left and right sides of the intermediate area along the traveling direction of the automatic walking device; when the intermediate area is determined to be a non-working area, the steering control unit changes the traveling direction of the automatic walking device until the intermediate area is judged to be the working area.
  • the working area judging method further comprises the step of controlling the automatic walking device to return to the docking station, the walking module comprising a wheel set mounted on the housing and a traveling motor driving the wheel set.
  • the step of controlling the automatic walking device to return to the docking station comprises the following sub-steps: a. monitoring the predetermined image block of the image collected by the image collecting device, the predetermined image block corresponding to a predetermined area of the ground in front of the housing, to determine whether a boundary appears in the predetermined area; b. if a boundary appears in the predetermined area, controlling the automatic walking device to move to the boundary position; c. walking along the boundary.
  • the width of the predetermined area is greater than the width of the housing, and step a further includes: dividing the predetermined image block into several sub-image blocks corresponding to several sub-areas of the predetermined area; analyzing each sub-image block to identify the corresponding sub-area as either working area or non-working area; when one sub-area is a non-working area and an adjacent sub-area is a working area, judging that the boundary is located in that sub-area.
  • the housing is kept within the working area, and the boundary is kept on a particular side of the housing.
  • the automatic walking device and its working area judging method in the present invention capture an image of the target area through the image collecting device, and the main control module combines color recognition and texture analysis to determine whether at least one sub-area of the target area is a working area, making recognition of the working area more flexible and convenient.
  • the invention also provides an automatic walking device capable of recognizing a boundary and walking along it, comprising: a housing; a walking module comprising a wheel set mounted on the housing and a traveling motor driving the wheel set; an image collecting device mounted on the housing; a working module performing predetermined work; and a main control module connecting the image collecting device, the working module, and the walking module to control the operation of the automatic walking device; the image collecting device collects an image of the area in front of the housing and transmits it to the main control module, the front area including at least a predetermined area of the ground in front of the housing; the main control module analyzes the predetermined image block of the image corresponding to the predetermined area to monitor whether a boundary appears in the predetermined area and, when a boundary is monitored, causes the automatic walking device to move to the boundary position and walk along the boundary.
  • the width of the predetermined area is greater than the width of the housing; the main control module divides the predetermined image block into several sub-image blocks corresponding to several sub-areas of the predetermined area, and analyzes each sub-image block to identify the corresponding sub-area as either working area or non-working area.
  • when one sub-area is a non-working area and an adjacent sub-area is a working area, the main control module determines that the boundary is located in that sub-area.
  • the main control module controls the walking module to keep the housing in the working area, and the boundary is located on a specific side of the housing.
  • the image collecting device collects an image and transmits it to the main control module; the main control module divides the predetermined image block of the image into three sub-image blocks of a middle portion, a right portion, and a left portion, respectively corresponding to an intermediate area directly in front of the automatic walking device and equal to it in width, a right area on the right side of the intermediate area, and a left area on the left side of the intermediate area; the main control module controls the walking module to adjust the position of the automatic walking device so that the intermediate area corresponding to the middle portion is identified as the working area while the left or right area corresponding to the left or right portion is identified as a non-working area containing the boundary, thereby keeping the housing in the working area with the boundary on a specific side of the housing.
  • the main control module further includes a boundary recognition unit, which determines whether the boundary currently being followed leads to the docking station; if not, the main control module controls the walking module so that the automatic walking device leaves the boundary currently being followed.
  • the boundary recognition unit determines the walking direction of the automatic walking device within a preset time or a preset distance and compares the result with a preset standard result; if they are consistent, it judges that the boundary currently being followed leads to the docking station; if they are inconsistent, it judges that the boundary currently being followed does not lead to the docking station.
  • the boundary recognition unit calculates an accumulated deflection amount of the automatic walking device within a preset time or a preset distance, and compares the accumulated deflection amount with a preset value to determine a walking direction of the automatic walking device.
  • the cumulative deflection amount is an accumulated wheel difference of the distance traveled by the left and right wheels of the automatic traveling device, or an accumulated deflection angle of the automatic traveling device.
  • when the specific side is the left side, the preset standard result is clockwise; when the specific side is the right side, the preset standard result is counterclockwise.
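One plausible realization of the direction judgment above uses the accumulated wheel-distance difference; the thresholds, the straight-travel fallback, and all names are assumptions following the items above, not the patent's implementation.

```python
def walking_direction(wheel_increments, preset: float) -> str:
    """wheel_increments: iterable of (left_dist, right_dist) per control tick,
    accumulated over the preset time or preset distance."""
    accumulated = sum(left - right for left, right in wheel_increments)
    if accumulated >= preset:
        return "clockwise"          # net right-hand turning
    if accumulated <= -preset:
        return "counterclockwise"   # net left-hand turning
    return "straight"

def boundary_leads_to_station(wheel_increments, specific_side: str,
                              preset: float) -> bool:
    """Compare the determined direction with the preset standard result."""
    expected = "clockwise" if specific_side == "left" else "counterclockwise"
    return walking_direction(wheel_increments, preset) == expected
```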
  • the main control module further includes a docking station identifying unit, which monitors whether an image of the docking station appears in the image collected by the image collecting device; if the docking station is monitored, the main control module controls the walking module to cause the automatic walking device to travel to the docking station.
  • Another object of the present invention is to provide a method, low in cost and easy to install initially, for returning an automatic walking device to a docking station.
  • the automatic walking device includes: a housing; a walking module including a wheel set mounted on the housing and a traveling motor driving the wheel set; an image collecting device mounted on the housing; a working module mounted on the housing for performing predetermined work; and a main control module connecting the image collecting device, the working module, and the walking module to control the operation of the automatic walking device; the method for returning the automatic walking device to the docking station comprises the following steps: a. monitoring the predetermined image block of the image collected by the image collecting device, the predetermined image block corresponding to a predetermined area of the ground in front of the housing, to determine whether a boundary appears in the predetermined area; b. if a boundary appears in the predetermined area, controlling the automatic walking device to move to the boundary position; c. walking along the boundary.
  • the width of the predetermined area is greater than the width of the housing, and step a further includes: dividing the predetermined image block into several sub-image blocks corresponding to several sub-areas of the predetermined area; analyzing each sub-image block to identify the corresponding sub-area as either working area or non-working area; when one sub-area is a non-working area and an adjacent sub-area is a working area, judging that the boundary is located in that sub-area.
  • the housing is kept within the working area, and the boundary is kept on a particular side of the housing.
  • the image collecting device collects an image and transmits it to the main control module; the main control module divides the predetermined image block of the image into three sub-image blocks of a middle portion, a right portion, and a left portion, respectively corresponding to an intermediate area directly in front of the automatic walking device and equal to it in width, a right area on the right side of the intermediate area, and a left area on the left side of the intermediate area; the main control module controls the walking module to adjust the position of the automatic walking device so that the intermediate area is identified as the working area while the left or right area is identified as a non-working area containing the boundary, thereby keeping the housing within the working area with the boundary on a particular side of the housing.
  • the method for returning the automatic walking device to the docking station further comprises the following steps: d. determining whether the boundary currently being followed leads to the docking station; e. if the result of step d is no, leaving the boundary currently being followed and performing step a.
  • step d further includes the following sub-steps: d1. determining the walking direction of the automatic walking device within a preset time or a preset distance; d2. comparing the result of step d1 with the preset standard result; if they are consistent, judging that the boundary currently being followed leads to the docking station; if they are inconsistent, judging that it does not lead to the docking station.
  • step d1 is specifically: calculating the accumulated deflection amount of the automatic walking device within a preset time or a preset distance, and comparing the accumulated deflection amount with a preset value to determine the walking direction of the automatic walking device.
  • the cumulative deflection amount is an accumulated wheel difference of the distance traveled by the left and right wheels of the automatic traveling device, or an accumulated deflection angle of the automatic traveling device.
  • when the specific side is the left side, the preset standard result is clockwise; when the specific side is the right side, the preset standard result is counterclockwise.
  • the method for returning the automatic walking device to the docking station further comprises the following steps: f. monitoring whether a docking station appears in the image collected by the image collecting device; g. if the docking station is monitored, driving to the docking station.
  • the beneficial effects of the present invention are: the boundary is monitored by means of the image collecting device and the device returns to the docking station along the boundary, which avoids slotting and burying a physical boundary line and makes the arrangement of the working system simple and labor-saving.
  • the present invention further provides an automatic walking device and an obstacle detecting method that can identify an obstacle before colliding with it, and do so with high accuracy.
  • an automatic walking device that walks and works automatically within a working area includes: a housing; a working module; a walking module supporting and driving the automatic walking device; and a main control module controlling the working module and the walking module to operate in a preset manner.
  • the automatic walking device further includes an image collecting device and an ultrasonic detecting device; the image collecting device acquires image information of a predetermined area in front of the automatic walking device, and the main control module determines, based on the image information, whether a non-working area exists in the predetermined area; when a non-working area exists, the main control module compares a size parameter of the non-working area with a preset value; when the size parameter of the non-working area is smaller than the preset value, the ultrasonic detecting device detects whether there is an obstacle in the non-working area.
  • the main control module calculates a size parameter of the non-working area according to the image information, and the size parameter of the non-working area may be at least one of a length, a width, or an area of the non-working area.
  • the preset values are respectively smaller than the length, width, or area of the projection of the automatic walking device on the working area.
  • the main control module is preset with a time threshold.
  • when the time from the ultrasonic detecting device emitting an ultrasonic wave to receiving its echo is less than the time threshold, the main control module determines that there is an obstacle in the non-working area.
  • the main control module then controls the walking module to move the automatic walking device away from the obstacle.
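The size-parameter gate described above reduces to a comparison against the mower's footprint; the footprint values below are assumptions added for illustration.

```python
# Assumed projection of the automatic walking device on the working area (m).
MOWER_LENGTH, MOWER_WIDTH = 0.7, 0.5

def needs_ultrasonic_check(region_length: float, region_width: float) -> bool:
    """True when the vision-detected non-working region is smaller than the
    mower's footprint, i.e. it may be an obstacle rather than a genuine
    non-working area, so the ultrasonic check is triggered."""
    return region_length < MOWER_LENGTH and region_width < MOWER_WIDTH
```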
  • an obstacle detecting method for an automatic walking device that walks automatically in a working area, the obstacle detecting method comprising the following steps: a. acquiring image information of a predetermined area in front of the automatic walking device through an image collecting device; b. determining, based on the image information, whether there is a non-working area in the predetermined area; c. when there is a non-working area, comparing a size parameter of the non-working area with a preset value; d. when the size parameter of the non-working area is less than the preset value, detecting by an ultrasonic detecting device whether the non-working area contains an obstacle.
  • in step b, whether there is a non-working area is determined by identifying the colors and textures in the image information.
  • in step c, the size parameter of the non-working area is calculated from the image information, and the size parameter may be at least one of the length, width, or area of the non-working area.
  • the preset values are respectively smaller than the length, width, and area of the projection of the automatic walking device on the working area.
  • the obstacle detecting method further comprises: comparing the time from when the ultrasonic detecting device emits an ultrasonic wave to when it receives the echo with a preset time threshold; when that time is less than the preset time threshold, determining that the non-working area contains an obstacle.
  • the obstacle detecting method of the automatic walking device further comprises: when an obstacle exists in the predetermined area, moving the automatic walking device away from the obstacle.
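The echo-time comparison amounts to a one-line test; the speed of sound and the 1 m example below are added for illustration only.

```python
SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def obstacle_in_region(echo_time_s: float, time_threshold_s: float) -> bool:
    """Obstacle present when the echo returns before the preset time threshold."""
    return 0.0 < echo_time_s < time_threshold_s

# A threshold for obstacles within 1 m corresponds to a 2 m round trip:
# time_threshold_s = 2.0 / SPEED_OF_SOUND ≈ 0.0058 s.
```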
  • the automatic walking device and the obstacle detecting method provided by the invention enable obstacle recognition in the working area through the image collecting device and the ultrasonic detecting device without directly colliding with the obstacle, so the automatic walking device is not easily damaged by collisions with obstacles and recognizes obstacles with high accuracy.
  • the invention also provides a docking method for docking an automatic walking device and a docking station, wherein the automatic walking device is provided with an image collecting device, the docking station is provided with a base, and the docking station is installed at a fixed position through the mounting plane of the base.
  • the docking method includes the following steps: a. collecting environment image information of the current position of the automatic walking device through the image collecting device; b. determining from the environment image information whether there is a docking station around the current position of the automatic walking device; c. when there is a docking station around the current position, judging whether the automatic walking device and the docking station are facing each other; d. when the automatic walking device and the docking station are facing each other, controlling the automatic walking device to approach the docking station along the facing direction.
  • step b includes: b1) identifying whether the environment image information includes a preset color; b2) when the environment image information includes the preset color, extracting the sub-region having the preset color; b3) acquiring the contour of the sub-region; b4) determining whether the contour of the sub-region matches a preset contour; b5) when the contour of the sub-region matches the preset contour, determining that there is a docking station around the current position of the automatic walking device.
  • the step b3) comprises: performing gray processing on the sub-area according to a preset color to obtain a gray-scale image, and performing gradient difference processing on the gray-scale image to obtain an outline of the sub-area.
  • step b4) includes: obtaining a feature quantity that characterizes the contour of the sub-region; determining whether the feature quantity matches a preset feature quantity; and determining, from the result of that match, whether the contour of the sub-region matches the preset contour.
  • the contour of the sub-region includes a boundary contour of the sub-region and an inner contour of the sub-region
  • the feature amount characterizes at least one of a boundary contour or an inner contour of the sub-region.
  • the feature quantity is at least one of a parameter of a boundary contour of the sub-area, a parameter of an internal contour, or a ratio between a parameter of the boundary contour and a parameter of the internal contour, and the parameter includes a length, a height, and At least one of shape and area.
  • the preset contour is set according to a projection of the docking station taken in a direction parallel to the mounting plane.
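Steps b3) and b4) could be sketched with OpenCV as below; Canny edge detection stands in for the grayscale-plus-gradient-difference processing, and the perimeter-based feature quantity with its preset range is an illustrative choice, not the patent's.

```python
import cv2
import numpy as np

def contour_matches(sub_region_bgr: np.ndarray,
                    preset_perimeter: tuple) -> bool:
    """b3): grayscale and gradient processing to obtain the contour;
    b4): compare a feature quantity of the contour with a preset range."""
    gray = cv2.cvtColor(sub_region_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # gradient-based contour extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    # Feature quantity: perimeter of the largest contour found (assumed).
    largest = max(contours, key=cv2.contourArea)
    perimeter = cv2.arcLength(largest, True)
    low, high = preset_perimeter
    return low <= perimeter <= high
```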
  • the docking station includes a feature portion disposed on an outer surface of the docking station body
  • step c includes: identifying the positional relationship of the feature portion of the docking station in the environment image information relative to the central axis of the environment image information, and determining whether the positional relationship satisfies a preset condition; when the positional relationship satisfies the preset condition, it is determined that the automatic walking device and the docking station are facing each other.
  • the characteristic part is a conductive terminal of the docking station, and the conductive terminal is used for electrically connecting the stopping station and the automatic walking device when the automatic walking device is successfully docked with the docking station.
  • the conductive terminal comprises a first terminal and a second terminal; the distance between the first terminal and the central axis of the environment image information is a first distance, and the distance between the second terminal and the central axis of the environment image information is a second distance; the preset condition is that the first and second terminals are located on opposite sides of the central axis of the environment image information and the ratio of the first distance to the second distance is a preset ratio.
  • the predetermined condition is that the conductive terminal is located on a central axis of the environmental image information.
  • the feature portion is a support arm disposed perpendicular to the base, the support arm having a first side and a second side in the direction facing the automatic walking device; the distance between the first side and the central axis of the environment image information is a first interval, the distance between the second side and the central axis of the environment image information is a second interval, and the preset condition is that the ratio of the first interval to the second interval is a preset ratio.
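The facing judgment above — two features straddling the image's central axis with their distances in a preset ratio — can be sketched as follows; the tolerance is an assumption.

```python
def is_facing(x_first: float, x_second: float, axis_x: float,
              preset_ratio: float, tolerance: float = 0.1) -> bool:
    """True when the two features lie on opposite sides of the central axis
    and the ratio of their distances to it is close to the preset ratio."""
    d1 = x_first - axis_x
    d2 = x_second - axis_x
    if d1 * d2 >= 0:          # same side of the axis: not facing
        return False
    ratio = abs(d1) / abs(d2)
    return abs(ratio - preset_ratio) <= tolerance
```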
  • the invention also provides an automatic working system, comprising a docking station and an automatic walking device that can be docked with the docking station; the docking station comprises a base including a mounting plane, the docking station main body being installed at a fixed position through the mounting plane.
  • the automatic walking device includes: an image collecting device that collects environment image information of the current position of the automatic walking device; and a main control module that receives the environment image information transmitted by the image collecting device and includes a first determining component, a second determining component, a signal sending unit, and a storage unit storing preset parameters; the first determining component determines, according to the environment image information and the preset parameters, whether there is a docking station around the current position of the automatic walking device; the second determining component determines, according to the environment image information and the preset parameters, whether the automatic walking device and the docking station are facing each other; the signal sending unit sends corresponding control signals according to the judgment results of the first determining component and the second determining component.
  • the preset parameter includes a preset contour
  • the first determining component includes a color recognizing unit, an area extracting unit, a contour acquiring unit, and a contour determining unit
  • the color recognizing unit identifies whether the environment image information includes a preset color
  • the region extracting unit extracts a sub-region having a preset color
  • the contour acquiring unit acquires a contour of the sub-region
  • the contour determining unit determines whether the contour of the sub-region matches the preset contour; when the contour of the sub-region matches the preset contour, it is determined that there is a docking station around the current position of the automatic walking device.
  • the contour acquiring unit includes a grayscale processing circuit and a gradient difference processing circuit
  • the grayscale processing circuit performs grayscale processing on the sub-region according to a preset color to obtain a grayscale image
  • the gradient difference processing circuit performs gradient difference processing on the grayscale image to obtain the contour of the sub-region.
  • the contour determining unit includes a feature quantity acquiring circuit and a feature quantity matching circuit; the feature quantity acquiring circuit acquires a feature quantity that characterizes the contour of the sub-region, and the feature quantity matching circuit determines whether the feature quantity matches the preset feature quantity; when the feature quantity matches the preset feature quantity, the contour determining unit determines that the contour of the sub-region matches the preset contour.
  • the contour of the sub-region includes a boundary contour of the sub-region and an inner contour of the sub-region, and the feature quantity characterizes at least one of a boundary contour or an inner contour of the sub-region.
  • the feature quantity is at least one of a parameter of a boundary contour of the sub-area, a parameter of an internal contour, or a ratio between a parameter of the boundary contour and a parameter of the internal contour, and the parameter includes a length, a height, and At least one of shape and area.
  • the preset contour is set according to a projection of the docking station taken in a direction parallel to the mounting plane.
  • the preset parameter includes a preset condition
  • the second determining component includes a feature identifying unit and a feature determining unit
  • the feature identifying unit identifies the positional relationship between the feature portion of the docking station in the environment image information and the central axis of the environment image information; the feature judging unit determines whether the positional relationship satisfies a preset condition; when the positional relationship satisfies the preset condition, the second determining component judges that the automatic walking device and the docking station are facing each other.
  • the characteristic part is a conductive terminal of the docking station, and the conductive terminal is used for electrically connecting the stopping station and the automatic walking device when the automatic walking device is successfully docked with the docking station.
  • the conductive terminal comprises a first terminal and a second terminal, wherein the distance between the first terminal and the central axis of the environmental image information in the environmental image information is a first distance, and the second terminal and the central axis of the environmental image information The distance is a second distance, and the preset condition is that the ratio of the first distance to the second distance is a preset ratio.
  • the predetermined condition is that the conductive terminal is located on a central axis of the environmental image information.
  • the feature portion is a support arm disposed perpendicular to the base, the support arm having a first side and a second side in the direction facing the automatic walking device; the distance between the first side and the central axis of the environment image information is a first interval, the distance between the second side and the central axis of the environment image information is a second interval, and the preset condition is that the ratio of the first interval to the second interval is a preset ratio.
  • the invention has the beneficial effect that the automatic walking device can reliably dock with the docking station without human intervention. The drawings of the invention are described as follows:
  • FIG. 1 is a diagram of an automatic working system of an embodiment of the present invention.
  • FIG. 2 is a block diagram of the automatic walking device in the automatic working system shown in Figure 1.
  • Figure 3 is a perspective view of the autonomous vehicle shown in Figure 2.
  • Fig. 4 is a schematic view showing a photographing area of the autonomous walking apparatus shown in Fig. 2.
  • Fig. 5 is a schematic diagram showing the pixel distribution of the image shown in Fig. 3.
  • Fig. 6 is a flow chart showing the first embodiment of the working area judging method of the present invention.
  • Fig. 7 is a flow chart showing the second embodiment of the working area judging method of the present invention.
  • Fig. 8 is a schematic view showing the automatic walking apparatus of the present embodiment keeping a straight line.
  • Fig. 9 is a schematic view showing the automatic traveling apparatus turned to the right in the embodiment.
  • Figure 10 is a schematic illustration of the automatic walking apparatus of Figure 1 walking along a boundary.
  • Figure 11 is a schematic view showing the principle of the automatic walking device of Figure 10 walking along the boundary.
  • Figure 12 is a schematic view of the automatic walking device shown in Figure 1 walking away from an island.
  • Figure 13 is a flow chart showing the method of returning the automatic walking device to the docking station of the present invention.
  • Figure 14 is a flow chart of the method in Figure 13 for identifying whether the boundary currently being followed leads to the docking station.
  • Figure 15 is a schematic view showing the operation of the ultrasonic detecting device of the automatic traveling apparatus of the present invention;
  • Figure 16 is a flow chart showing the obstacle detecting method of the automatic traveling apparatus of the present invention.
  • Figure 17 is a circuit block diagram of another embodiment of the automatic walking device of the present invention.
  • FIG. 18 is a general working flow chart of the docking method of the automatic walking device and the docking station of the present invention
  • FIG. 19 is a circuit block diagram of the first determining component shown in FIG. 17;
  • FIG. 20 is a working flow chart of a preferred embodiment in which the first determining component of FIG. 19 determines whether there is a docking station around the current position of the automatic walking device;
  • Figure 21 is a circuit block diagram of the contour acquiring unit shown in Figure 19;
  • Figure 22 is a circuit block diagram of the contour judging unit shown in Figure 19;
  • Figure 23 is a perspective view of the docking station shown in Figure 1;
  • Figure 24 is a side view of the docking station shown in Figure 23;
  • Figure 25 is a front elevational view of the docking station shown in Figure 23;
  • Figure 26 is a circuit block diagram of the second determining component of Figure 17;
  • Figure 27 is a flow chart showing the operation of the first preferred embodiment shown in Figure 26 for determining whether the automatic traveling device and the docking station are facing each other;
  • Figure 28 is a flow chart showing the operation of the second preferred embodiment shown in Figure 26 for determining whether the automatic walking device and the docking station are facing each other;
  • Figure 29 is a flow chart showing the operation of the third preferred embodiment shown in Figure 26 for determining whether the automatic traveling device and the docking station are facing each other.
  • 11, housing; 15, image collection device; 16, ultrasonic detection device; 17, walking module; 19, working module; 33, energy module;
  • 3153b, gradient difference processing circuit; 3155, contour judging unit; 3155a, feature quantity acquiring circuit;
  • FIG. 1 shows an automatic working system according to an embodiment of the present invention.
  • the automatic working system is set on the ground or other surface.
  • the ground is divided into a work area 5 and a non-work area 7, and a part of the non-work area 7 surrounded by the work area 5 forms an island 71, and a boundary line between the work area 5 and the non-work area 7 forms a boundary 6.
  • the work area 5 and the non-work area 7 are visually different.
  • the automated working system includes an autonomous walking device 1 and a docking station 4.
  • the automatic walking device 1 can be an automatic vacuum cleaner, an automatic lawn mower, an automatic trimmer, and the like.
  • the automatic traveling device 1 is an automatic lawn mower, and the docking station 4 is disposed on the peripheral boundary 6 of the work area.
  • the automatic traveling apparatus 1 has a casing 11 and an image collecting device 15 mounted on the casing 11.
  • Image collection device 15 captures an image of the area in front of the autonomous walking device 1 for identifying the work area 5 and the non-work area 7.
  • the automatic walking device 1 further includes a main control module 31, a walking module 17, a working module 19, and an energy module 33.
  • the main control module 31 is connected to the walking module 17, the working module 19, the energy module 33, and the image collecting device 15.
  • the work module 19 is used to perform a specific work.
  • the working module 19 is specifically a cutting module, and includes a cutting member (not shown) for mowing and a cutting motor (not shown) for driving the cutting member.
  • the energy module 33 is used to energize the operation of the autonomous walking device 1.
  • the energy source of the energy module 33 may be gasoline, a battery pack, or the like.
  • the energy module 33 includes a rechargeable battery pack disposed within the housing 11. During operation, the battery pack releases electrical energy to keep the automatic walking device 1 working. When not in use, the battery can be connected to an external power source to recharge. In a more user-friendly design, when the battery charge is detected to be insufficient, the automatic walking device 1 seeks out the docking station 4 to replenish its energy.
  • the walking module 17 includes a wheel set 13 and a travel motor that drives the wheel set 13.
  • the wheel set 13 can have a variety of setting methods.
  • the wheel set 13 includes a drive wheel driven by a travel motor and an auxiliary wheel 133 of the auxiliary support housing 11, and the number of drive wheels may be one, two or more.
  • the moving direction of the automatic traveling device 1 is the front side, the side opposite to the front side is the rear side, and the two sides adjacent to the front and rear sides are the left and right sides, respectively.
  • the number of driving wheels of the autonomous traveling device 1 is two, which are the left wheel 131 on the left side and the right wheel 132 on the right side, respectively.
  • the left wheel 131 and the right wheel 132 are symmetrically arranged with respect to the center axis of the automatic traveling device 1.
  • the left wheel 131 and the right wheel 132 are preferably located at the rear of the housing 11, and the auxiliary wheel 133 at the front, although other arrangements may be adopted in other embodiments.
  • the left wheel 131 and the right wheel 132 are each coupled to a drive motor to effect differential output to control steering.
  • the drive motor can be directly coupled to the drive wheel, but a transmission can also be provided between the drive motor and the drive wheel, such as a planetary gear train as is common in the art.
  • in other embodiments, two drive wheels may be provided with a single drive motor.
  • in that case, the drive motor drives the left wheel 131 through a first transmission and the right wheel 132 through a second transmission; that is, the same motor drives the left wheel 131 and the right wheel 132 through different transmissions.
  • the image collecting device 15 is mounted at a position on the front portion of the casing 11, preferably centered, and collects an image of a region in front of the casing 11, the front region including at least a target area of the front ground.
  • the viewing range of the image collecting device 15 is a fixed area, such as a fixed viewing angle range of 90 degrees to 120 degrees.
  • the framing range may also be adjustable, with a range of angles within the total viewing angle selected as the actual framing range; for example, the middle 90 degrees of a 120-degree viewing angle may be selected as the actual framing range.
  • the framing range of the image collecting device 15 includes a target area, which is the rectangular DCIJ area in Fig. 4; the DCIJ area is located on the ground directly in front of the automatic walking device 1 and is spaced a small distance from it, forming a blind spot d.
  • the central axis of the DCIJ area coincides with the central axis of the housing 11 of the automatic walking device 1, and the width of the DCIJ area is slightly greater than the width of the automatic walking device 1. This ensures that the automatic walking device 1 can collect image information of the ground a short distance ahead of it, for the main control module 31 to judge its attributes.
  • the entire viewing range of the image collecting device 15 may be larger than the DCIJ region, for example, including the region above the ground.
  • the main control module 31 extracts, from the complete image collected by the image collecting device 15, the predetermined image block corresponding to the DCIJ area for ground attribute analysis; alternatively, the entire viewing range of the image collecting device 15 may be exactly equal to the DCIJ area, in which case the complete image collected by the image collecting device 15 serves as the predetermined image block for ground attribute analysis.
  • the predetermined image block is divided into three sub-image blocks of a middle portion, a left portion, and a right portion, which respectively correspond to sub-regions in the target region.
  • the middle portion corresponds to the intermediate area a, located directly in front of the automatic walking device 1 and equal to it in width;
  • the left portion corresponds to the left area b, in front of the automatic walking device 1 on the left side of the intermediate area a;
  • the right portion corresponds to the right area c, in front of the automatic walking device 1 on the right side of the intermediate area a.
  • the automatic walking device 1 further includes an ultrasonic detecting device 16 for detecting whether an obstacle or a charging station is present in front of the autonomous traveling device 1.
  • the main control module 31 determines the attributes of the respective parts of the framing area by analyzing various pieces of information in the image captured by the image collecting device 15, such as whether each part belongs to the working area or the non-working area, or whether it belongs to an area already worked or an area still to be worked. Specifically, in this embodiment, the main control module 31 determines whether the position corresponding to each part is grass, and hence working area, by analyzing the color information and texture information of each part of the image. As a working area, grass is green in color and its texture is a naturally irregular pattern. As non-working areas, other ground surfaces such as soil or cement are usually not green, and even where the color is green it usually belongs to an artificially processed item.
  • when the main control module 31 recognizes that the color of a part is green and its texture is irregular, it judges that the part is grass; if the color is not green or the texture is regular, it judges that the part is non-grass.
  • after judging the attributes of the respective parts, the main control module 31 also controls the traveling direction of the automatic walking device 1 so that the automatic walking device 1 always stays within the working area.
  • the main control module 31 includes a sub-area dividing unit 311, a color extracting unit 312, a proportion calculating unit 313, a ratio comparing unit 314, a texture extracting unit 315, a texture comparing unit 316, a work area identifying unit 317, and a storage unit 318.
  • the sub-area dividing unit 311 divides the image into several sub-image blocks, respectively corresponding to several sub-areas of the target area.
  • in this embodiment, the several sub-image blocks are the three sub-image blocks of the middle portion, the left portion, and the right portion, respectively corresponding to the intermediate area a, the left area b, and the right area c of the target area.
  • the color extracting unit 312 extracts the colors of the respective pixels of at least one sub-image block and determines whether each pixel is the predetermined color.
  • the extracted pixels may be all of the pixels in the sub-image block, or pixels sampled at intervals of one or more pixels within the sub-image block.
  • in this embodiment, the color extracting unit 312 extracts the colors of the respective pixels of the middle, left, and right portions; specifically, the color extracting unit 312 extracts the three primary color (RGB) components of each pixel; the storage unit 318 stores the numerical ranges of the three primary color components of the predetermined color, and the color extracting unit 312 compares the three primary color components of a pixel with those numerical ranges; if the three primary color components of a pixel respectively fall within the numerical ranges of the three primary color components of the predetermined color, the color extracting unit 312 determines that the color of the pixel is the predetermined color.
  • in another embodiment, the storage unit 318 stores a preset hue value (Hue) range of the predetermined color;
  • after extracting the three primary color components of a pixel, the color extracting unit 312 converts the obtained RGB components into HSV (Hue, Saturation, Value) and determines whether the hue value falls within the preset hue range; if it does, the color of the pixel is the predetermined color.
  • the predetermined color is green.
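The HSV variant of the color test might be sketched as below; the hue window for green is an assumed value.

```python
import colorsys

PRESET_HUE_RANGE = (0.2, 0.45)  # assumed hue window for green (0-1 scale)

def pixel_is_predetermined_color(r: int, g: int, b: int) -> bool:
    """Convert the extracted RGB components to HSV and test the hue value."""
    hue, _saturation, _value = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return PRESET_HUE_RANGE[0] <= hue <= PRESET_HUE_RANGE[1]
```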
  • the proportion calculating unit 313 calculates the proportion of pixels of the predetermined color in one sub-image block (hereinafter, the proportion).
  • specifically, the proportion calculating unit 313 divides the number of pixels of the predetermined color in the sub-image block by the total number of pixels to obtain the proportion of pixels of the predetermined color in the sub-image block.
  • the storage unit 318 stores the first preset value, and the ratio comparing unit 314 compares the proportion of the predetermined color in the sub-image block with the first preset value to determine which is larger.
  • the texture extracting unit 315 extracts the texture feature value of the sub-image block.
  • the dispersion of at least one parameter over all pixels of a sub-image block reflects the degree of difference between the values of that parameter. If the target area were a uniform green surface, the dispersion of a parameter in the image would be small, even zero. Because the texture of grass is irregular, the dispersion of the gradient differences of a parameter over all pixels of a sub-image block is greater than or equal to a preset dispersion, reflecting the irregularity of the texture of the sub-image block. Therefore, in this embodiment, the texture feature value is a parameter dispersion, such as color dispersion, grayscale dispersion, or brightness dispersion.
  • the texture comparing unit 316 compares the texture feature value of the sub-image block with the second preset value to determine whether the texture feature value reaches the second preset value.
  • in this embodiment, the second preset value is a preset dispersion.
  • the work area identifying unit 317 determines that the sub-area corresponding to the sub-image block is the working area when the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value.
• The main control module 31 may also perform texture analysis first and color recognition afterwards; as long as the proportion of the predetermined color in a sub-image block reaches the first preset value and the texture feature value reaches the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the working area 5.
  • the above method of distinguishing between the working area 5 and the non-working area 7 is merely exemplary.
  • the main control module 31 can also process the image using other algorithms to distinguish the working area 5 from the non-working area 7.
• For example, the predetermined block may be divided into more sub-areas to improve the accuracy of position recognition, or the shape of the predetermined block may be changed, such as to a fan shape covering a wider field of view, and the like.
  • the color dispersion is taken as an example to illustrate the specific process of texture analysis.
• The storage unit 318 stores a preset dispersion and a preset difference value.
• After the color extracting unit 312 determines whether each pixel is of the predetermined color, the texture extracting unit 315 marks all pixels of the predetermined color as 1 and pixels of other colors as 0. The texture extracting unit 315 then calculates the gradient difference value of the pixel values of each pair of adjacent pixels and determines whether the gradient difference value is greater than or equal to a preset difference value, such as 1; the texture extracting unit 315 calculates the dispersion of all gradient difference values in the sub-area that are greater than or equal to the preset difference value. Specifically, the dispersion can be calculated by means of range, mean absolute deviation, or standard deviation.
• Alternatively, the texture extracting unit 315 calculates the gradient difference value of the hue values of each pair of adjacent pixels and determines whether the gradient difference value is greater than or equal to the preset difference value; the texture extracting unit 315 then calculates the dispersion of all gradient difference values in the sub-area that are greater than or equal to the preset difference value, again by range, mean absolute deviation, or standard deviation.
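• A minimal sketch of the hue-value variant, using horizontally adjacent pixels and standard deviation as the dispersion (range or mean absolute deviation would slot in the same way); the preset difference value is an assumed placeholder:

```python
import numpy as np

def hue_dispersion(hue: np.ndarray, preset_diff: float = 0.02) -> float:
    """hue: (H, W) array of per-pixel hue values in [0, 1].

    Keeps the gradient differences between adjacent pixels that reach the
    preset difference value and returns their standard deviation.
    """
    diffs = np.abs(np.diff(hue, axis=1))      # adjacent-pixel differences
    kept = diffs[diffs >= preset_diff]        # only "texture" transitions
    return float(kept.std()) if kept.size else 0.0
```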
• The texture comparison unit 316 compares the dispersion with the preset dispersion to determine whether the dispersion reaches the preset dispersion.
• Likewise, the main control module 31 may perform texture analysis first and color recognition afterwards; as long as the proportion of the predetermined color in a sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the working area 5.
• The above distinction between the working area 5 and the non-working area 7 is merely exemplary; under a similar idea, the main control module 31 can also process the image using other algorithms to distinguish the working area 5 from the non-working area 7.
• For example, the predetermined block may be divided into more sub-areas to improve the accuracy of position recognition, or the shape of the predetermined block may be changed, such as to a fan shape covering a wider field of view, and the like.
• The main control module 31 also includes a steering control unit 319.
• When the intermediate area a is determined to be a working area, the steering control unit 319 maintains the traveling direction of the automatic walking device 1; when the intermediate area a is determined to be a non-working area, the steering control unit 319 changes the traveling direction of the automatic walking device 1 until the intermediate area a is judged to be a working area. This ensures that the automatic walking device 1 walks only within the working area 5 and does not run out of the working area 5.
• Specifically, the steering control unit 319 controls the automatic walking device 1 to turn left or right at random until the intermediate area a is judged to be a working area.
• The steering control unit 319 may further adjust the traveling direction of the automatic walking device 1 according to the trend of change of the green ratio, or of the green dispersion, in the intermediate area a during turning. If, while the automatic walking device 1 turns to the right, the green ratio in the intermediate area a becomes larger or the green dispersion becomes larger, the steering control unit 319 controls the automatic walking device 1 to continue turning to the right; conversely, if, while the automatic walking device 1 turns to the right, the green ratio in the intermediate area a becomes smaller or the green dispersion becomes smaller, the steering control unit 319 controls the automatic walking device 1 to stop turning right and turn left instead.
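• The trend rule can be captured in a few lines; this is a hypothetical controller sketch, with 'left'/'right' commands standing in for the walking module's actual interface:

```python
def adjust_turn(direction: str, prev_ratio: float, cur_ratio: float) -> str:
    """Keep turning while the green ratio (or dispersion) in the
    intermediate area a grows; reverse the turn when it shrinks."""
    if cur_ratio >= prev_ratio:
        return direction                                  # trend improving
    return "left" if direction == "right" else "right"    # trend worsening
```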
  • the present invention further provides a working area determining method for the automatic walking device 1.
  • the first preferred embodiment of the working area determining method of the present invention comprises the following steps:
  • Step S101 The image collecting device 15 captures an image of a target area in front of the automatic traveling device 1.
• Step S102 The main control module 31 divides the image captured by the image collecting device 15 into several sub-image blocks. In this embodiment, the several sub-image blocks are three sub-image blocks of a middle portion, a left portion, and a right portion, corresponding respectively to the intermediate region a, the left region b, and the right region c of the target region.
  • Step S103 The main control module 31 extracts colors of respective pixels of at least one sub-image block.
  • the main control module 31 extracts the three primary color (RGB) components of the respective pixels of each sub-image block.
  • Step S104 The main control module 31 recognizes whether the color of each pixel of the sub-image block is a predetermined color.
  • Step S105 The main control module 31 calculates the proportion of the predetermined color in the sub-image block.
• In this embodiment, the predetermined color is green.
• The main control module 31 stores the color components of the predetermined color, in particular the numerical ranges of the three primary color components. If the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the color extracting unit 312 judges that the color of the pixel is the predetermined color.
• In one sub-image block, the ratio calculation unit 313 divides the number of green pixels by the total number of pixels in the sub-image block to obtain the proportion of green pixels in the sub-image block.
  • Step S106 The main control module 31 determines whether the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value. If yes, go to step S107, otherwise go to step S110.
  • Step S107 The main control module 31 extracts texture feature values of the sub-image block.
  • the texture feature value is a parameter dispersion degree
  • the second preset value is a preset dispersion degree.
• The main control module 31 stores a preset dispersion and a preset difference value. The texture extraction unit 315 calculates the gradient difference of at least one parameter between each pair of adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, and calculates the dispersion of all gradient differences in the sub-image block that are greater than the preset difference value.
  • Step S108 The main control module 31 determines whether the texture feature value of the sub-image block reaches or exceeds a second preset value. If yes, go to step S109, otherwise go to step S110.
• Step S109 If the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the working area 5.
• Step S110 If the proportion of the predetermined color in the sub-image block is smaller than the first preset value or the texture feature value is smaller than the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the non-working area 7.
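• Putting steps S103–S110 together for one sub-image block (reusing the green_ratio and hue_dispersion sketches above; the thresholds and the hue helper are assumptions, not the patent's stored values):

```python
import numpy as np
import matplotlib.colors as mcolors

def rgb_to_hue(block: np.ndarray) -> np.ndarray:
    """Per-pixel hue in [0, 1] for an (H, W, 3) uint8 RGB block."""
    return mcolors.rgb_to_hsv(block.astype(np.float64) / 255.0)[..., 0]

def classify_block(block: np.ndarray,
                   first_preset: float = 0.5,
                   second_preset: float = 0.05) -> str:
    """Color ratio first, then texture dispersion; thresholds assumed."""
    if green_ratio(block) < first_preset:                    # S103-S106
        return "non-working area"                            # S110
    if hue_dispersion(rgb_to_hue(block)) < second_preset:    # S107-S108
        return "non-working area"                            # S110
    return "working area"                                    # S109
```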
• The second preferred embodiment of the working area determining method of the present invention comprises the following steps: Step S201 The image collecting device 15 captures an image of the ground in front of the automatic walking device 1. Step S202 The main control module 31 divides the image captured by the image collecting device 15 into several sub-image blocks. In this embodiment, the several sub-image blocks are three sub-image blocks of a middle portion, a left portion, and a right portion, corresponding respectively to the intermediate region a, the left region b, and the right region c.
  • Step S203 The main control module 31 extracts texture feature values of each sub-image block.
  • the texture feature value is a parameter dispersion degree
  • the second preset value is a preset dispersion degree.
• The main control module 31 stores a preset dispersion and a preset difference value. The texture extraction unit 315 calculates the gradient difference of at least one parameter between each pair of adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, and calculates the dispersion of all gradient differences in the sub-image block that are greater than the preset difference value.
  • Step S204 The main control module 31 determines whether the texture feature value of the sub-image block reaches or exceeds a second preset value. If yes, go to step S205, otherwise go to step S210.
  • Step S205 The main control module 31 extracts colors of respective pixels of at least one sub-image block.
  • the main control module 31 extracts the three primary color (RGB) components of the respective pixels of each sub-image block.
  • Step S206 The main control module 31 recognizes whether the color of each pixel of the sub-image block is a predetermined color.
  • Step S207 The main control module 31 calculates the proportion of the predetermined color in the sub-image block.
  • the predetermined color is green
• The main control module 31 stores the color components of the predetermined color, in particular the numerical ranges of the three primary color components. If the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the color extracting unit 312 determines that the color of the pixel is the predetermined color. In one sub-image block, the ratio calculation unit 313 divides the number of green pixels by the total number of pixels in the sub-image block to obtain the proportion of green pixels in the sub-image block.
  • Step S208 The main control module 31 determines whether the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value. If yes, go to step S209, otherwise go to step S210.
• Step S209 If the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the working area 5.
• Step S210 If the proportion of the predetermined color in the sub-image block is smaller than the first preset value or the texture feature value is smaller than the second preset value, the main control module 31 identifies the sub-region corresponding to the sub-image block as the non-working area 7.
  • the work area judging method in this embodiment controls the walking direction of the autonomous walking apparatus 1 after determining whether at least one sub-area is a work area.
• When the intermediate area a is determined to be a working area, the main control module 31 controls the automatic walking device 1 to maintain its walking direction; when the intermediate area a is determined to be a non-working area, the main control module 31 changes the traveling direction of the automatic walking device 1 until the intermediate area a is judged to be a working area. This ensures that the automatic walking device 1 walks only within the working area 5 and does not run out of the working area 5.
• Specifically, the main control module 31 controls the automatic walking device 1 to turn left or right at random until the intermediate area a is judged to be a working area.
• The main control module 31 may further adjust the traveling direction of the automatic walking device 1 according to the trend of change of the green ratio, or of the green dispersion, in the intermediate area a during turning.
• If the green ratio or green dispersion in the intermediate area a increases while turning right, the main control module 31 controls the automatic walking device 1 to continue turning right; if it decreases, the main control module 31 controls the automatic walking device 1 to stop turning right and turn left instead.
• In the working area judging method of the present invention, the image collecting device 15 photographs the image in front of the automatic walking device 1, and the main control module 31 combines color recognition and texture analysis to determine whether at least part of the target area is a working area. This makes setup of the working system simple and user-friendly, and makes working area identification flexible and convenient.
• The automatic walking device 1 of the present invention can also find the boundary 6 based on the distribution of the working area 5 and the non-working area 7 in the predetermined area, and return to the docking station 4 along the boundary 6.
  • the invention therefore also provides a method of returning an automated walking device to a docking station.
• The main control module 31 analyzes the predetermined image block corresponding to the predetermined area in the image to monitor whether a boundary appears in the predetermined area.
• The main control module 31 divides the predetermined image block into several sub-image blocks corresponding to the several sub-regions of the predetermined area, and analyzes each sub-image block to identify the corresponding sub-area as a working area or a non-working area; if a sub-area is identified as a non-working area, the main control module determines that the boundary is located in that sub-area.
• To do so, the main control module 31 needs to further determine the relative positional relationship between itself and the boundary 6. If a certain sub-area is determined to be a non-working area 7 while an adjacent sub-area is a working area 5, it is determined that that sub-area contains the boundary 6; and since the actual extent of each sub-area is limited, the specific location of the boundary 6 is thereby determined.
  • the manner of identifying the location of the boundary 6 is merely exemplary.
• The main control module 31 can also use other algorithms to process the image to identify the boundary, for example, dividing the predetermined block into more sub-regions to improve the accuracy of boundary 6 position recognition, changing the shape of the predetermined block, such as to a fan shape covering a wider field of view, or changing the size of the predetermined block to find a farther boundary, and the like.
• After the main control module 31 recognizes the position of the boundary 6, it controls the walking module 17 to operate so as to place the automatic walking device 1 at the boundary position. If the actual coverage of the predetermined block is large, this step may take considerable time and movement to complete; for example, after a boundary 6 is found on the outermost side of a larger predetermined block divided into more sub-areas, the main control module 31 drives the automatic walking device to walk until the intermediate area, or the several adjacent areas closest to the intermediate area a, are non-working areas. If, as in this embodiment, the predetermined area is small and divided into only three sub-areas, then the boundary 6, when found, is already very close to the automatic walking device 1, and walking to the boundary position merely involves controlling the automatic walking device to maintain its current state and avoid moving away from the boundary 6.
• The main control module 31 then continues to control the operation of the walking module 17 to cause the automatic walking device 1 to travel along the boundary 6.
• When traveling along the boundary 6, the automatic walking device 1 needs to keep its orientation aligned with the boundary 6, so the main control module 31 controls the walking module 17 to keep the housing 11 within the working area 5 with the boundary 6 on a specific side of the housing 11.
• The main control module 31 positions the sub-area where the boundary 6 is located on one side of the automatic walking device, rather than in front of it, to achieve the orientation adjustment. Specifically, the main control module operates so that the intermediate area a is a working area and the left side area b or the right side area c is a non-working area, so that the boundary 6 is located in the left side area b or the right side area c, but not in the intermediate area a.
• When adjusting the orientation, the main control module 31 can allow the boundary 6 to be on either side of the automatic walking device, or require the boundary 6 to be on a specific side of the automatic walking device.
• In this embodiment, when the orientation is adjusted, the boundary 6 is located on a specific side of the automatic walking device 1; that is, the intermediate area a is maintained as the working area 5, a specific one of the left area b and the right area c is the non-working area 7, and the other is the working area 5.
• That is, the main control module 31 controls the walking module 17 to operate so that the intermediate area corresponding to the middle portion is identified as a working area, and the left side area or the right side area corresponding to the left portion or the right portion is identified as a non-working area with the boundary located therein.
• The main control module 31 keeps the orientation and traveling direction of the automatic walking device 1 aligned with the boundary 6: it controls the walking module 17 to operate so that the intermediate area corresponding to the middle portion is identified as a working area, and the left side area or the right side area corresponding to the left portion or the right portion is identified as a non-working area with the boundary located therein. In this way, the sub-area where the boundary 6 is located is always on one side of the automatic walking device 1; that is, the aforementioned intermediate area a is the working area 5, one of the left area b and the right area c is the non-working area 7, and the other is the working area 5.
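• One way to express this orientation rule is a small decision function over the three sub-area labels; this is a hypothetical sketch, with the command strings standing in for the walking module's actual interface:

```python
def boundary_steering(middle: str, left: str, right: str,
                      boundary_side: str = "right") -> str:
    """Labels are 'work' or 'non-work'. Keeps area a working and keeps
    the boundary sub-area on the chosen specific side."""
    if middle == "non-work":
        # boundary drifted ahead: turn back into the working area
        return "turn_left" if boundary_side == "right" else "turn_right"
    side = right if boundary_side == "right" else left
    if side == "work":
        # boundary drifted away: steer back toward the chosen side
        return "turn_right" if boundary_side == "right" else "turn_left"
    return "straight"   # middle = work, chosen side = non-work: on track
```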
• The main control module 31 further includes a boundary identifying unit 321 and a docking station identifying unit 323, which are introduced in turn below.
• The boundary identifying unit 321 judges whether the boundary 6 currently being followed is correct, that is, whether it leads to the docking station 4.
• An island 71 surrounded by the working area 5 also has a boundary 6.
• If the automatic walking device 1 finds the boundary 6 of the island 71 when searching for the boundary 6, it may circle the island 71 continuously, unable to leave, and never return to the docking station 4.
• The boundary identifying unit 321 therefore determines whether the boundary 6 currently followed by the automatic walking device 1 is the boundary 6 of the working area 5. If the determination is yes, the main control module 31 controls the walking module 17 so that the automatic walking device 1 continues to travel along the boundary 6; if the determination is no, the main control module 31 controls the walking module 17 so that the automatic walking device 1 leaves the boundary 6 currently followed and seeks another boundary 6.
• The boundary identifying unit 321 judges whether the current boundary is correct by comparing the actual traveling direction of the automatic walking device 1 with the theoretical traveling direction when walking along the correct boundary.
• The automatic walking device 1 always keeps the boundary 6 on a specific side of itself when returning along the boundary 6. For example, if the automatic walking device 1 keeps the boundary 6 on its right side, then on the peripheral boundary 6 of the working area 5 it walks inside the boundary 6 and its traveling direction is counterclockwise, whereas on the peripheral boundary of the island 71 it walks outside the boundary 6 and its traveling direction is clockwise.
• The preset standard result is set according to this correspondence: if the specific side is the left side, the theoretical traveling direction is clockwise; if the specific side is the right side, the theoretical traveling direction is counterclockwise.
• The boundary identifying unit 321 first determines the walking direction of the automatic walking device 1 within a preset time or a preset distance, expressed as clockwise or counterclockwise. The walking direction is obtained by calculating the cumulative deflection amount of the automatic walking device 1 within the preset time or preset distance and comparing the cumulative deflection amount with a preset value; the cumulative deflection amount is the cumulative difference between the distances traveled by the left wheel 131 and the right wheel 132 of the automatic walking device 1, or the cumulative deflection angle of the automatic walking device 1.
• The boundary identifying unit 321 then compares the result of this determination with the preset standard result in the storage unit 318, that is, the theoretical traveling direction when walking along the correct boundary 6. If the actual traveling direction is consistent with the theoretical traveling direction, the boundary identifying unit 321 determines that the boundary 6 currently followed is the correct boundary 6 leading to the docking station 4; if they are inconsistent, the boundary identifying unit 321 determines that the boundary currently followed is incorrect and does not lead to the docking station 4.
• The docking station identifying unit 323 identifies whether the automatic walking device 1 has approached or arrived at the docking station 4; when it recognizes the docking station 4, the main control module 31 controls the walking module to cause the automatic walking device 1 to walk toward the docking station and dock.
• The docking station identifying unit 323 can be implemented in various ways. It can monitor whether the docking station 4 appears in the image collected by the image collecting device 15; if the docking station 4 is detected, the main control module 31 controls the walking module 17 to drive the automatic walking device 1 to the docking station 4. It can also use an electromagnetic or other type of proximity sensor that sends a prompt signal to the automatic walking device 1 when the docking station 4 and the automatic walking device 1 are close to each other; details are not repeated here.
• The automatic walking device 1 first proceeds to step S0 and keeps walking.
• During walking, step S1 is performed to monitor whether the boundary 6 appears in the image collected by the image collecting device 15.
• During this monitoring, the automatic walking device 1 keeps walking. If the main control module 31 does not find the boundary 6 in the image collected by the image collecting device 15, step S1 continues and the boundary 6 keeps being monitored; if the main control module 31 finds the boundary 6 in the image collected by the image collecting device 15, the process proceeds to step S2, in which the position is adjusted so that the automatic walking device 1 is at the boundary 6 with its orientation aligned with the boundary 6.
• As mentioned above, the automatic walking device 1 is already close to the boundary 6 when the boundary 6 is detected; at this point the workload of step S2 is small, and the device only needs to adjust its position to approach the boundary 6.
• Monitoring whether the boundary is included in the image captured by the image collecting device 15 can be achieved by the following steps:
• the predetermined image block is divided into several sub-image blocks corresponding to the several sub-regions of the predetermined region;
• each sub-image block is analyzed to identify the corresponding sub-region as one of the working area 5 or the non-working area 7.
• The process then proceeds to step S4: walking along the boundary 6.
• The specific way of walking along the boundary 6 may be to walk astride the boundary 6 or to walk on one side of the boundary 6.
  • the automatic walking device 1 walks on a specific side of the boundary 6.
• Specifically, the housing is kept within the working area and the boundary is kept on a specific side of the housing; that is, the aforementioned intermediate area a is maintained as the working area 5, a specific one of the left side area b and the right side area c is the non-working area 7, and the other is the working area 5.
• The main control module 31 controls the operation of the walking module 17 so that the intermediate area corresponding to the middle portion is identified as a working area, and the left or right area corresponding to the left or right portion is identified as a non-working area with the boundary located therein.
• The automatic walking device 1 adjusts its orientation so that the boundary 6 lies entirely on the specific side, that is, in the left side area b or the right side area c, and then walks in that direction.
• The main control module 31 keeps the orientation and traveling direction of the automatic walking device 1 aligned with the boundary 6: it controls the walking module 17 to operate so that the intermediate area corresponding to the middle portion is identified as a working area, and the left side area or the right side area corresponding to the left portion or the right portion is identified as a non-working area with the boundary located therein. In this way, the sub-area where the boundary 6 is located is always on one side of the automatic walking device 1; that is, the aforementioned intermediate area a is the working area 5, one of the left area b and the right area c is the non-working area 7, and the other is the working area 5.
• During walking, the image collecting device 15 still collects images in real time. If the boundary 6 deviates from the left side area b or the right side area c, the orientation of the automatic walking device 1, that is, its walking direction, no longer coincides with the direction of the boundary 6, and the automatic walking device 1 adjusts its orientation again so that the boundary 6 is located in the left side area b or the right side area c. By walking in the above manner and adjusting its direction in real time, the automatic walking device 1 walks along the boundary 6. Since the docking station 4 is placed on the boundary 6 of the working area 5, the automatic walking device 1 can finally return to the docking station 4 by walking along the boundary 6 of the working area 5.
• While walking along the boundary 6, the automatic walking device 1 proceeds to step S6 to monitor whether the docking station 4 appears in the image collected by the image collecting device 15. If the main control module 31 does not find the docking station 4 in the analyzed image, it takes no action and continues walking and monitoring for the docking station 4. If the main control module 31 finds the docking station 4, the process proceeds to step S8: the main control module 31 controls the automatic walking device 1 to travel to the docking station 4, adjust its direction to face the docking station 4, dock with it, and, after docking is confirmed, perform charging and other operations.
• The main control module 31 first performs step S4, walking along the boundary 6.
• While walking along the boundary 6, step S5 is performed, in which the boundary identifying unit 321 determines the walking direction of the automatic walking device 1 within a preset time or a preset distance.
• Step S5 can be decomposed into two sub-steps: 1. calculating the cumulative deflection amount of the automatic walking device 1 within a preset time or preset distance; and 2. comparing the cumulative deflection amount with the preset value to judge the traveling direction of the automatic walking device 1.
• The cumulative deflection amount is the cumulative distance by which the automatic walking device 1 deviates from a straight line during traveling, or the cumulative deflection angle.
• The cumulative deflection amount can thus be expressed as a deviation distance or a deviation angle. For example, within a certain time or driving distance, if the automatic walking device 1 deviates 5 m to the left and then 7 m to the right, the cumulative deflection amount can be expressed as a 2 m shift to the right; likewise, if the automatic walking device 1 turns 15° clockwise and then 12° counterclockwise, the cumulative deflection amount can be expressed as a 3° clockwise deflection.
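• The two worked examples reduce to signed sums, taking left/counterclockwise as negative and right/clockwise as positive:

```python
offsets_m = [-5.0, +7.0]      # 5 m to the left, then 7 m to the right
print(sum(offsets_m))         # 2.0  -> net shift of 2 m to the right

angles_deg = [+15.0, -12.0]   # 15 deg clockwise, then 12 deg counterclockwise
print(sum(angles_deg))        # 3.0  -> net 3 deg clockwise
```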
• The boundary identifying unit 321 can calculate the cumulative deflection amount by accumulating the difference in travel distance between the left wheel 131 and the right wheel 132.
• Specifically, a speed sensor is disposed at each of the left wheel 131 and the right wheel 132. Each speed sensor transmits the collected speed information to the connected main control module 31, which calculates from the speed information the distances traveled by the left wheel 131 and the right wheel 132 during a certain time or distance, and obtains the travel distance difference of the left and right drive wheels representing the cumulative deflection amount.
  • the cumulative deflection amount may be calculated by accumulating the deflection angle of the autonomous walking apparatus 1.
• Specifically, an angle sensor is disposed in the automatic walking device 1. The angle sensor continuously detects the deflection direction and angle of the automatic walking device 1 and transmits the data to the connected main control module 31; based on these data, the boundary identifying unit 321 can calculate the cumulative deflection angle representing the cumulative deflection amount within a certain time or distance.
• After the cumulative deflection amount within the preset time or driving distance of the automatic walking device 1 is calculated, sub-step 2 is entered: the boundary identifying unit 321 compares the cumulative deflection amount with the preset value to judge the walking direction of the automatic walking device 1.
• The preset value can be set to 0, so that only the sign of the distance value or angle value needs to be judged: for example, if the distance or angle is positive, the walking direction is judged to be clockwise, and if negative, counterclockwise. However, to ensure the accuracy of the judgment, the preset value can also be set to an interval, for example (0~10) meters or (0~180°).
• When the cumulative deflection amount falls outside the interval, the walking direction is judged according to its value; when the cumulative deflection amount is within the interval, the cumulative deflection amount is recalculated.
• There are many ways to recalculate, such as restarting a cycle of the preset time or preset distance, extending the preset time or preset distance, or using a rolling value, that is, moving the starting point of the preset time or preset distance forward correspondingly as time or distance increases.
• In step S5 the walking direction of the automatic walking device 1 is finally obtained; the process then proceeds to step S7, in which the result of S5 is compared with the preset standard result. If they are consistent, it is determined that the boundary 6 currently followed leads to the docking station 4; if they are inconsistent, it is determined that the boundary 6 currently followed does not lead to the docking station 4.
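• Sub-steps 1–2 and the step S7 comparison can be sketched as follows; the deadband value is an assumed placeholder for the preset interval:

```python
def walking_direction(left_dist: float, right_dist: float,
                      deadband: float = 0.5) -> str:
    """Judge the direction from the cumulative wheel difference over one
    preset window; inside the deadband the measurement is repeated."""
    diff = left_dist - right_dist    # left wheel longer -> turning right
    if abs(diff) <= deadband:
        return "recalculate"
    return "clockwise" if diff > 0 else "counterclockwise"

def boundary_is_correct(direction: str, specific_side: str) -> bool:
    """Step S7: boundary kept on the right implies a counterclockwise
    loop around the working area, and vice versa."""
    theoretical = "counterclockwise" if specific_side == "right" else "clockwise"
    return direction == theoretical
```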
• As stated above, the boundary identifying unit 321 judges whether the current boundary is correct by comparing the actual traveling direction of the automatic walking device 1 with the theoretical traveling direction when walking along the correct boundary.
• The automatic walking device 1 always keeps the boundary 6 on a specific side of itself when returning along the boundary 6. Taking the case where the automatic walking device 1 keeps the boundary 6 on its right side as an example: on the peripheral boundary 6 of the working area 5 it walks inside the boundary 6 and its traveling direction is counterclockwise, whereas on the peripheral boundary of the island 71 it walks outside the boundary 6 and its traveling direction is clockwise.
• The preset standard result is set according to this correspondence: if the specific side is the left side, the theoretical traveling direction is clockwise; if the specific side is the right side, the theoretical traveling direction is counterclockwise.
• If the result of the judgment is that the boundary 6 currently followed leads to the docking station 4, the process returns to step S4 and the automatic walking device 1 continues to walk along the boundary 6; if the result is that the boundary 6 currently followed does not lead to the docking station 4, the process proceeds to step S9, in which the automatic walking device 1 leaves the current boundary 6 and returns to the process of finding the boundary 6.
• The automatic walking device 1 of the present invention can also determine, by means of the ultrasonic detecting device 16, whether an obstacle 73 is present in the preset area in front of it.
  • the invention also provides an obstacle detection method for an automatic walking device.
• The ultrasonic detecting device 16 is disposed on the housing 11 and mounted horizontally facing forward, for detecting whether an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.
  • the ultrasonic detecting device 16 may include a transmitter and a receiver.
• The transmitter transmits ultrasonic waves; when the waves encounter an object, an echo is generated and received by the receiver, indicating that a three-dimensional object exists in front.
  • the ultrasonic detecting device 16 may be an ultrasonic sensor having a dual function of transmitting and receiving sound waves.
• The main control module 31 includes a processing unit (not shown) and a storage unit 318.
• The processing unit receives the ground environment image information acquired by the image collecting device 15 and the environment information detected by the ultrasonic detecting device 16, processes them, compares the results with the obstacle parameters preset in the storage unit 318, and based on the comparison results controls the walking module 17 and the working module 19 to walk and work.
• During traveling, the automatic walking device 1 acquires image information of the predetermined area in front of it through the image collecting device 15 and transmits the collected image information to the processing unit; the processing unit analyzes the various information in the image, determines the attributes of the respective parts of the area, and can thereby determine whether the area in front of the automatic walking device 1 belongs to the working area or a non-working area.
• Specifically, the processing unit extracts, for each region in the image information, the three primary color (RGB) component values of each pixel in that region's image.
• The storage unit 318 stores in advance a color ratio threshold corresponding to the working area 5; the processing unit compares the calculated color ratio of each region with the pre-stored color ratio threshold to determine which regions belong to the working area 5 and which are non-working areas.
• In this embodiment, the working area is a lawn.
  • the processing unit divides the number of green pixels of each area in the image information by the total number of pixels in each area, and calculates the proportion of green pixels in each area.
• If the proportion of green pixels in one of the intermediate area a, the left area b, and the right area c is smaller than the pre-stored color ratio threshold, that area contains a non-working area of the automatic walking device 1.
• The processing unit may also extract, from each region in the image, the texture information of that region for analysis.
  • the existing gray level co-occurrence matrix analysis method or the Tamura texture feature analysis method can be used to obtain the texture features of each region of the image.
• The gray level co-occurrence matrix analysis method can extract four features of the image: energy, inertia, entropy, and correlation.
• The Tamura texture feature analysis method can extract the image's coarseness, contrast, directionality, line-likeness, regularity, and roughness.
• The storage unit 318 pre-stores the texture feature value of a predetermined texture, and the processing unit compares the texture feature value of each region in the image with the texture feature value of the predetermined texture. If the texture feature value of a region matches that of the predetermined texture, the region is determined to be a working area; if it does not match, the region is determined to be a non-working area.
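• As a toy version of such texture features, a horizontal-only gray level co-occurrence matrix yields the energy and entropy features named above (libraries such as scikit-image provide full implementations; the quantization level here is an arbitrary choice):

```python
import numpy as np

def glcm_features(gray: np.ndarray, levels: int = 8):
    """gray: (H, W) uint8 image. Returns (energy, entropy) from a
    co-occurrence matrix of horizontally adjacent pixel pairs."""
    q = (gray.astype(np.float64) / 256 * levels).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    energy = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return energy, entropy
```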
  • the autonomous walking device 1 can identify the working area and the non-working area by using color information or texture features.
• The working area and the non-working area can also be identified by combining color information and texture features; the processing unit can first recognize the color information and then combine it with recognition of the texture information for judgment.
• For the lawn serving as the working area 5, the color should be green, while a non-working area may be bare soil, cement floor, or another type of ground surface.
• The color of a non-working area usually differs from the color of the lawn. Even when the color is green, artificially processed items, such as an artificially laid floor, have a relatively regular texture, whereas grass texture shows no obvious regularity; it is therefore possible to further determine whether the target area is a working area according to the texture of the captured image.
• When the processing unit recognizes that the color within the rectangular area is green and the texture is irregular, it determines that that part is the working area 5; when the processing unit identifies a part of the rectangular area whose color is not green or whose texture is regular, there is a non-working area within the rectangular area.
• The processing unit can also perform texture analysis first and then combine it with color recognition for judgment.
• The processing unit may calculate information such as the length, width, and area of the non-working area image based on the image information; this information may be obtained by counting pixel points in the image.
• A coordinate system can also be established, and the information calculated using formulas for the perimeter and area of preset polygons.
• The above information on the non-working area can also be calculated by calculus or other methods, which are not enumerated here.
• The storage unit 318 is preset with a conversion algorithm between image size and actual size: the image size has a certain proportional relationship to the actual size, so the actual size can be calculated from the image size according to this proportional relationship, and vice versa. The processing unit calculates the size parameters of the non-working area, including its length, width, and area, from the length, width, and area of the non-working area image according to the preset conversion algorithm.
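• A minimal sketch of such a conversion, assuming a single calibration constant (meters per pixel at the target plane); the patent's stored algorithm may be more elaborate:

```python
METERS_PER_PIXEL = 0.005   # assumed calibration value

def real_size(length_px: float, width_px: float, area_px: float):
    """Map image measurements to real-world ones."""
    length_m = length_px * METERS_PER_PIXEL
    width_m = width_px * METERS_PER_PIXEL
    area_m2 = area_px * METERS_PER_PIXEL ** 2   # area scales quadratically
    return length_m, width_m, area_m2
```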
• The storage unit 318 stores preset values for the size parameters of the non-working area, including a length preset value, a width preset value, and an area preset value. When any one of the length, width, and area of the non-working area exceeds its corresponding preset value, the main control module 31 considers that the automatic walking device 1 has reached the boundary 6; when the length, width, and area of the non-working area are all smaller than their corresponding preset values, the automatic walking device 1 further performs obstacle detection by means of the ultrasonic detecting device 16.
• The preset values are the length, width, and area of the projection of the automatic walking device 1 onto the working area.
• The preset values of the size parameters of the non-working area stored in the storage unit 318 may also include only the width preset value of the non-working area.
  • the ultrasonic detecting device 16 emits an ultrasonic wave.
• The processing unit counts the time taken from the emission of the ultrasonic wave to the reception of the echo.
• The storage unit 318 stores a preset time threshold for the ultrasonic wave from emission to reception of the echo, which limits the detection range of the ultrasonic detecting device 16 to a certain area. When the time from emission to reception of the echo is greater than the preset time threshold, the echo was returned by an object beyond the preset ultrasonic detection area, which may be an object at a relatively long distance or the ground itself, and the processing unit considers such an echo invalid; when the time from emission to reception of the echo is less than the preset time threshold, the echo was returned by an object within the preset ultrasonic detection area, the processing unit considers the echo valid, and it is judged that an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.
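• The time threshold corresponds to an out-and-back flight time over the preset detection range; a sketch with an assumed range and the standard speed of sound:

```python
SPEED_OF_SOUND = 343.0    # m/s in air at about 20 degrees C
MAX_RANGE = 1.0           # assumed preset detection range, meters
TIME_THRESHOLD = 2 * MAX_RANGE / SPEED_OF_SOUND   # out-and-back time

def echo_is_obstacle(elapsed_s: float) -> bool:
    """Echoes later than the threshold come from beyond the preset
    detection area (or from the ground) and are treated as invalid."""
    return elapsed_s < TIME_THRESHOLD
```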
  • the obstacle detection method of the automatic walking device 1 provided by the present invention includes the following steps: Step S300: Acquire image information.
• The image collecting device 15 captures an image of the rectangular area in front of the automatic walking device 1 and transmits the collected image to the main control module 31 for processing.
• Step S301 Identify the colors and textures in the image information.
  • the processing unit analyzes the image captured by the image collection device 15 to identify the color and texture of each region of the image.
  • Step S302 Determine whether there is a non-working area in front.
• The processing unit compares the recognized color, texture, and other information with the values set in the storage unit 318 and determines whether a non-working area exists in the rectangular area.
• When a non-working area appears in front of the automatic walking device 1, the process proceeds to step S303; otherwise, the process returns to step S300.
  • Step S303 Identify the size of the non-working area.
  • the processing unit calculates the size of the non-working area in the rectangular area according to a preset algorithm, for example, calculating the length, width or area of the non-working area.
  • Step S304 Determine whether the size of the non-working area is smaller than a preset value.
• The processing unit compares the calculated size of the non-working area in the rectangular area with the preset non-working area size in the storage unit 318.
• When the size of the non-working area in the rectangular area is smaller than the preset value, the process proceeds to steps S305–S308, and detection is performed by the ultrasonic detecting device; when the size of the non-working area in the rectangular area is greater than the preset value, the automatic walking device 1 considers that the boundary of the working area has been reached and can perform boundary-related work, for example moving away from the boundary or walking along the boundary line, which is not described further here.
• Step S305 Send an ultrasonic wave and start timing.
• The ultrasonic detecting device 16 transmits ultrasonic waves and the processing unit starts timing; when the ultrasonic waves encounter an object, they rebound to form an echo.
  • Step S306 Receive an echo and calculate the time.
• The echo is received by the ultrasonic detecting device 16; when the echo reaches the ultrasonic detecting device 16, the processing unit calculates the time taken from the emission of the ultrasonic wave to the reception of the echo.
• Step S307 Determine whether the counted time is less than the preset value.
• The storage unit 318 stores the preset time threshold for the ultrasonic wave from emission to reception of the echo. When the time from emission to reception of the echo is greater than the preset time threshold, the processing unit considers the echo invalid and returns to step S300; when the time from emission to reception of the echo is less than the preset time threshold, the processing unit determines that an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.
  • Step S308 There is an obstacle, and avoidance is performed.
  • the automatic walking device 1 performs the avoidance when it is determined that there is an obstacle in front.
• If neither the left side area b nor the right side area c contains a non-working area, the automatic walking device 1 can bypass the obstacle 73 from either side; otherwise, the automatic walking device 1 bypasses the obstacle 73 from whichever of the left side area b and the right side area c contains no non-working area.
• In other words, the automatic walking device 1 always bypasses the obstacle 73 from the side of the rectangular area where no non-working area appears.
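• The side-selection rule reduces to a small decision function; a hypothetical sketch over the two side-area classifications:

```python
def bypass_side(left_is_working: bool, right_is_working: bool) -> str:
    """Avoid any side whose sub-area was identified as non-working."""
    if left_is_working and right_is_working:
        return "left or right"    # either side is safe
    if left_is_working:
        return "left"
    if right_is_working:
        return "right"
    return "none"                 # both sides blocked: treat as boundary
```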
• In this embodiment, the non-working area is first detected using image information, and the preset ultrasonic detection area may then be examined by the ultrasonic detecting device; only when the width of the non-working area in the rectangular area is smaller than the preset value is the ultrasonic detection step performed. Those skilled in the art will appreciate that the width of the non-working area in the rectangular area being smaller than the preset width is not a necessary condition for performing ultrasonic detection; ultrasonic detection may also run throughout, which likewise prevents the automatic walking device 1 from colliding with the obstacle 73 during operation and achieves high recognition accuracy.
• The present invention provides an automatic walking device and an obstacle detection method thereof, whereby the automatic walking device can recognize obstacles in the working area using the image collecting device and the ultrasonic detecting device without having to collide with an obstacle to detect it; the automatic walking device is therefore not easily damaged by collisions, and its obstacle recognition accuracy is high.
  • the invention also provides an automatic working system capable of automatically docking with the docking station 4, and a docking method for docking the automatic walking device with the docking station.
  • the automatic walking device 1 can automatically return to the docking station 4 and automatically dock with the docking station 4.
• The manner in which the automatic walking device 1 returns to the docking station 4 may be based on image technology, on the boundary, on GPS, on a guide line, and the like.
  • the autonomous walking device 1 acquires the environmental image information around the current position through the image collecting device, and monitors whether or not the boundary 6 appears in the environmental image information.
• When the boundary 6 appears, the automatic walking device 1 is driven to walk on a specific side of the boundary 6.
• During walking, the image collecting device still collects the image information around the current position of the automatic walking device 1 in real time, and the walking angle is adjusted whenever the traveling direction of the automatic walking device 1 is found to deviate from the boundary 6, thereby ensuring that the automatic walking device 1 always walks along the boundary 6. Since the docking station 4 is placed on the boundary 6 of the working area 5, the automatic walking device 1 can finally return to the vicinity of the docking station 4 by walking along the boundary 6.
• The automatic walking device 1 includes an image collecting device 15, a main control module 31, and a walking module 17.
• The image collecting device 15 is disposed on the outer surface of the automatic walking device 1, collects environmental image information around the current position of the automatic walking device 1, and transmits the collected environmental image information to the main control module 31.
• The image collecting device 15 can collect the image information of the docking station 4, so that the environmental image information contains the image information of the docking station 4.
• The main control module 31 receives the environmental image information transmitted by the image collecting device 15, and includes a first determining component 3150, a second determining component 3170, a signal transmitting unit 3190, and a storage unit 318.
• The storage unit 318 stores the preset parameters.
• The first determining component 3150 determines, according to the environmental image information and the preset parameters, whether there is a docking station 4 around the current position of the automatic walking device 1. The second determining component 3170 determines, according to the environmental image information and the preset parameters, whether the automatic walking device 1 and the docking station 4 are facing each other. The signal transmitting unit 3190 transmits a corresponding control signal according to the determination results of the first determining component 3150 and the second determining component 3170.
• The walking module 17 receives the control signal and drives the automatic walking device 1 to walk in accordance with the control signal.
• When the first determining component 3150 determines that there is no docking station 4 around the current position, the signal transmitting unit 3190 sends a control signal to the walking module 17, so that the walking module 17 drives the automatic walking device 1 to rotate by the preset angle and then continue walking.
• Likewise, when the second determining component 3170 determines that the automatic walking device 1 and the docking station 4 are not facing each other, the signal transmitting unit 3190 sends a control signal to the walking module 17, driving the walking module 17 to drive the automatic walking device 1 to rotate by the preset angle and continue walking.
• When the automatic walking device 1 and the docking station 4 are facing each other, the signal transmitting unit 3190 sends a control signal to the walking module 17, driving the walking module 17 to drive the automatic walking device 1 to continue walking at the current angle, thereby realizing automatic docking of the automatic walking device 1 with the docking station 4.
• The automatic walking device 1 first proceeds to step S500 to perform initialization; after step S500, the process proceeds to step S502, in which the image collecting device 15 is activated.
• In step S504, the image collecting device 15 starts collecting the environmental image information around the current position of the automatic walking device 1 and transmits the collected environmental image information to the first determining component 3150 and the second determining component 3170 of the main control module 31.
• The image collecting device 15 and the main control module 31 can perform signal transmission through electrical contact or through non-electrical contact, and the image collecting device 15 may be provided on the automatic walking device 1 or at a location other than on the automatic walking device 1.
• After step S504, the process proceeds to step S506, in which the first determining component 3150 of the main control module 31 determines, according to the received environmental image information and the preset parameters stored in the storage unit 318, whether there is a docking station 4 around the current position of the automatic walking device 1. When the result of the determination is yes, the process proceeds to step S508; otherwise, when the result of the determination is no, the process proceeds to step S510.
• In step S508, the second determining component 3170 of the main control module 31 determines, according to the received environmental image information and the preset parameters stored in the storage unit 318, whether the automatic walking device 1 and the docking station 4 are facing each other. When the result of the determination is yes, the process proceeds to step S512; otherwise, when the result of the determination is no, the process proceeds to step S510.
• In step S510, the signal sending unit 3190 of the main control module 31 receives the signals sent by the first determining component 3150 and the second determining component 3170 and sends corresponding control signals according to their judgment results, controlling the walking module 17 to drive the automatic walking device 1 to rotate by a preset angle, so that the image collecting device 15 can collect the environmental image information around the current position of the automatic walking device 1 from a new angle, and the main control module 31 can judge, based on the new environmental image information, whether there is a docking station 4 around the current position of the automatic walking device 1.
• In step S512, the signal sending unit 3190 of the main control module 31 receives the signals sent by the first determining component 3150 and the second determining component 3170 and sends corresponding control signals according to their judgment results, controlling the walking module 17 to drive the automatic walking device 1 to maintain its current traveling direction toward the docking station 4, that is, to keep approaching the docking station 4 while facing it, thereby realizing automatic docking with the docking station 4.
• As stated above, the first determining component 3150 can determine, according to the environmental image information and the preset parameters stored in the storage unit 318, whether there is a docking station 4 around the current position of the automatic walking device 1. A preferred embodiment for determining whether there is a docking station 4 around the current position of the automatic walking device 1 is described in detail below with reference to the accompanying figures.
• The first determining component 3150 first makes a preliminary determination of whether there is a docking station 4 around the automatic walking device 1 by identifying whether the environmental image information includes a preset color, then extracts the contour of the sub-region having the preset color and matches that contour against a preset contour to determine accurately whether a docking station 4 exists around the automatic walking device 1.
  • the first determining component 3150 includes a color recognizing unit 3151, a region extracting unit 3152, a contour acquiring unit 3153, and a contour determining unit 3155.
• The color recognizing unit 3151 identifies whether the environment image information collected by the image collecting device 15 contains the preset color; when it does, the color recognizing unit 3151 outputs a corresponding electrical signal to the region extracting unit 3152. After receiving this electrical signal, the region extracting unit 3152 extracts the sub-region having the preset color from the environment image information and transmits the extracted image information to the contour acquiring unit 3153.
• The contour acquiring unit 3153 acquires the contour of the sub-region based on the image information transmitted by the region extracting unit 3152, and transfers the contour information of the sub-region to the contour determining unit 3155.
• The contour determining unit 3155 compares the contour of the sub-region with the preset contour and determines whether they match. When the contour of the sub-region matches the preset contour, the first determining component 3150 determines that there is a docking station 4 around the current position of the automatic walking device 1.
• As shown in step S520, the color recognizing unit 3151 recognizes the color values contained in the environment image information. Typically, the environment image information is composed of a number of point information items, and the color value contained in each point information item can be identified by identifying its RGB values; the color value can also be identified by identifying the HSV value of each point information item.
• After step S520, the process proceeds to step S522: the color recognizing unit 3151 determines whether the environment image information contains the preset color. If the determination result is yes, the process proceeds to step S524; if it is no, the process proceeds to step S540. The preset color is the color of the docking station 4 and may be represented in RGB or in HSV, depending on the form in which the color recognizing unit 3151 recognizes the color values of the environment image information. Typically, the color recognizing unit 3151 can determine whether the environment image information contains the preset color by comparing the color of each point information item with the preset color one by one, thereby making a preliminary determination of whether there is a docking station 4 around the current position of the automatic walking device 1.
• In step S524, the region extracting unit 3152 extracts the sub-region having the preset color from the environment image information. Typically, the sub-region having the preset color can be extracted by color space distance and similarity calculation. Specifically, since the collected image information is generally in RGB format, the RGB color model image is first converted into an HSV color model; image color segmentation is then performed using color space distance and similarity calculation, with the sub-region of the preset color in the image set to foreground white and the remaining regions set to background black. Finally, the numbers of foreground pixels of the color-segmented image are summed by row or column, and a horizontal or vertical histogram projection determines the coordinate values of the desired color region, so that the sub-region having the preset color is extracted from the original environment image information.
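A minimal sketch of this segmentation-and-projection step is given below, assuming OpenCV is available; the HSV bounds PRESET_LO and PRESET_HI are hypothetical placeholders standing in for the stored preset color, not values from the patent.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds standing in for the stored preset color of the docking station.
PRESET_LO = np.array([20, 80, 80])
PRESET_HI = np.array([35, 255, 255])

def extract_preset_color_region(bgr_image):
    """Segment the preset color and locate its sub-region by histogram projection."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Foreground white (255) where the color is in range, background black (0).
    mask = cv2.inRange(hsv, PRESET_LO, PRESET_HI)
    # Column-wise and row-wise foreground pixel counts (vertical / horizontal projections).
    col_counts = mask.sum(axis=0) // 255
    row_counts = mask.sum(axis=1) // 255
    cols = np.flatnonzero(col_counts)
    rows = np.flatnonzero(row_counts)
    if cols.size == 0 or rows.size == 0:
        return None  # preset color absent: no docking station candidate
    # Coordinate values of the desired color region, then crop from the original image.
    return bgr_image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```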
• After step S524, the process proceeds to step S526: the contour acquiring unit 3153 acquires the contour of the sub-region having the preset color.
• The contour of the sub-region includes the boundary contour of the sub-region and the internal contour of the sub-region, wherein the boundary contour corresponds to the peripheral structure of the docking station 4, and the internal contour corresponds to the structure of the characteristic portions of the outer surface of the docking station 4.
• Typically, the contour acquiring unit 3153 can acquire the contour of the sub-region by performing grayscale processing and gradient difference processing on the image information. The contour acquiring unit 3153 further includes a grayscale processing circuit 3153a and a gradient difference processing circuit 3153b; correspondingly, step S526 further includes step S528 and step S530.
• In step S528, the grayscale processing circuit 3153a performs grayscale processing on the sub-region according to the preset color to obtain a grayscale image, and transmits the processing result to the gradient difference processing circuit 3153b.
• In step S530, the gradient difference processing circuit 3153b performs gradient difference processing on the grayscale image to obtain the contour of the sub-region. Specifically, this comprises two gradient difference passes and one refinement pass: the circuit first performs gradient difference processing on the grayscale image to obtain the texture image of the sub-region, then performs gradient difference amplification processing on the texture image to generate a contour band, and finally refines the contour band to obtain the contour.
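A rough sketch of this two-pass idea follows; the dilation used here for the "amplification" pass and the threshold value are assumptions for illustration, not the circuit's actual parameters.

```python
import cv2
import numpy as np

def contour_by_gradient_difference(sub_region_bgr, edge_thresh=30):
    """Grayscale -> gradient difference -> band amplification -> refined contour."""
    gray = cv2.cvtColor(sub_region_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    # First gradient-difference pass: texture image from adjacent-pixel differences.
    dx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    dy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    texture = ((dx + dy) > edge_thresh).astype(np.uint8) * 255
    # Amplification pass: thicken the responses into a contour band.
    band = cv2.dilate(texture, np.ones((3, 3), np.uint8), iterations=2)
    # Refinement pass: keep the outline of the band as the sub-region contour.
    contours, _ = cv2.findContours(band, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```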
• In step S532, the contour determining unit 3155 determines whether the contour of the sub-region matches the preset contour.
• The contour determining unit 3155 can determine the match either by matching all the details of the contour of the sub-region against all the details of the preset contour, or by extracting a feature quantity of the contour of the sub-region and determining whether that feature quantity matches a preset feature quantity, the preset feature quantity being the feature quantity corresponding to the preset contour. In this embodiment, the match is determined by feature quantity matching.
• The contour determining unit 3155 includes a feature quantity acquisition circuit 3155a and a feature quantity matching circuit 3155b. Accordingly, step S532 further includes step S534 and step S536.
• In step S534, the feature quantity acquisition circuit 3155a acquires a feature quantity characterizing the contour of the sub-region.
• The feature quantity may be a parameter of the internal contour of the sub-region, a parameter of the boundary contour of the sub-region, or the ratio of a parameter of the boundary contour to a parameter of the internal contour. It can also be the ratio between two parameters of the boundary contour or between two parameters of the internal contour.
• A parameter of the boundary contour or the internal contour may be at least one of the length, height, shape, and area of that contour.
• In step S536, the feature quantity matching circuit 3155b determines whether the feature quantity matches the preset feature quantity. When the determination result is yes, that is, the feature quantity matches the preset feature quantity and hence the contour of the sub-region matches the preset contour, the process proceeds to step S538; when the determination result is no, that is, the feature quantity does not match the preset feature quantity and hence the contour of the sub-region does not match the preset contour, the process proceeds to step S540. It is thereby possible to accurately judge whether there is a docking station 4 around the automatic walking device 1.
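A tolerance-based comparison of a few contour ratios is one plausible reading of this matching step; the particular feature quantities and the 10 % tolerance below are assumptions for illustration.

```python
import cv2

def feature_quantities(boundary_contour, internal_contour):
    """Example feature quantities: bounding-box aspect ratio of the boundary
    contour, and the ratio of boundary area to internal-contour area."""
    _, _, w, h = cv2.boundingRect(boundary_contour)
    outer_area = cv2.contourArea(boundary_contour)
    inner_area = cv2.contourArea(internal_contour)
    return (w / float(h), outer_area / max(inner_area, 1e-6))

def matches_preset(features, preset_features, tolerance=0.10):
    """Match only if every feature quantity lies within the tolerance of its
    preset value (the feature quantity of the preset contour)."""
    return all(abs(f - p) <= tolerance * p for f, p in zip(features, preset_features))
```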
• In step S538, the first determining component 3150 determines that a docking station 4 exists around the current position of the automatic walking device 1.
• In step S540, the first determining component 3150 determines that no docking station 4 exists around the current position of the automatic walking device 1.
• Those skilled in the art will understand that determining whether the contour of the sub-region matches the preset contour may be done by determining whether the boundary contour of the sub-region matches the preset contour, the preset contour then being the peripheral contour of the docking station 4; or by determining whether the internal contour of the sub-region matches the preset contour, the preset contour then being the contour of a characteristic portion of the docking station 4, such as the conductive terminal 41 or the base 43; or by simultaneously determining whether the boundary contour and the internal contour match the preset contour, the preset contour then including both the peripheral contour of the docking station 4 and the contours of its characteristic portions.
• The preset contours are set in essentially similar ways in the different matching schemes; the method of setting the preset contour as the peripheral contour of the docking station 4 is described below with reference to FIGS. 23 to 25.
• FIG. 23 is a perspective view of the docking station 4, which includes a base 43, a support arm 45 and a conductive terminal 41.
• The base 43 is used to mount and fix the docking station 4, and the plane in which it lies is the mounting plane.
• The support arm 45 is disposed on the base 43, perpendicular to the base 43, and serves to mount the conductive terminal 41.
• The conductive terminal 41 is used to electrically connect the docking station 4 and the automatic walking device 1 when the automatic walking device 1 has successfully docked with the docking station 4.
• FIGS. 24 and 25 respectively show a side view and a front view of the docking station 4, wherein the side view is the projection of the docking station 4 along the width direction of the base 43 onto a two-dimensional plane perpendicular to the mounting plane, and the front view is the projection of the docking station 4 along the direction directly facing the automatic walking device 1 onto a two-dimensional plane perpendicular to the mounting plane.
• The projections of the docking station 4 in different directions onto a two-dimensional plane perpendicular to the mounting plane are different, and the automatic walking device 1 may approach the docking station 4 from different sides, so the peripheral contour of the docking station 4 recognized by the main control module 31 differs with the viewing angle. The preset contour should therefore be set according to the projections of the docking station 4, in directions parallel to the mounting plane within a preset angle range, onto a plane perpendicular to the mounting plane.
• Since the docking station 4 is a longitudinally and laterally symmetrical structure, it is only necessary to set the projections over a 90-degree range of directions parallel to the mounting plane. It will be understood by those skilled in the art that, to obtain the projections of the docking station 4 onto a plane perpendicular to the mounting plane within the preset angle range, the image collecting device 15 can capture images of the docking station 4 at different angles, from which the designer obtains the required projections.
• Regarding step S508, the present invention proposes to determine whether the automatic walking device 1 is directly facing the docking station 4 according to whether the positional relationship, in the environment image information, of the characteristic portion of the docking station 4 relative to the central axis of the environment image information satisfies a preset condition. Specifically, as shown in FIG. 26, the second determining component 3170 includes a feature recognition unit 3171 and a feature determination unit 3173; the feature recognition unit 3171 recognizes the positional relationship of the characteristic portion of the docking station 4 relative to the central axis of the environment image information.
• FIG. 27 shows a first preferred embodiment of determining whether the automatic walking device 1 and the docking station 4 are directly facing each other.
• In this embodiment, the main control module 31 determines whether the automatic walking device 1 and the docking station 4 are directly facing each other according to whether the position of the conductive terminal 41 of the docking station 4 in the environment image information relative to the central axis of the environment image information satisfies a preset condition.
• The conductive terminal 41 includes a first terminal 411 and a second terminal 412. In the environment image information, the distance between the first terminal 411 and the central axis of the environment image information is a first distance, and the distance between the second terminal 412 and the central axis is a second distance.
• The preset condition is that the first terminal 411 and the second terminal 412 are respectively located on the two sides of the central axis of the environment image information, and that the ratio of the first distance to the second distance is a preset ratio.
• In step S580, the feature recognition unit 3171 identifies the central axis of the environment image information; typically, the central axis is determined by identifying the abscissa and the ordinate of each information point in the environment image information.
• After step S580, the process proceeds to step S582: the feature recognition unit 3171 identifies the positions of the first terminal 411 and the second terminal 412 of the docking station 4 in the environment image information.
• Typically, the regions that may be the first terminal 411 and the second terminal 412 are first identified preliminarily by recognizing their color; the regions of the first terminal 411 and the second terminal 412 are then determined accurately by recognizing the contours of the candidate regions; finally, the positions of the first terminal 411 and the second terminal 412 are identified by the abscissas and ordinates of their regions.
• The specific manner of identifying the first terminal 411 and the second terminal 412 is the same as that of identifying the docking station 4 described with reference to FIGS. 19 to 22, and the details are not repeated here.
• In step S584, the feature recognition unit 3171 calculates the first distance from the first terminal 411 to the central axis of the environment image information, and the second distance from the second terminal 412 to the central axis of the environment image information.
• Typically, the first distance and the second distance are calculated from the differences between the horizontal and vertical coordinates of the first terminal 411 and the second terminal 412, respectively, and those of the central axis of the environment image information.
• After step S584, the process proceeds to step S586: the feature determining unit 3173 calculates the ratio of the first distance to the second distance.
• After step S586, the process proceeds to step S590: the feature determining unit 3173 compares the calculated ratio with the preset ratio.
• The preset ratio is calculated from the distances between the first terminal 411 and the second terminal 412 and the central axis of the environment image information when the automatic walking device 1 is directly facing the docking station 4.
• After step S590, the process proceeds to step S592: the feature determining unit 3173 determines whether the calculated ratio is the same as the preset ratio. If the result of the determination is yes, the process proceeds to step S594; if it is no, the process proceeds to step S596.
• The determination in step S592 may be made from a single judgment, or confirmed by a plurality of judgments, before proceeding to step S594 or step S596.
• In step S594, the second determining component 3170 determines that the automatic walking device 1 is directly facing the docking station 4.
• In step S596, the second determining component 3170 determines that the automatic walking device 1 and the docking station 4 are not directly facing each other.
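The facing test of steps S580 to S596 reduces to a signed-distance ratio check. The sketch below assumes the terminal positions are already available as pixel x-coordinates; the default preset ratio of 1.0 (symmetrically placed terminals) and the tolerance are illustrative assumptions.

```python
def facing_by_terminal_ratio(x_first, x_second, image_width,
                             preset_ratio=1.0, tolerance=0.05):
    """First embodiment: the two terminals must straddle the central axis and
    their distances to it must stand in the preset ratio."""
    axis = image_width / 2.0
    d1, d2 = x_first - axis, x_second - axis
    if d1 * d2 >= 0:
        return False  # both terminals on the same side of the axis: not facing
    ratio = abs(d1) / abs(d2)
    return abs(ratio - preset_ratio) <= tolerance
```

The second and third embodiments below differ only in the measured feature: a single integrated terminal whose distance to the central axis should be zero, or the two sides of the support arm 45 in place of the two terminals.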
• FIG. 28 shows a second preferred embodiment of determining whether the automatic walking device 1 and the docking station 4 are directly facing each other.
• In this embodiment, the second determining component 3170 determines whether the automatic walking device 1 and the docking station 4 are directly facing each other according to whether the position of the conductive terminal 41 of the docking station 4 in the environment image information relative to the central axis of the environment image information satisfies a preset condition.
• The difference between this embodiment and the first preferred embodiment shown in FIG. 27 is that here the conductive terminal 41 still includes a first terminal and a second terminal, but the two terminals are integrated on one component.
• Accordingly, the preset condition is that the conductive terminal 41 is located on the central axis of the environment image information.
• In step S600, the feature recognition unit 3171 identifies the central axis of the environment image information; typically, the central axis is determined by identifying the abscissas and ordinates of the information points in the environment image information.
• After step S600, the process proceeds to step S602: the feature recognition unit 3171 identifies the position of the conductive terminal 41 of the docking station 4.
• The specific identification manner is the same as in the embodiment shown in FIG. 27 and is not repeated here.
• After step S602, the process proceeds to step S604: the feature determining unit 3173 calculates the first distance from the conductive terminal 41 to the central axis of the environment image information.
• Typically, the first distance is calculated from the difference between the abscissa of the conductive terminal 41 and the abscissa of the central axis of the environment image information.
• In step S612, the feature determining unit 3173 determines whether the first distance is zero, that is, whether the conductive terminal 41 is located on the central axis. If the result of the determination is yes, the process proceeds to step S614; if it is no, the process proceeds to step S616.
• The determination in step S612 may be made from a single judgment, or confirmed by a plurality of judgments, before proceeding to step S614 or step S616.
• In step S614, the feature determining unit 3173 determines that the automatic walking device 1 is directly facing the docking station 4.
• In step S616, the feature determining unit 3173 determines that the automatic walking device 1 and the docking station 4 are not directly facing each other.
• FIG. 29 shows a third preferred embodiment of determining whether the automatic walking device 1 and the docking station 4 are directly facing each other.
• In this embodiment, the second determining component 3170 determines whether the automatic walking device 1 and the docking station 4 are directly facing each other according to whether the position of the support arm 45 of the docking station 4 in the environment image information relative to the central axis of the environment image information satisfies a preset condition.
• The support arm 45 has a first side 451 and a second side 452 along the direction in which the automatic walking device 1 and the docking station 4 face each other. In the environment image information, the distance between the first side 451 and the central axis of the environment image information is a first distance, and the distance between the second side 452 and the central axis is a second distance.
• The preset condition is that the ratio of the first distance to the second distance is a preset ratio.
• In step S620, the feature recognition unit 3171 identifies the central axis of the environment image information; typically, the central axis is determined by identifying the abscissas and ordinates of the information points in the environment image information.
• After step S620, the process proceeds to step S622: the feature recognition unit 3171 identifies the positions of the first side 451 and the second side 452 of the support arm 45 of the docking station 4.
• The specific identification manner is the same as that shown in FIG. 27 and is not repeated here.
• After step S622, the process proceeds to step S624: the feature determining unit 3173 calculates the first distance from the first side 451 to the central axis of the environment image information, and the second distance from the second side 452 to the central axis of the environment image information.
• Typically, the first distance and the second distance are calculated from the differences between the abscissas of the first side 451 and the second side 452, respectively, and the abscissa of the central axis of the environment image information.
• After step S624, the process proceeds to step S626: the feature determining unit 3173 calculates the ratio of the first distance to the second distance.
• After step S626, the process proceeds to step S630: the feature determining unit 3173 compares the calculated ratio with the preset ratio.
• The preset ratio is calculated from the distances between the first side 451 and the second side 452 and the central axis of the environment image information when the automatic walking device 1 is directly facing the docking station 4.
• After step S630, the process proceeds to step S632: the feature determining unit 3173 determines whether the calculated ratio is the same as the preset ratio. If the result of the determination is yes, the process proceeds to step S634; if it is no, the process proceeds to step S636.
• The determination in step S632 may be made from a single judgment, or confirmed by a plurality of judgments, before proceeding to step S634 or step S636.
• In step S634, the second determining component 3170 determines that the automatic walking device 1 is directly facing the docking station 4.
• In step S636, the second determining component 3170 determines that the automatic walking device 1 and the docking station 4 are not directly facing each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Harvester Elements (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An automatic walking device and a working area determination method thereof. The method comprises the steps of: capturing an image of a target area; dividing the image into a number of sub-image blocks; extracting the color of each pixel of at least one sub-image block; calculating the proportion of a predetermined color in that sub-image block and comparing it with a first preset value; extracting a texture feature value of the sub-image block and comparing it with a second preset value; and, when the proportion of the predetermined color in a sub-image block of the image reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, determining that the sub-area corresponding to that sub-image block is a working area. The method makes the working system simple and user-friendly to set up, and makes recognition of the working area flexible and convenient.

Description

Automatic Walking Device and Working Area Determination Method Thereof — Technical Field

The present invention relates to an automatic walking device and a working area determination method thereof.

Background Art

With continual progress in computer technology and artificial intelligence, automatic walking devices similar to intelligent robots have gradually begun to enter people's lives. Companies such as Samsung and Electrolux have developed fully automatic vacuum cleaners and brought them to market. Such a cleaner is usually compact and integrates environmental sensors, a self-drive system, a vacuum system, a battery and a charging system; it can cruise indoors without manual control, automatically return to a docking station when its energy is low, dock and recharge, and then resume cruising and vacuuming. Likewise, companies such as Husqvarna have developed similar intelligent lawn mowers, which mow and recharge automatically in the user's lawn without user intervention. Because such an automatic mowing system needs no further management effort once set up, freeing users from boring, time-consuming and laborious housework such as cleaning and lawn maintenance, it has been greatly welcomed.

The working area of an existing automatic lawn mower is generally defined by laying a physical boundary line, such as a wire or a fence, which the mower detects to determine the working area. Laying the boundary wiring is cumbersome, time-consuming and laborious; non-grass areas may remain inside the boundary line, and areas that need mowing may lie outside it. The physical boundary line approach is inflexible and inconvenient.

It is therefore necessary to improve the existing automatic walking device and its working area determination method, so that the working system is simple and user-friendly to set up and recognition of the working area becomes more flexible and convenient.

Summary of the Invention

In view of the deficiencies of the prior art, the present invention provides an automatic walking device whose working system is simple and user-friendly to set up, whose working area recognition is flexible and convenient, and which is low in cost and easy to install initially.

The technical solution of the present invention is realized as follows: an automatic walking device comprises a housing, a walking module, an image collecting device mounted on the housing, and a main control module connected to the image collecting device and the walking module to control operation of the automatic walking device. The image collecting device captures a target area to form an image; the main control module divides the image into a number of sub-image blocks, each corresponding to one sub-area of the target area; the main control module extracts the color of each pixel of at least one sub-image block; it calculates the proportion of a predetermined color in that sub-image block and compares it with a first preset value; it extracts a texture feature value of the sub-image block and compares it with a second preset value; when the proportion of the predetermined color in a sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module determines that the sub-area corresponding to the sub-image block is a working area; if the proportion of the predetermined color in the sub-image block is smaller than the first preset value or the texture feature value is smaller than the second preset value, it determines that the corresponding sub-area is a non-working area.
Preferably, the main control module comprises a sub-area division unit, a color extraction unit, a proportion calculation unit, a proportion comparison unit, a texture extraction unit, a texture comparison unit, a working area recognition unit and a storage unit. The storage unit stores the first preset value and the second preset value; the sub-area division unit divides the image into sub-image blocks corresponding to a number of sub-areas; the color extraction unit extracts the color of each pixel of at least one sub-image block; the proportion calculation unit divides the number of pixels of the predetermined color by the total number of pixels to calculate the proportion of the predetermined color in the sub-image block; the proportion comparison unit compares that proportion with the first preset value; the texture extraction unit extracts the texture feature value of the sub-image block; the texture comparison unit compares the texture feature value with the second preset value; and the working area recognition unit determines from the comparison results whether the sub-area corresponding to the sub-image block is a working area.

Preferably, the storage unit stores value ranges of the color components of the predetermined color; if the color components of a pixel each fall within the corresponding value range of the predetermined color, the color extraction unit determines that the color of that pixel is the predetermined color.

Preferably, the color components are the three primary color components.

Preferably, the texture feature value is a parameter dispersion and the second preset value is a preset dispersion. The storage unit stores the preset dispersion and a preset difference value; the texture extraction unit calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, and calculates the parameter dispersion of all gradient differences in the sub-image block that are greater than the preset difference value; the texture comparison unit compares the parameter dispersion with the preset dispersion.

Preferably, the main control module further comprises a steering control unit. The sub-area division unit divides the image into three sub-image blocks, a middle part, a left part and a right part, corresponding respectively to a middle area, a left area and a right area of the target area; the middle area lies directly in front of the automatic walking device, and the left and right areas lie on the left and right of the middle area along the traveling direction of the automatic walking device. When the working area recognition unit determines that the middle area is a non-working area, the steering control unit changes the walking direction of the automatic walking device until the middle area is determined to be a working area.

Preferably, the target area lies directly in front of the automatic walking device and its width is greater than the width of the automatic walking device.

Preferably, the viewing angle of the image collecting device ranges from 90 to 120 degrees.

Preferably, the automatic walking device is an automatic lawn mower and the predetermined color is green. Preferably, a shielding plate is provided above the image collecting device and extends outward from the top of the image collecting device.
Preferably, the image collecting device collects an image of the area in front of the housing and transmits the image to the main control module. The front area includes at least a predetermined area of the ground in front of the housing, whose width is greater than the width of the housing. The main control module analyzes the predetermined image block of the image corresponding to the predetermined area to monitor whether a boundary appears in the predetermined area; when one sub-area is a non-working area and its adjacent sub-area is a working area, the main control module determines that the boundary lies in that sub-area, and upon detecting the boundary it brings the automatic walking device to the boundary position and walks along the boundary.

Preferably, when walking along the boundary, the main control module controls the walking module to keep the housing within the working area with the boundary on a specific side of the housing.

Preferably, the image collecting device collects an image and transmits it to the main control module, which divides the predetermined image block of the image into three sub-image blocks, a middle part, a right part and a left part, corresponding respectively to three sub-areas: a middle area directly in front of the automatic walking device and of the same width as the device, a right area to the right of the middle area, and a left area to the left of the middle area. The main control module controls the walking module to adjust the position of the automatic walking device so that the middle area corresponding to the middle part remains recognized as a working area while the left or right area corresponding to the left or right part is recognized as a non-working area containing the boundary, thereby keeping the housing within the working area with the boundary on a specific side of the housing.

Preferably, the main control module further comprises a boundary recognition unit which determines whether the boundary currently being followed leads to the docking station; if not, the main control module controls the walking module so that the automatic walking device leaves the boundary currently being followed.

The present invention further provides a working area determination method for an automatic walking device comprising a housing, a walking module, an image collecting device mounted on the housing, and a main control module connected to the image collecting device and the walking module to control operation of the automatic walking device. The working area determination method comprises the following steps: the image collecting device captures a target area to form an image; the main control module divides the image into a number of sub-image blocks, each corresponding to one sub-area of the target area; the main control module extracts the color of each pixel of at least one sub-image block; the main control module calculates the proportion of a predetermined color in that sub-image block and compares it with a first preset value; the main control module extracts a texture feature value of the sub-image block and compares it with a second preset value; if the proportion of the predetermined color in a sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module determines that the sub-area corresponding to the sub-image block is a working area; if the proportion of the predetermined color is smaller than the first preset value or the texture feature value is smaller than the second preset value, the main control module determines that the corresponding sub-area is a non-working area.

Preferably, value ranges of the color components of the predetermined color are stored in the main control module, which extracts the color components of each pixel of a sub-image block; if the color components of a pixel each fall within the corresponding value range of the predetermined color, the main control module determines that the pixel's color is the predetermined color.

Preferably, the color components are the three primary color components.

Preferably, the texture feature value is a parameter dispersion and the second preset value is a preset dispersion; the main control module stores the preset dispersion and a preset difference value, calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, calculates the parameter dispersion of all gradient differences greater than the preset difference value, and determines whether the parameter dispersion reaches the preset dispersion.
Preferably, the image captured by the image collecting device comprises three sub-image blocks, a middle part, a left part and a right part, corresponding respectively to a middle area, a left area and a right area of the target area; the middle area lies directly in front of the automatic walking device, and the left and right areas lie on its left and right along the traveling direction. When the middle area is determined to be a non-working area, the steering control unit changes the walking direction of the automatic walking device until the middle area is determined to be a working area.

Preferably, the working area determination method further comprises a step of controlling the automatic walking device to return to the docking station, the walking module comprising a wheel set mounted on the housing and a walking motor driving the wheel set. The step of controlling the automatic walking device to return to the docking station comprises the following sub-steps: a. monitoring a predetermined image block of the image collected by the image collecting device, the block corresponding to a predetermined area of the ground in front of the housing, to determine whether a boundary appears in that area; b. if a boundary appears in the specific area, controlling the automatic walking device to be at the boundary position; c. walking along the boundary.

Preferably, the width of the predetermined area is greater than the width of the housing, and step a further comprises: dividing the predetermined image block into sub-image blocks corresponding to a number of sub-areas of the predetermined area; analyzing each sub-image block to identify the corresponding sub-area as either a working area or a non-working area; and, when one sub-area is a non-working area and its adjacent sub-area is a working area, determining that the boundary lies in that sub-area.

Preferably, in step c, while walking along the boundary, the housing is kept within the working area with the boundary on a specific side of the housing.

In the automatic walking device and working area determination method of the present invention, the image collecting device captures an image of the target area, and the main control module combines color recognition with texture analysis to determine whether at least one sub-area of the target area is a working area, which makes recognition of the working area more flexible and convenient.
The present invention further provides an automatic walking device capable of recognizing a boundary and walking along it, comprising: a housing; a walking module comprising a wheel set mounted on the housing and a walking motor driving the wheel set; an image collecting device mounted on the housing; a working module performing predetermined work; and a main control module connected to the image collecting device, the working module and the walking module to control operation of the automatic walking device. It is characterized in that the image collecting device collects an image of the area in front of the housing and transmits the image to the main control module; the front area includes at least a predetermined area of the ground in front of the housing; the main control module analyzes the predetermined image block of the image corresponding to the predetermined area to monitor whether a boundary appears in the predetermined area, and upon detecting a boundary it brings the automatic walking device to the boundary position and walks along the boundary.

Preferably, the width of the predetermined area is greater than the width of the housing; the main control module divides the predetermined image block into sub-image blocks corresponding to a number of sub-areas of the predetermined area and analyzes each sub-image block to identify the corresponding sub-area as either a working area or a non-working area; when one sub-area is a non-working area and its adjacent sub-area is a working area, the main control module determines that the boundary lies in that sub-area.

Preferably, when walking along the boundary, the main control module controls the walking module to keep the housing within the working area with the boundary on a specific side of the housing.

Preferably, the image collecting device collects an image and transmits it to the main control module, which divides the predetermined image block of the image into three sub-image blocks, a middle part, a right part and a left part, corresponding respectively to a middle area directly in front of the automatic walking device and of the same width as the device, a right area to the right of the middle area, and a left area to the left of the middle area. The main control module controls the walking module to adjust the position of the automatic walking device so that the middle area remains recognized as a working area while the left or right area is recognized as a non-working area containing the boundary, thereby keeping the housing within the working area with the boundary on a specific side of the housing.

Preferably, the main control module further comprises a boundary recognition unit which determines whether the boundary currently being followed leads to the docking station; if not, the main control module controls the walking module so that the automatic walking device leaves the boundary currently being followed.

Preferably, the boundary recognition unit determines the walking direction of the automatic walking device within a preset time or a preset distance and compares the result with a preset standard result; if they agree, it determines that the boundary currently being followed connects to the docking station; if they disagree, it determines that the boundary currently being followed does not connect to the docking station.

Preferably, the boundary recognition unit calculates the accumulated deflection of the automatic walking device within the preset time or preset distance and compares the accumulated deflection with a preset value to determine the walking direction of the automatic walking device.

Preferably, the accumulated deflection is the accumulated wheel difference between the distances traveled by the left and right wheels of the automatic walking device, or the accumulated deflection angle of the automatic walking device.

Preferably, when the specific side is the left side, the preset standard result is clockwise; when the specific side is the right side, the preset standard result is counterclockwise.

Preferably, the main control module further comprises a docking station recognition unit which monitors whether the docking station appears in the images collected by the image collecting device; if the docking station is detected, the main control module controls the walking module so that the automatic walking device travels toward the docking station.
Another object of the present invention is to provide a method, low in cost and easy to install initially, for returning an automatic walking device to a docking station.

The automatic walking device comprises: a housing; a walking module comprising a wheel set mounted on the housing and a walking motor driving the wheel set; an image collecting device mounted on the housing; a working module mounted on the housing to perform predetermined work; and a main control module connected to the aforementioned image collecting device, working module and walking module to control operation of the automatic walking device. The method for returning the automatic walking device to the docking station comprises the following steps: a. monitoring a predetermined image block of the image collected by the image collecting device, the block corresponding to a predetermined area of the ground in front of the housing, to determine whether a boundary appears in that area; b. if a boundary appears in the specific area, controlling the automatic walking device to be at the boundary position; c. walking along the boundary.

Preferably, the width of the predetermined area is greater than the width of the housing, and step a further comprises: dividing the predetermined image block into sub-image blocks corresponding to a number of sub-areas of the predetermined area; analyzing each sub-image block to identify the corresponding sub-area as either a working area or a non-working area; and, when one sub-area is a non-working area and its adjacent sub-area is a working area, determining that the boundary lies in that sub-area.

Preferably, in step c, while walking along the boundary, the housing is kept within the working area with the boundary on a specific side of the housing.

Preferably, the image collecting device collects an image and transmits it to the main control module, which divides the predetermined image block of the image into three sub-image blocks, a middle part, a right part and a left part, corresponding respectively to a middle area directly in front of the automatic walking device and of the same width as the device, a right area to the right of the middle area, and a left area to the left of the middle area. The main control module controls the walking module to adjust the position of the automatic walking device so that the middle area remains recognized as a working area while the left or right area is recognized as a non-working area containing the boundary, thereby keeping the housing within the working area with the boundary on a specific side of the housing.

Preferably, the method for returning the automatic walking device to the docking station further comprises the following steps: d. determining whether the boundary currently being followed leads to the docking station; e. if the result of step d is no, leaving the boundary currently being followed and executing step a.

Preferably, step d further comprises the following steps: d1. determining the walking direction of the automatic walking device within a preset time or a preset distance; d2. comparing the result of step d1 with a preset standard result; if they agree, determining that the boundary currently being followed connects to the docking station; if they disagree, determining that it does not connect to the docking station.

Preferably, step d1 is specifically: calculating the accumulated deflection of the automatic walking device within the preset time or preset distance, and comparing the accumulated deflection with a preset value to determine the walking direction of the automatic walking device.

Preferably, the accumulated deflection is the accumulated wheel difference between the distances traveled by the left and right wheels of the automatic walking device, or the accumulated deflection angle of the automatic walking device.

Preferably, when the specific side is the left side, the preset standard result is clockwise; when the specific side is the right side, the preset standard result is counterclockwise.

Preferably, the method for returning the automatic walking device to the docking station further comprises the following steps: f. monitoring whether the docking station appears in the images collected by the image collecting device; g. if the docking station is detected, traveling toward the docking station.

Compared with the prior art, the beneficial effect of the present invention is that monitoring the boundary with an image collecting device and returning to the docking station along the boundary avoids the need to cut slots and bury a physical boundary line, making arrangement of the working system simple and labor-saving.
The present invention further provides an obstacle detection method for an automatic walking device that can recognize an obstacle before collision and with relatively high recognition accuracy.

To achieve the above object, the technical solution provided by the present invention is: an automatic walking device that walks and works automatically within a working area, comprising: a housing; a working module; a walking module supporting and driving the automatic walking device; and a main control module controlling the working module and the walking module to work in a preset manner. The automatic walking device further comprises an image collecting device and an ultrasonic detection device. The image collecting device acquires image information of a predetermined area in front of the automatic walking device; the main control module determines from the image information whether a non-working area exists in the predetermined area and, when one exists, compares a size parameter of the non-working area with a preset value; when the size parameter of the non-working area is smaller than the preset value, the ultrasonic detection device detects whether an obstacle exists in the non-working area.

Preferably, the main control module calculates the size parameter of the non-working area from the image information; the size parameter may be at least one of the length, width or area of the non-working area.

Preferably, the preset values are respectively smaller than the length, width or area of the projection of the automatic walking device onto the working area.

Preferably, the main control module is preset with a time threshold; when the time from the ultrasonic detection device emitting an ultrasonic wave to receiving its echo is less than the time threshold, the main control module determines that an obstacle exists in the non-working area.

Preferably, when the main control module determines that an obstacle exists in the non-working area, it controls the walking module to move the automatic walking device away from the obstacle.

Another technical solution provided by the present invention is an obstacle detection method for an automatic walking device that walks and works automatically within a working area, the obstacle detection method comprising the following steps: a. acquiring, by an image collecting device, image information of a predetermined area in front of the automatic walking device; b. determining from the image information whether a non-working area exists in the predetermined area; c. when a non-working area exists, comparing a size parameter of the non-working area with a preset value; d. when the size parameter of the non-working area is smaller than the preset value, detecting by an ultrasonic detection device whether an obstacle exists in the non-working area.

Preferably, in step b, whether a non-working area exists is determined by recognizing the color and texture in the image information.

Preferably, in step c, the size parameter of the non-working area is calculated from the image information and may be at least one of the length, width or area of the non-working area. Preferably, the preset values are respectively smaller than the length, width and area of the projection of the automatic walking device onto the working area.

Preferably, the obstacle detection method further comprises comparing the time from the ultrasonic detection device emitting an ultrasonic wave to receiving its echo with a preset time threshold; when that time is less than the preset time threshold, an obstacle exists in the non-working area.

Preferably, the obstacle detection method of the automatic walking device further comprises: when an obstacle exists in the predetermined area, moving the automatic walking device away from the obstacle.

Compared with the prior art, the automatic walking device and obstacle detection method provided by the present invention enable the device to recognize obstacles within the working area by means of the image collecting device and the ultrasonic detection device without colliding with them directly, so that the automatic walking device is less liable to damage from collisions with obstacles and recognizes obstacles with relatively high accuracy.
The present invention further provides a docking method for docking an automatic walking device with a docking station. The automatic walking device is provided with an image collecting device; the docking station is provided with a base, by whose mounting plane the docking station is mounted at a fixed position. The docking method comprises the following steps: a. collecting, by the image collecting device, environment image information of the current position of the automatic walking device; b. determining from the environment image information whether a docking station exists around the current position of the automatic walking device; c. when a docking station exists around the current position, determining whether the automatic walking device is directly facing the docking station; d. when the automatic walking device is directly facing the docking station, controlling the automatic walking device to approach the docking station along the direction in which it directly faces the docking station.

Preferably, step b comprises: b1) recognizing whether the environment image information contains a preset color; b2) when the environment image information contains the preset color, extracting the sub-region having the preset color; b3) acquiring the contour of the sub-region; b4) determining whether the contour of the sub-region matches a preset contour; b5) when the contour of the sub-region matches the preset contour, determining that a docking station exists around the current position of the automatic walking device.

Preferably, step b3) comprises performing grayscale processing on the sub-region according to the preset color to obtain a grayscale image, and performing gradient difference processing on the grayscale image to obtain the contour of the sub-region.

Preferably, step b4) comprises: acquiring a feature quantity characterizing the contour of the sub-region; determining whether the feature quantity matches a preset feature quantity; and determining, from whether the feature quantity matches the preset feature quantity, whether the contour of the sub-region matches the preset contour.

Preferably, the contour of the sub-region comprises a boundary contour of the sub-region and an internal contour of the sub-region, and the feature quantity characterizes at least one of the boundary contour and the internal contour of the sub-region. Preferably, the feature quantity is at least one of a parameter of the boundary contour of the sub-region, a parameter of the internal contour, or a ratio between a parameter of the boundary contour and a parameter of the internal contour, the parameters comprising at least one of length, height, shape and area.

Preferably, the preset contour is set according to the projection of the docking station, within a preset angle range along directions parallel to the mounting plane, onto a plane perpendicular to the mounting plane.

Preferably, the docking station comprises a characteristic portion provided on the outer surface of the docking station body, and step c comprises: recognizing the positional relationship, in the environment image information, of the characteristic portion of the docking station relative to the central axis of the environment image information; determining whether the positional relationship satisfies a preset condition; and, when the positional relationship satisfies the preset condition, determining that the automatic walking device is directly facing the docking station.

Preferably, the characteristic portion is a conductive terminal of the docking station, which electrically connects the docking station and the automatic walking device when docking succeeds.

Preferably, the conductive terminal comprises a first terminal and a second terminal; in the environment image information the distance between the first terminal and the central axis of the environment image information is a first distance and the distance between the second terminal and the central axis is a second distance; the preset condition is that the first terminal and the second terminal lie respectively on the two sides of the central axis of the environment image information and that the ratio of the first distance to the second distance is a preset ratio.

Preferably, the preset condition is that the conductive terminal lies on the central axis of the environment image information. Preferably, the characteristic portion is a support arm arranged perpendicular to the base, the support arm having a first side and a second side along the direction directly facing the automatic walking device; in the environment image information the distance between the first side and the central axis of the environment image information is a first interval and the distance between the second side and the central axis is a second interval; the preset condition is that the ratio of the first interval to the second interval is a preset ratio.

The present invention further provides an automatic working system comprising a docking station and an automatic walking device that can dock with the docking station. The docking station comprises: a base including a mounting plane, by which the docking station body is mounted at a fixed position; and a characteristic portion provided on the outer surface of the docking station body. The automatic walking device comprises: an image collecting device collecting environment image information of the current position of the automatic walking device; a main control module receiving the environment image information transmitted by the image collecting device and comprising a first determining component, a second determining component, a signal sending unit and a storage unit, the storage unit storing preset parameters, the first determining component determining from the environment image information and the preset parameters whether a docking station exists around the current position of the automatic walking device, the second determining component determining from the environment image information and the preset parameters whether the automatic walking device is directly facing the docking station, and the signal sending unit sending corresponding control signals according to the determination results of the first determining component and the second determining component; and a walking module receiving the control signals and driving the walking of the automatic walking device according to the control signals.
Preferably, the preset parameters include a preset contour, and the first determining component comprises a color recognition unit, a region extraction unit, a contour acquisition unit and a contour determination unit. The color recognition unit recognizes whether the environment image information contains a preset color; the region extraction unit extracts the sub-region having the preset color; the contour acquisition unit acquires the contour of the sub-region; the contour determination unit determines whether the contour of the sub-region matches the preset contour; when the contour of the sub-region matches the preset contour, it is determined that a docking station exists around the current position of the automatic walking device.

Preferably, the contour acquisition unit comprises a grayscale processing circuit and a gradient difference processing circuit; the grayscale processing circuit performs grayscale processing on the sub-region according to the preset color to obtain a grayscale image, and the gradient difference processing circuit performs gradient difference processing on the grayscale image to obtain the contour of the sub-region.

Preferably, the contour determination unit comprises a feature quantity acquisition circuit and a feature quantity matching circuit; the feature quantity acquisition circuit acquires a feature quantity characterizing the contour of the sub-region, and the feature quantity matching circuit determines whether the feature quantity matches a preset feature quantity; when the feature quantity matches the preset feature quantity, the contour determination unit determines that the contour of the sub-region matches the preset contour.

Preferably, the contour of the sub-region comprises a boundary contour of the sub-region and an internal contour of the sub-region, and the feature quantity characterizes at least one of the boundary contour and the internal contour of the sub-region.

Preferably, the feature quantity is at least one of a parameter of the boundary contour of the sub-region, a parameter of the internal contour, or a ratio between a parameter of the boundary contour and a parameter of the internal contour, the parameters comprising at least one of length, height, shape and area.

Preferably, the preset contour is set according to the projection of the docking station, within a preset angle range along directions parallel to the mounting plane, onto a plane perpendicular to the mounting plane.

Preferably, the preset parameters include a preset condition, and the second determining component comprises a feature recognition unit and a feature determination unit; the feature recognition unit recognizes the positional relationship, in the environment image information, of the characteristic portion of the docking station relative to the central axis of the environment image information; the feature determination unit determines whether the positional relationship satisfies the preset condition; when the positional relationship satisfies the preset condition, the second determining component determines that the automatic walking device is directly facing the docking station. Preferably, the characteristic portion is a conductive terminal of the docking station, which electrically connects the docking station and the automatic walking device when docking succeeds.

Preferably, the conductive terminal comprises a first terminal and a second terminal; in the environment image information the distance between the first terminal and the central axis of the environment image information is a first distance and the distance between the second terminal and the central axis is a second distance; the preset condition is that the ratio of the first distance to the second distance is a preset ratio.

Preferably, the preset condition is that the conductive terminal lies on the central axis of the environment image information. Preferably, the characteristic portion is a support arm arranged perpendicular to the base, the support arm having a first side and a second side along the direction directly facing the automatic walking device; in the environment image information the distance between the first side and the central axis is a first interval and the distance between the second side and the central axis is a second interval; the preset condition is that the ratio of the first interval to the second interval is a preset ratio.

The beneficial effect of the present invention is that the automatic walking device can dock reliably with the docking station without human intervention.

Brief Description of the Drawings
The present invention is further described below with reference to the drawings and embodiments:

Fig. 1 is a diagram of an automatic working system according to an embodiment of the present invention.
Fig. 2 is a block diagram of the automatic walking device in the automatic working system shown in Fig. 1.
Fig. 3 is a perspective view of the automatic walking device shown in Fig. 2.
Fig. 4 is a schematic view of the shooting area of the automatic walking device shown in Fig. 2.
Fig. 5 is a schematic view of the pixel distribution of the image shown in Fig. 3.
Fig. 6 is a schematic flow chart of a first embodiment of the working area determination method of the present invention.
Fig. 7 is a schematic flow chart of a second embodiment of the working area determination method of the present invention.
Fig. 8 is a schematic view of the automatic walking device of this embodiment keeping a straight course.
Fig. 9 is a schematic view of the automatic walking device of this embodiment turning right.
Fig. 10 is a schematic view of the automatic walking device shown in Fig. 1 walking along the boundary.
Fig. 11 is a schematic view of the principle by which the automatic walking device in Fig. 10 walks along the boundary.
Fig. 12 is a schematic view of the automatic walking device shown in Fig. 1 escaping from an island.
Fig. 13 is a schematic flow chart of the method of the present invention for returning the automatic walking device to the docking station.
Fig. 14 is a schematic flow chart of the method in Fig. 13 for recognizing whether the boundary currently being followed leads to the docking station.
Fig. 15 is a working schematic view of the ultrasonic detection device of the automatic walking device of the present invention.
Fig. 16 is a flow chart of the obstacle detection method of the automatic walking device of the present invention.
Fig. 17 is a circuit block diagram of another embodiment of the automatic walking device of the present invention.
Fig. 18 is an overall working flow chart of the docking method for docking the automatic walking device with the docking station according to the present invention.
Fig. 19 is a circuit block diagram of the first determining component shown in Fig. 18.
Fig. 20 is a working flow chart of a preferred embodiment in which the first determining component shown in Fig. 18 determines whether a docking station exists around the current position of the automatic walking device.
Fig. 21 is a circuit block diagram of the contour acquisition unit shown in Fig. 19.
Fig. 22 is a circuit block diagram of the contour determination unit shown in Fig. 19.
Fig. 23 is a perspective view of the docking station shown in Fig. 1.
Fig. 24 is a side view of the docking station shown in Fig. 23.
Fig. 25 is a front view of the docking station shown in Fig. 23.
Fig. 26 is a circuit block diagram of the second determining component described with reference to Fig. 17.
Fig. 27 is a working flow chart of a first preferred embodiment in which the second determining component shown in Fig. 26 determines whether the automatic walking device is directly facing the docking station.
Fig. 28 is a working flow chart of a second preferred embodiment in which the second determining component shown in Fig. 26 determines whether the automatic walking device is directly facing the docking station.
Fig. 29 is a working flow chart of a third preferred embodiment in which the second determining component shown in Fig. 26 determines whether the automatic walking device is directly facing the docking station.
In the figures:

1. automatic walking device; 4. docking station; 5. working area; 6. boundary; 7. non-working area; 71. island; 11. housing; 15. image collecting device; 16. ultrasonic detection device; 17. walking module; 19. working module; 33. energy module; 31. main control module; 13. wheel set; 131. left wheel; 132. right wheel; 133. auxiliary wheel; a. middle area; b. left area; c. right area; d. blind area; 29. shielding plate; 311. sub-area division unit; 312. color extraction unit; 313. proportion calculation unit; 314. proportion comparison unit; 315. texture extraction unit; 316. texture comparison unit; 317. working area recognition unit; 318. storage unit; 319. steering control unit; 321. boundary recognition unit; 323. docking station recognition unit; 73. obstacle; 3150. first determining component; 3151. color recognition unit; 3152. region extraction unit; 3153. contour acquisition unit; 3153a. grayscale processing circuit; 3153b. gradient difference processing circuit; 3155. contour determination unit; 3155a. feature quantity acquisition circuit; 3155b. feature quantity matching circuit; 3170. second determining component; 3171. feature recognition unit; 3173. feature determination unit; 3190. signal sending unit; 41. conductive terminal; 411. first terminal; 412. second terminal; 43. base; 45. support arm; 451. first side; 452. second side.
Detailed Description of the Embodiments

Fig. 1 shows an automatic working system according to an embodiment of the present invention. The automatic working system is arranged on the ground or another surface. In this embodiment, the ground is divided into a working area 5 and a non-working area 7; the part of the non-working area 7 surrounded by the working area 5 forms an island 71, and the line where the working area 5 and the non-working area 7 meet forms the boundary 6. The working area 5 and the non-working area 7 differ visually. The automatic working system comprises an automatic walking device 1 and a docking station 4. The automatic walking device 1 may be an automatic vacuum cleaner, an automatic lawn mower, an automatic trimmer, or the like. In this embodiment, the automatic walking device 1 is an automatic lawn mower, and the docking station 4 is arranged on the outer boundary 6 of the working area.

Referring to Figs. 2 and 3, the automatic walking device 1 has a housing 11 and an image collecting device 15 mounted on the housing 11. The image collecting device 15 captures images of the area in front of the automatic walking device 1 for recognizing the working area 5 and the non-working area 7. The automatic walking device 1 further comprises a main control module 31, a walking module 17, a working module 19 and an energy module 33.

The main control module 31 is connected to the walking module 17, the working module 19, the energy module 33 and the image collecting device 15.

The working module 19 performs the specific work. In this embodiment, the working module 19 is a cutting module, comprising a cutting component (not shown) for mowing and a cutting motor (not shown) driving the cutting component.

The energy module 33 supplies energy for the operation of the automatic walking device 1. Its energy source may be gasoline, a battery pack, or the like; in this embodiment, the energy module 33 comprises a rechargeable battery pack arranged in the housing. During work, the battery pack releases electrical energy to keep the automatic walking device 1 working; when not working, the battery can be connected to an external power supply for recharging. In particular, as a more user-friendly design, when the battery charge is detected to be insufficient, the automatic walking device 1 seeks the docking station 4 by itself to recharge.

The walking module 17 comprises a wheel set 13 and a walking motor driving the wheel set 13. The wheel set 13 may be arranged in various ways; usually it comprises drive wheels driven by the walking motor and an auxiliary wheel 133 assisting in supporting the housing 11, and the number of drive wheels may be one, two or more. As shown in Fig. 2, the moving direction of the automatic walking device 1 is taken as the front side, the side opposite the front is the rear side, and the two sides adjoining the front and rear sides are the left and right sides respectively. In this embodiment, the automatic walking device 1 has two drive wheels, a left wheel 131 on the left and a right wheel 132 on the right, arranged symmetrically about the central axis of the automatic walking device 1. The left wheel 131 and the right wheel 132 are preferably located at the rear of the housing 11 and the auxiliary wheel 133 at the front, although this arrangement may be reversed in other embodiments.

In this embodiment, the left wheel 131 and the right wheel 132 are each coupled to a drive motor to achieve differential output for steering control. A drive motor may be connected directly to its drive wheel, but a transmission, such as a planetary gear train common in this technical field, may also be placed between the drive motor and the drive wheel. In other embodiments, two drive wheels and one drive motor may be provided; in that case the drive motor drives the left wheel 131 through a first transmission and the right wheel 132 through a second transmission, that is, the same motor drives the left wheel 131 and the right wheel 132 through different transmissions.
As shown in Figs. 3 and 4, the image collecting device 15 is mounted at an upper position at the front of the housing 11, preferably centered, and collects images of the area in front of the housing 11, which includes at least a target area on the ground ahead. In this embodiment, the framing range of the image collecting device 15 is a fixed region, such as a fixed viewing angle of 90 to 120 degrees. In other optional embodiments, the framing range may also be movable, with a certain angular range within the viewing angle selected as the actual framing range, for example selecting the central 90 degrees of a 120-degree viewing angle.

The target area included in the framing range of the image collecting device 15 is the rectangular DCIJ region in Fig. 4, which lies on the ground directly in front of the automatic walking device 1 and is separated from it by a short distance, forming a blind area d. The central axis of the DCIJ region coincides with the central axis of the housing 11 of the automatic walking device 1, and the width of the DCIJ region is slightly greater than the width of the automatic walking device 1. This ensures that the automatic walking device 1 can collect image information of the ground a short distance directly ahead of it for the main control module 31 to judge its attributes.

The full framing range of the image collecting device 15 may be larger than the DCIJ region, for example also including the area above the ground; in that case, the main control module 31 extracts, from the complete image collected by the image collecting device 15, the predetermined image block corresponding to the DCIJ region for ground attribute analysis. The full framing range of the image collecting device 15 may also be exactly the DCIJ region, in which case the complete image collected by the image collecting device 15 is itself the predetermined image block corresponding to the DCIJ region.

Specifically, in this embodiment, after processing by the main control module 31, the predetermined image block is divided into three sub-image blocks, a middle part, a left part and a right part, corresponding to sub-areas of the target area. The middle part corresponds to a middle area a directly in front of the automatic walking device 1 and of the same width as the automatic walking device 1; the left part corresponds to a left area b in front of the automatic walking device 1 to the left of the middle area a; the right part corresponds to a right area c in front of the automatic walking device 1 to the right of the middle area a. Between the target area and the automatic walking device 1 lies a blind area d not covered by the image collecting device.

Referring again to Fig. 3, a shielding plate 29 is provided above the image collecting device 15 and extends horizontally outward from the top of the image collecting device 15, to prevent overexposure caused by direct sunlight on the image collecting device 15 and also to shield the image collecting device 15 from rain. The automatic walking device 1 further comprises an ultrasonic detection device 16 for detecting whether an obstacle or a charging station exists in front of the automatic walking device 1.

The main control module 31 judges the attributes of each part of the framed area by analyzing the information in the images captured by the image collecting device 15, for example whether a part belongs to the working area or the non-working area, or whether it is an already-worked area or a to-be-worked area. Specifically, in this embodiment, the main control module 31 judges, by analyzing the color information and texture information of each part of the image, whether the position corresponding to each part is grass serving as the working area. As a working area, grass is green and its texture is a naturally irregular pattern; in non-working areas, soil, cement and other ground surfaces are usually not green, and even where the color is green it usually belongs to an artificially made object and thus has a regular texture. Therefore, when the main control module 31 recognizes that a part is green and its texture is irregular, it judges that part to be grass; if the color is not green or the texture is regular, the part is not grass.

After determining the attributes of each part, the main control module 31 also controls the walking direction of the automatic walking device 1 so that the automatic walking device 1 always stays within the working area.

The process by which the automatic walking device 1 of this embodiment determines the working area from image information is described below.
Referring again to Fig. 2, the main control module 31 comprises a sub-area division unit 311, a color extraction unit 312, a proportion calculation unit 313, a proportion comparison unit 314, a texture extraction unit 315, a texture comparison unit 316, a working area recognition unit 317 and a storage unit 318.

After the image collecting device 15 captures an image of the ground in front of the automatic walking device 1, the sub-area division unit 311 divides the image into a number of sub-image blocks corresponding to a number of sub-areas of the target area. In this embodiment, the sub-image blocks comprise a middle part, a left part and a right part, corresponding respectively to the middle area a, the left area b and the right area c of the target area.

Referring to Figs. 4 and 5 together, the color extraction unit 312 extracts the color of each pixel of at least one sub-image block and determines whether each pixel is of the predetermined color. The extracted pixels may be all the pixels of the sub-image block, or a number of regularly arranged pixels in the sub-image block, such as pixels spaced one or more pixels apart.

In this embodiment, the color extraction unit 312 extracts the colors of the pixels in the middle, left and right parts respectively; in particular, the color extraction unit 312 extracts the three primary color (RGB) components of each pixel. The storage unit 318 stores value ranges of the three primary color components of the predetermined color; the color extraction unit 312 compares a pixel's three primary color components with those value ranges, and if the three primary color components of a pixel each fall within the corresponding value range of the predetermined color, the color extraction unit 312 determines that the pixel's color is the predetermined color.

In another preferred embodiment, the storage unit 318 stores a preset hue range of the predetermined color; after extracting a pixel's three primary color components, the color extraction unit 312 further converts the obtained RGB components into HSV (hue, saturation, value) values and determines whether the hue value lies within the preset hue range; if so, the pixel's color is the predetermined color.

In this embodiment, the predetermined color is green.
The proportion calculation unit 313 calculates the proportion of pixels of the predetermined color in a sub-image block (hereinafter simply "the proportion").

Specifically, within a sub-image block, the proportion calculation unit 313 divides the number of pixels of the predetermined color by the total number of pixels in the sub-image block to obtain the proportion of the predetermined color in that sub-image block.
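A minimal sketch of the per-pixel color test and the proportion computation is given below; the RGB ranges and the hue window are illustrative placeholders, not the value ranges actually stored in the storage unit 318.

```python
import colorsys

# Illustrative ranges only; the real ranges are stored in the storage unit 318.
GREEN_RGB_RANGES = {"r": (0, 120), "g": (90, 255), "b": (0, 120)}
GREEN_HUE_RANGE = (60.0, 180.0)  # degrees

def is_predetermined_color_rgb(r, g, b):
    """A pixel is the predetermined color if each RGB component falls in its range."""
    return (GREEN_RGB_RANGES["r"][0] <= r <= GREEN_RGB_RANGES["r"][1]
            and GREEN_RGB_RANGES["g"][0] <= g <= GREEN_RGB_RANGES["g"][1]
            and GREEN_RGB_RANGES["b"][0] <= b <= GREEN_RGB_RANGES["b"][1])

def is_predetermined_color_hue(r, g, b):
    """Alternative embodiment: convert RGB to HSV and test only the hue value."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return GREEN_HUE_RANGE[0] <= h * 360.0 <= GREEN_HUE_RANGE[1]

def color_proportion(pixels):
    """Proportion of predetermined-color pixels in one sub-image block,
    where pixels is an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    hits = sum(1 for (r, g, b) in pixels if is_predetermined_color_rgb(r, g, b))
    return hits / len(pixels)
```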
The storage unit 318 stores the first preset value, and the proportion comparison unit 314 compares the proportion of the predetermined color in the sub-image block with the first preset value to determine which is larger.

The texture extraction unit 315 extracts the texture feature value of the sub-image block.

The dispersion of at least one parameter over all pixels of a sub-image block reflects the degree of difference among the values of that parameter. If the target area were green paint, the dispersion of a parameter in its image would be very small, even zero. Because the texture of grass is irregular, the dispersion of the difference values of a parameter over all pixels of a sub-image block will be greater than or equal to a preset dispersion, reflecting the irregularity of the sub-image block's texture. Therefore, in this embodiment, the texture feature value is a parameter dispersion, such as a color dispersion, a grayscale dispersion or a brightness dispersion.

The texture comparison unit 316 compares the texture feature value of the sub-image block with the second preset value to determine whether the texture feature value reaches the second preset value. In this embodiment, the second preset value is a preset dispersion.

The working area recognition unit 317 determines that the sub-area corresponding to a sub-image block is a working area when the proportion of the predetermined color in that sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value.

In other embodiments, the main control module 31 may also perform texture analysis first and color recognition afterwards; as long as the proportion of the predetermined color in a sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-area corresponding to that sub-image block as working area 5. The above way of distinguishing working area 5 from non-working area 7 is merely exemplary; following a similar idea, the main control module 31 may also process the image with other algorithms, for example dividing the predetermined block into more sub-areas to improve the accuracy of position recognition, or changing the shape of the predetermined block, such as to a sector, to cover a wider field of view.
Taking color dispersion as an example, the specific texture analysis process is described below. The storage unit 318 stores the preset dispersion and the preset difference value.

After the color extraction unit 312 has determined whether each pixel is of the predetermined color, the texture extraction unit 315 marks all pixels of the predetermined color as 1 and all other pixels as 0; the texture extraction unit 315 calculates the gradient difference value of the mark values of every two adjacent pixels and determines whether the gradient difference value is greater than or equal to the preset difference value, e.g., 1; the texture extraction unit 315 then calculates the dispersion of all gradient difference values in the sub-area that are greater than or equal to the preset difference value, where the dispersion may specifically be computed as the range, the mean deviation or the standard deviation.

In another preferred embodiment, the texture extraction unit 315 calculates the gradient difference value of the hue values of every two adjacent pixels and determines whether it is greater than or equal to the preset difference value; the texture extraction unit 315 then calculates the dispersion of all gradient difference values in the sub-area that are greater than or equal to the preset difference value, again using the range, the mean deviation or the standard deviation.

The texture comparison unit 316 compares this dispersion with the preset dispersion to determine whether the dispersion reaches the preset dispersion.
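The hue-based variant can be sketched as follows; the preset difference and dispersion thresholds are illustrative assumptions, and standard deviation is used here although the text allows the range or the mean deviation as well.

```python
import numpy as np

def texture_dispersion(hue_block, preset_difference=2.0):
    """Dispersion (standard deviation) of the adjacent-pixel hue gradient
    differences that reach the preset difference value; hue_block is a 2-D
    array of per-pixel hue values for one sub-image block."""
    h = np.asarray(hue_block, dtype=np.float32)
    diffs = np.concatenate([np.abs(np.diff(h, axis=1)).ravel(),
                            np.abs(np.diff(h, axis=0)).ravel()])
    selected = diffs[diffs >= preset_difference]
    if selected.size == 0:
        return 0.0  # perfectly uniform surface, e.g. green paint
    return float(np.std(selected))

def is_working_texture(hue_block, preset_dispersion=5.0):
    """Texture test: the block counts as grass-like if its dispersion
    reaches the preset dispersion."""
    return texture_dispersion(hue_block) >= preset_dispersion
```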
The process by which the automatic walking device 1 of this embodiment controls its walking direction according to the working area determination result for at least one sub-area is described below.

The main control module 31 further comprises a steering control unit 319. When the middle area a is determined to be a working area, the steering control unit 319 maintains the walking direction of the automatic walking device 1; when the middle area a is determined to be a non-working area, the steering control unit 319 changes the walking direction of the automatic walking device 1 until the middle area a is determined to be a working area. This ensures that the automatic walking device 1 walks only within the working area 5 and does not run out of the working area 5.

Specifically, in this embodiment, when the middle area a is determined to be a non-working area, the steering control unit 319 controls the automatic walking device 1 to turn randomly to the left or to the right until the middle area a is determined to be a working area.

In other embodiments, the steering control unit 319 further adjusts the walking direction of the automatic walking device 1 according to the trend of the green proportion or of the green dispersion in the middle area a during turning. For example, when the automatic walking device 1 turns right and the green proportion or the green dispersion in the middle area a increases, the steering control unit 319 keeps the automatic walking device 1 turning right; conversely, if during a right turn the green proportion or the green dispersion in the middle area a decreases, the steering control unit 319 stops the right turn and then turns left.
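This trend-based rule can be written as a small control loop; the callbacks below (middle_is_working, green_ratio, turn_right, turn_left) are hypothetical hooks standing in for the image pipeline and the walking module, not interfaces from the patent.

```python
def reorient(middle_is_working, green_ratio, turn_right, turn_left):
    """Turn until the middle area a is recognized as working area, steering by
    the trend of the green proportion; all four callbacks are hypothetical."""
    previous = green_ratio()
    step = turn_right  # the initial turn direction could equally be chosen at random
    while not middle_is_working():
        step()
        current = green_ratio()
        if current < previous:
            # Trend worsening: stop this turn and turn the other way instead.
            step = turn_left if step is turn_right else turn_right
        previous = current
```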
Referring to Fig. 6, the present invention further provides a working area determination method for the automatic walking device 1. A first preferred embodiment of the working area determination method of the present invention comprises the following steps:

Step S101: the image collecting device 15 captures an image of the target area in front of the automatic walking device 1.

Step S102: the main control module 31 divides the image captured by the image collecting device 15 into a number of sub-image blocks. In this embodiment, the sub-image blocks are three blocks, a middle part, a left part and a right part, corresponding respectively to the middle area a, the left area b and the right area c of the target area.

Step S103: the main control module 31 extracts the color of each pixel of at least one sub-image block. In this embodiment, the main control module 31 extracts the three primary color (RGB) components of each pixel of every sub-image block.

Step S104: the main control module 31 identifies whether the color of each pixel of the sub-image block is the predetermined color. Step S105: the main control module 31 calculates the proportion of the predetermined color in the sub-image block.

In this embodiment, the predetermined color is green, and the main control module 31 stores the value ranges of the color components of the predetermined color, in particular of the three primary color components. If the color components of a pixel each fall within the corresponding value range of the predetermined color, the color extraction unit 312 determines that the pixel's color is the predetermined color. Within a sub-image block, the proportion calculation unit 313 divides the number of green pixels by the total number of pixels in the sub-image block to obtain the proportion of green pixels in that sub-image block.

Step S106: the main control module 31 determines whether the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value. If yes, go to step S107; otherwise go to step S110.

Step S107: the main control module 31 extracts the texture feature value of the sub-image block.

In this embodiment, the texture feature value is a parameter dispersion and the second preset value is a preset dispersion. The main control module 31 stores the preset dispersion and the preset difference value; the texture extraction unit 315 calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, and calculates the dispersion of all gradient differences in the sub-image block greater than the preset difference value.

Step S108: the main control module 31 determines whether the texture feature value of the sub-image block reaches or exceeds the second preset value. If yes, go to step S109; otherwise go to step S110.

Step S109: if the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-area corresponding to that sub-image block as working area 5.

Step S110: if the proportion of the predetermined color in the sub-image block is smaller than the first preset value or the texture feature value is smaller than the second preset value, the main control module 31 identifies the sub-area corresponding to that sub-image block as non-working area 7.
Referring to Fig. 7, a second preferred embodiment of the working area determination method of the present invention comprises the following steps: Step S201: the image collecting device 15 captures an image of the ground in front of the automatic walking device 1. Step S202: the main control module 31 divides the image captured by the image collecting device 15 into a number of sub-image blocks. In this embodiment, the sub-image blocks are three blocks, a middle part, a left part and a right part, corresponding respectively to the middle area a, the left area b and the right area c.

Step S203: the main control module 31 extracts the texture feature value of each sub-image block.

In this embodiment, the texture feature value is a parameter dispersion and the second preset value is a preset dispersion. The main control module 31 stores the preset dispersion and the preset difference value; the texture extraction unit 315 calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, determines whether the gradient difference is greater than the preset difference value, and calculates the dispersion of all gradient differences in the sub-image block greater than the preset difference value.

Step S204: the main control module 31 determines whether the texture feature value of the sub-image block reaches or exceeds the second preset value. If yes, go to step S205; otherwise go to step S210.

Step S205: the main control module 31 extracts the color of each pixel of at least one sub-image block. In this embodiment, the main control module 31 extracts the three primary color (RGB) components of each pixel of every sub-image block.

Step S206: the main control module 31 identifies whether the color of each pixel of the sub-image block is the predetermined color.

Step S207: the main control module 31 calculates the proportion of the predetermined color in the sub-image block.

In this embodiment, the predetermined color is green, and the main control module 31 stores the value ranges of the color components of the predetermined color, in particular of the three primary color components. If the color components of a pixel each fall within the corresponding value range of the predetermined color, the color extraction unit 312 determines that the pixel's color is the predetermined color. Within a sub-image block, the proportion calculation unit 313 divides the number of green pixels by the total number of pixels in the sub-image block to obtain the proportion of green pixels in that sub-image block.

Step S208: the main control module 31 determines whether the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value. If yes, go to step S209; otherwise go to step S210.

Step S209: if the proportion of the predetermined color in the sub-image block reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module 31 identifies the sub-area corresponding to that part as working area 5.

Step S210: if the proportion of the predetermined color in the sub-image block is smaller than the first preset value or the texture feature value is smaller than the second preset value, the main control module 31 identifies the sub-area corresponding to that sub-image block as non-working area 7.
After determining whether at least one sub-area is a working area, the working area determination method of this embodiment also controls the walking direction of the automatic walking device 1.

Referring to Fig. 8, when the middle area a is determined to be a working area, the main control module 31 controls the automatic walking device 1 to maintain its walking direction.

When the middle area a is determined to be a non-working area, the main control module 31 changes the walking direction of the automatic walking device 1 until the middle area a is determined to be a working area. This ensures that the automatic walking device 1 walks only within the working area 5 and does not run out of the working area 5.

Specifically, in this embodiment, when the middle area a is determined to be a non-working area, the main control module 31 controls the automatic walking device 1 to turn randomly to the left or to the right until the middle area a is determined to be a working area.

In other embodiments, the main control module 31 further adjusts the walking direction of the automatic walking device 1 according to the trend of the green proportion or of the green dispersion in the middle area a during turning.

Referring to Fig. 9, when the automatic walking device 1 turns right and the green proportion or the green dispersion in the middle area a increases, the main control module 31 keeps the automatic walking device 1 turning right; conversely, if during a right turn the green proportion or the green dispersion in the middle area a decreases, the main control module 31 stops the right turn and then turns left.

In the working area determination method of the present invention, the image collecting device 15 captures an image of the area in front of the automatic walking device 1, and the main control module 31 combines color recognition with texture analysis to determine whether at least part of the target area is a working area. This makes the working system simple and user-friendly to set up, and makes working area recognition flexible and convenient.
The automatic walking device 1 of the present invention can also find the boundary 6 from the distribution of working area 5 and non-working area 7 within the predetermined area, and control the automatic walking device 1 to return to the docking station 4 along the boundary 6. The present invention therefore also provides a method for returning an automatic walking device to a docking station.

Based on the above distinction between working area 5 and non-working area 7, the main control module 31 analyzes the predetermined image block of the image corresponding to the predetermined area to monitor whether a boundary appears in the predetermined area.

In this embodiment, the main control module 31 divides the predetermined image block into sub-image blocks corresponding to a number of sub-areas of the predetermined area and analyzes each sub-image block to identify the corresponding sub-area as either a working area or a non-working area; when one sub-area is a non-working area and its adjacent sub-area is a working area, the main control module determines that the boundary lies in that sub-area.

Specifically, if the analysis finds that the middle area a, the left area b and the right area c are all working areas, the automatic walking device 1 is judged to be within the working area 5 with no boundary 6 in visible range. If some of the areas are working area 5 and some are non-working area 7, the automatic walking device 1 is near the boundary 6, and the main control module 31 must further determine the positional relationship between itself and the boundary 6. If one sub-area is judged to be non-working area 7 and its adjacent sub-area is working area 5, the boundary 6 is judged to lie in that sub-area; since the actual extent of each sub-area is limited, the specific position of the boundary 6 can be determined in this way.

The above way of identifying the position of the boundary 6 is merely exemplary; following a similar idea, the main control module 31 may also process the video with other algorithms to identify the boundary, for example dividing the predetermined block into more sub-areas to improve the accuracy of boundary localization, changing the shape of the predetermined block, such as to a sector, to cover a wider field of view, or changing the size of the predetermined block to discover boundaries farther away.
After identifying the position of the boundary 6, the main control module 31 controls the walking module 17 to bring the automatic walking device 1 to the boundary position. If the actual coverage of the predetermined block is large, this step may take considerable time and maneuvering: for example, after the boundary 6 is found at the outermost edge of a large predetermined block divided into many sub-areas, the walking module drives the automatic walking device to walk until the middle area, or the adjacent areas closest to the middle area a, is a non-working area. If, as in this embodiment, the predetermined area is small and divided into only three sub-areas, the boundary 6 is already close to the automatic walking device 1 when it is discovered, and reaching the boundary position then merely involves keeping the automatic walking device in its current state and avoiding motions that move away from the boundary 6.

Once at the boundary 6, the main control module 31 continues to control the walking module 17 so that the automatic walking device 1 walks along the boundary 6. While walking along the boundary, the automatic walking device 1 must keep its heading consistent with the boundary 6; the main control module 31 therefore controls the walking module 17 to keep the housing 11 within the working area 5 with the boundary 6 on a specific side of the housing 11.

The main control module 31 places the area containing the boundary 6 on one side of the automatic walking device rather than in front of it, to achieve the heading adjustment. Specifically, the main control module acts so that the middle area a is a working area while the left area b or the right area c is a non-working area; the boundary 6 then lies in the left area b or the right area c rather than in the middle area a. The main control module 31 may keep the boundary 6 on either side of the automatic walking device, or on a specific side. In this embodiment, when adjusting the heading, the boundary 6 is kept on a specific side of the automatic walking device 1: the aforementioned middle area a remains working area 5, a specific one of the left area b and the right area c is non-working area 7, and the other is working area 5. Specifically, the main control module 31 controls the walking module 17 so that the middle area corresponding to the middle part remains recognized as a working area, while the left or right area corresponding to the left or right part is recognized as a non-working area containing the boundary.

After the heading has been adjusted, the main control module 31 keeps the heading and walking direction of the automatic walking device 1 consistent with the boundary 6: it controls the walking module 17 to keep the middle area recognized as a working area and the left or right area recognized as a non-working area containing the boundary, so that the sub-area containing the boundary 6 always stays on one side of the automatic walking device 1; that is, the middle area a remains working area 5, and one of the left area b and the right area c is non-working area 7 while the other is working area 5.
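Keeping the boundary on one side reduces to a three-way classification driving a steering decision. The sketch below assumes the boundary is kept on the right side; the command strings are placeholders for whatever interface the walking module actually exposes.

```python
def follow_boundary_step(middle_working, left_working, right_working):
    """One control step of boundary following with the boundary kept on the
    right: middle area on grass, right area containing the boundary."""
    if not middle_working:
        return "turn_left"    # about to leave the working area: steer inward
    if right_working:
        return "turn_right"   # boundary drifted out of the right area: steer back
    return "straight"         # heading agrees with the course of the boundary
```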
Referring again to Fig. 2, the main control module 31 further comprises a boundary recognition unit 321 and a docking station recognition unit 323, which are introduced in turn below.

The boundary recognition unit 321 determines whether the boundary 6 currently being followed is correct, that is, whether it leads to the docking station 4. Besides the working area 5, an island 71 surrounded by the working area 5 also has a boundary 6. As shown in Fig. 12, if the automatic walking device 1 first finds the boundary 6 of an island 71 when searching for the boundary 6, it may circle the island 71 endlessly, unable to break away and return to the docking station 4. To avoid this, while walking along the boundary 6, the boundary recognition unit 321 also determines whether the boundary 6 currently being followed by the automatic walking device 1 is the boundary 6 of the working area 5. If the result is yes, the main control module 31 controls the walking module 17 so that the automatic walking device 1 continues along that boundary 6; if no, the main control module 31 controls the walking module 17 so that the automatic walking device 1 leaves the boundary 6 currently being followed and searches for another boundary 6 instead.

The boundary recognition unit 321 judges whether the current boundary is correct by comparing the actual walking direction of the automatic walking device 1 with the theoretical walking direction when following a correct boundary. As stated above, in this embodiment the automatic walking device 1 always keeps the boundary 6 on a specific side of itself while returning along the boundary 6. Taking the case where the boundary 6 is kept on the right side of the automatic walking device 1: if the automatic walking device 1 is on the outer boundary 6 of the working area 5, it walks inside the boundary 6 and its walking direction is counterclockwise; if it is on the outer boundary of an island 71, it walks outside the boundary 6 and its walking direction is clockwise. The preset standard result is set according to this correspondence: if the specific side is the left side, the theoretical walking direction is clockwise; if the specific side is the right side, the theoretical walking direction is counterclockwise.

The boundary recognition unit 321 first determines the walking direction of the automatic walking device 1 within a preset time or a preset distance, expressed as clockwise or counterclockwise. This walking direction is obtained by calculating the accumulated deflection of the automatic walking device 1 within the preset time or distance and comparing the accumulated deflection with a preset value; the accumulated deflection is the accumulated wheel difference between the distances traveled by the left wheel 131 and the right wheel 132 of the automatic walking device 1, or the accumulated deflection angle of the automatic walking device 1.

The boundary recognition unit 321 then compares the judged result with the preset standard result in the storage unit 318, that is, with the theoretical walking direction when walking along a correct boundary 6. If the comparison shows that the actual and theoretical walking directions agree, the boundary recognition unit 321 judges that the boundary 6 currently being followed is correct and leads to the docking station 4; if they disagree, it judges that the boundary currently being followed is incorrect and does not lead to the docking station 4.

The docking station recognition unit 323 recognizes whether the automatic walking device 1 has approached or reached the docking station 4; once it recognizes the docking station 4, the main control module 31 controls the walking module so that the automatic walking device 1 walks toward the docking station and docks. The docking station recognition unit 323 can be implemented in many forms: it may monitor whether the docking station 4 appears in the images collected by the image collecting device 15, and upon detecting the docking station 4 the main control module 31 controls the walking module 17 so that the automatic walking device 1 drives toward the docking station 4; alternatively, an electromagnetic or other type of proximity sensor may be used to send a prompt signal to the automatic walking device 1 when the docking station 4 and the automatic walking device 1 come close, which is not elaborated here.
The method by which the automatic walking device 1 returns toward the docking station 4 is described in detail below with reference to Fig. 13. After the return procedure starts, the automatic walking device 1 first enters step S0 and keeps walking; while keeping walking, it executes step S1, monitoring whether the boundary 6 appears in the images collected by the image collecting device 15, and the automatic walking device 1 keeps walking during the monitoring. If the main control module 31 finds no boundary 6 in the images collected by the image collecting device 15, it continues step S0 and keeps monitoring for the boundary 6; if the main control module 31 finds the boundary 6 in the images collected by the image collecting device 15, the process enters step S2: the position is adjusted so that the automatic walking device 1 is at the boundary 6 with its heading consistent with the course of the boundary 6. In this embodiment, because the predetermined area of the image collecting device 15 is small, the automatic walking device 1 is already close to the boundary 6 when the boundary 6 is detected; the workload of step S2 is then small, requiring only that the device adjust its position to approach the boundary 6.

Monitoring whether the images collected by the image collecting device 15 include the boundary 6 can be achieved by the following steps.

First, the predetermined image block is divided into sub-image blocks corresponding to a number of sub-areas of the predetermined area;

then, each sub-image block is analyzed to identify the corresponding sub-area as either working area 5 or non-working area 7;

when one sub-area is non-working area 7 and its adjacent sub-area is working area 5, the boundary 6 is judged to lie in that sub-area.

As shown in Figs. 10 and 11, after step S2 is completed, the process enters step S4, walking along the boundary 6. Walking along the boundary 6 may specifically be done astride the boundary 6 or on one side of it; in this embodiment, for walking stability, the automatic walking device 1 walks on a specific side of the boundary 6, and while walking along the boundary the housing is kept within the working area with the boundary on a specific side of the housing. That is, the aforementioned middle area a remains working area 5, a specific one of the left area b and the right area c is non-working area 7, and the other is working area 5. Specifically, the main control module 31 controls the walking module 17 so that the middle area remains recognized as a working area while the left or right area is recognized as a non-working area containing the boundary. The automatic walking device 1 adjusts its heading so that the boundary 6 lies entirely on the specific side, that is, in the left area b or the right area c, and then walks along that heading. After the heading has been adjusted, the main control module 31 keeps the heading and walking direction of the automatic walking device 1 consistent with the boundary 6, controlling the walking module 17 so that the sub-area containing the boundary 6 always stays on one side of the automatic walking device 1.

While walking, the image collecting device 15 still collects images in real time. If the boundary 6 deviates from the left area b or the right area c, the heading of the automatic walking device 1, that is, its walking direction, no longer agrees with the course of the boundary 6, and the automatic walking device 1 adjusts its heading again so that the boundary 6 lies in the left area b or the right area c. By walking and adjusting its direction in real time in this way, the automatic walking device 1 achieves walking along the boundary 6. Because the docking station 4 is arranged on the boundary 6 of the working area 5, the automatic walking device 1 can finally return to the docking station 4 if it walks along the boundary 6 of the working area 5.

While keeping walking along the boundary 6, the automatic walking device 1 enters step S6, monitoring whether the docking station 4 appears in the images collected by the image collecting device 15. If the main control module 31 analyzes the images and does not find the docking station 4, no action is taken and walking and monitoring for the docking station 4 continue; if the main control module 31 finds the docking station 4, the process enters step S8: the main control module 31 controls the automatic walking device 1 to walk toward the docking station 4, adjust its heading to face the docking station 4 directly, and dock with the docking station 4; after docking is confirmed, shutdown, charging and other actions are performed.
The method by which the automatic walking device 1 recognizes whether the boundary 6 currently being followed leads to the docking station 4 is described in detail below with reference to Fig. 14.

In the boundary judgment flow, the main control module 31 first executes step S4, walking along the boundary 6.

While walking along the boundary 6, step S5 is executed: the boundary recognition module 321 determines the walking direction of the automatic walking device 1 within a preset time or a preset distance.

Step S5 can be decomposed into two sub-steps: 1. calculating the accumulated deflection of the automatic walking device 1 within the preset time or preset distance; and 2. comparing the accumulated deflection with the preset value to determine the walking direction of the automatic walking device 1.

In sub-step 1 above, the accumulated deflection is the degree to which the automatic walking device 1 deviates from a straight line while traveling, in other words the accumulated deflection angle. The accumulated deflection may be expressed as a deviation distance or a deviation angle. For example, if within a certain time or traveling distance the automatic walking device 1 deviates 5 m to the left and then 7 m to the right, the accumulated deflection may be expressed as 2 m to the right; likewise, if the automatic walking device 1 turns 15° clockwise and then 12° counterclockwise, the accumulated deflection may be expressed as 3° clockwise.

In this embodiment, the boundary recognition module 321 calculates the accumulated deflection by accumulating the difference between the traveling distances of the left wheel 131 and the right wheel 132. Specifically, speed sensors are provided at the left and right drive motors of the left wheel 131 and the right wheel 132; the speed sensors pass the collected speed information to the connected main control module 31, from which the main control module 31 can calculate the distances traveled by the left wheel 131 and the right wheel 132 within a certain time or distance, and hence the difference between the traveled distances of the left and right drive wheels, which represents the accumulated deflection. In other embodiments, the accumulated deflection may also be calculated by accumulating the deflection angle of the automatic walking device 1. Specifically, an angle sensor is provided in the automatic walking device 1; it continuously detects the deflection direction and angle of the automatic walking device 1 and sends the data to the connected main control module 31, whose boundary recognition module 321 can then calculate from that data the accumulated deflection angle, representing the accumulated deflection, within a certain time or distance.

After the accumulated deflection of the automatic walking device 1 within a certain time or traveling distance has been calculated, the process enters sub-step 2 above: the boundary recognition module 321 compares the accumulated deflection with the preset value to determine the walking direction of the automatic walking device 1. In the ideal case, the preset value can be set to 0, so that only the sign of the distance or angle value need be judged: for example, a positive distance or angle means the walking direction is clockwise, and a negative one means it is counterclockwise. To ensure the accuracy of the calculation, however, the preset value may also be set as an interval, for example (0±10) m or (0±180°); when the accumulated deflection lies outside the interval, the walking direction is judged from its value, and when the accumulated deflection lies inside the interval, the accumulated deflection is recalculated. Recalculation can be done in various ways, for example restarting a new cycle of the preset time or preset distance, extending the preset time or preset distance, or taking rolling values, that is, continuously shifting the starting point of the preset time or preset distance forward as time or distance increases.
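The wheel-differential variant of this direction test can be condensed as follows; the dead-band value is illustrative, and the sign convention (right wheel traveling farther means the device curves left, i.e. counterclockwise) is an assumption consistent with the geometry described above.

```python
def walking_direction(left_distances, right_distances, dead_band=10.0):
    """Classify the walking direction over a preset window from the
    accumulated wheel difference; returns None inside the dead band,
    signalling that the window should be recomputed or extended."""
    diff = sum(right_distances) - sum(left_distances)
    if abs(diff) <= dead_band:
        return None
    return "counterclockwise" if diff > 0 else "clockwise"

def boundary_leads_to_dock(direction, boundary_side="right"):
    """Compare with the preset standard result: boundary kept on the right
    implies a correct working-area boundary is walked counterclockwise."""
    expected = "counterclockwise" if boundary_side == "right" else "clockwise"
    return direction == expected
```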
In step S5, the walking direction of the automatic walking device 1 is finally obtained; the process then enters step S7, where the calculation result of S5 is compared with the preset standard result. If they agree, it is judged that the boundary 6 currently being followed connects to the docking station 4; if they disagree, it is judged that the boundary 6 currently being followed does not connect to the docking station 4.
If the judgment is that the boundary 6 currently being followed connects to the docking station 4, the process returns to step S4 and the automatic walking device 1 continues walking along the boundary 6; if the judgment is that the boundary 6 currently being followed does not connect to the docking station 4, the process enters step S9: the automatic walking device 1 leaves the current boundary 6 and returns to the flow of searching for the boundary 6.
The automatic walking device 1 of the present invention can also detect, by means of the ultrasonic detection device 16, whether an obstacle 73 exists in the preset area ahead. The present invention further provides an obstacle detection method for an automatic walking device.

Referring to Fig. 15, the ultrasonic detection device 16 is arranged on the housing 11 and mounted horizontally facing forward, for detecting whether an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.

The ultrasonic detection device 16 may comprise a transmitter and a receiver. The transmitter emits an ultrasonic wave; when the wave meets an object, an echo is produced, and when the receiver receives the echo it can be judged that a solid object exists ahead. Of course, the ultrasonic detection device 16 may also be a single ultrasonic sensor performing the dual role of transmitting and receiving sound waves.

The main control module 31 comprises a processing unit (not shown) and the storage unit 318. The processing unit receives the ground environment image information acquired by the image collecting device 15 and the environment information detected by the ultrasonic detection device 16; after processing, it compares them with the obstacle parameters preset in the storage unit 318 and, based on the comparison results, controls the walking module 17 and the working module 19 through the control unit 142 to walk and work.

While walking, the automatic walking device 1 acquires image information of the predetermined area in front of the automatic walking device 1 through the image collecting device 15 and transmits the collected image information to the processing unit; the processing unit analyzes the information in the image to judge the attributes of each part of that area, and can judge whether the area in front of the automatic walking device 1 belongs to the working area or to the non-working area.

In this embodiment, the processing unit extracts, from each region in the image information, the three primary color (RGB) component values of every pixel of that region's image. The storage unit 318 prestores intervals of component values for different colors; the processing unit compares the component values extracted from the image captured by the image collecting device 15 with the intervals of component values of the different colors prestored in the storage unit 318 to judge which color each pixel belongs to, and, from the numbers of pixels belonging to the different colors, calculates the proportions of the different colors in the image captured by the image collecting device 15. The storage unit 318 also prestores the color proportion threshold corresponding to the working area 5; the processing unit compares the calculated color proportions of each region with the prestored color proportion threshold to judge which regions belong to the working area 5 and which to the non-working area.

In this embodiment, the working area is a lawn; the processing unit divides the number of green pixels of each region in the image information by the total number of pixels of that region to calculate the proportion of green pixels in each region. When the proportion of green pixels in any one of the middle area a, the left area b and the right area c is smaller than the prestored color proportion threshold, a non-working area of the automatic working device 1 exists in that region.

The processing unit may also extract texture information from the image of each region for analysis. For example, the existing gray-level co-occurrence matrix analysis method or the Tamura texture feature analysis method may be used to obtain the texture features of each region of the image. The gray-level co-occurrence matrix method can extract four features of the image, namely energy, inertia, entropy and correlation, and the Tamura texture feature analysis method can extract six features, namely coarseness, contrast, directionality, line-likeness, regularity and roughness.

The storage unit 318 prestores texture feature values of a predetermined texture; the processing unit compares the texture feature values of each region of the image with the texture feature values of the predetermined texture. If the texture feature values of a region of the image agree with those of the predetermined texture, that region is judged to be a working area; if they do not agree, the region is judged to be a non-working area.

Of course, the automatic walking device 1 may recognize working and non-working areas by color information or by texture features alone, or by combining color information and texture features; the processing unit may recognize color information first and then combine it with texture information for the judgment. In this embodiment, the lawn serves as working area 5, and the color of a lawn should be green; a non-working area may be soil, a cement floor or another type of ground paving, or an obstacle 73 such as a tree or a stone in the lawn. The color of a non-working area usually differs from that of a lawn; even where the color is green, artificially made objects, such as artificially laid ground, have a relatively regular texture, whereas grass has no obviously regular texture, so the texture of the captured image can further confirm whether the target area is a working area. When the processing unit recognizes, in the rectangular area, a part whose color is green and whose texture is irregular, it judges that part to be working area 5; when the processing unit recognizes in the rectangular area a region whose color is not green or whose texture shows regularity, a non-working area exists in the rectangular area. Of course, the processing unit may also perform texture analysis first and then combine it with color recognition for the judgment.
Based on the image information, the processing unit can calculate information such as the length, width and area of the non-working area image. This information about the non-working area image may be obtained by counting pixels in the image, or by establishing a coordinate system and calculating with preset polygon perimeter and area formulas. Of course, it may also be calculated by calculus or by other methods, which are not enumerated one by one here.

The storage unit 318 presets a conversion algorithm between image size and actual size; a certain proportional relationship exists between image size and actual size, by which the actual size can be calculated from the image size, and the image size can likewise be calculated from the actual size. From the length, width and area of the non-working area image, the processing unit calculates, according to the preset conversion algorithm, the size parameters of the non-working area, which include the length, width and area of the non-working area.
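One of the mentioned routes, a polygon area formula over the region's outline, is sketched below with the shoelace formula; the image-to-ground scale factor is a hypothetical calibration constant standing in for the stored conversion algorithm.

```python
def polygon_area(vertices):
    """Shoelace formula: area enclosed by ordered (x, y) outline vertices
    of the non-working region, in pixel units."""
    area2 = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area2 += x0 * y1 - x1 * y0
    return abs(area2) / 2.0

IMAGE_TO_GROUND_AREA_SCALE = 0.0004  # hypothetical m^2 per pixel^2

def actual_area(vertices):
    """Convert the image-space area to an actual ground area."""
    return polygon_area(vertices) * IMAGE_TO_GROUND_AREA_SCALE
```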
The storage unit 318 stores preset values of the size parameters of the non-working area, including a length preset value, a width preset value and an area preset value. When any one of the length, width and area of the non-working area exceeds its corresponding preset value, the main control module 31 considers that the automatic walking device 1 has reached the boundary 6; when the length, width and area of the non-working area are all smaller than their corresponding preset values, the automatic walking device 1 further performs obstacle detection with the ultrasonic detection device 16. Preferably, the preset values are the length, width and area of the projection of the automatic walking device 1 onto the working area.

Of course, the preset values of the size parameters of the non-working area stored in the storage unit 318 may also include only a width preset value: when the width of the non-working area exceeds the width preset value, the main control module 31 considers that the automatic walking device 1 has reached the boundary 6; when the width of the non-working area is smaller than the width preset value, the automatic walking device 1 further performs obstacle detection with the ultrasonic detection device 16.

The ultrasonic detection device 16 emits an ultrasonic wave; when the wave meets an object, an echo is produced and received by the receiver, and the processing unit counts the time taken from emitting the ultrasonic wave to receiving the echo. The storage unit 318 stores a preset time threshold from emission of the ultrasonic wave to reception of the echo, used to confine the detection range of the ultrasonic detection device 16 within a certain region. When the time from emission to reception of the echo exceeds the preset time threshold, the echo was returned by an object beyond the preset ultrasonic detection region, possibly an echo returned by a distant object or by the ultrasonic wave meeting the ground, and the processing unit considers such echoes invalid; when the time from emission to reception of the echo is less than the preset time threshold, the echo was returned by an object within the preset ultrasonic detection region, the processing unit considers such echoes valid, and it judges that an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.
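The time-threshold gate can be expressed in a few lines; the speed of sound and the example threshold are illustrative, with 0.012 s confining detection to roughly 2 m of round-trip range.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degC

def echo_is_valid(echo_time_s, time_threshold_s=0.012):
    """Valid (in-range) echo only if it returns within the preset threshold."""
    return 0.0 < echo_time_s < time_threshold_s

def echo_distance(echo_time_s):
    """Distance to the reflecting object: half the round-trip path."""
    return SPEED_OF_SOUND * echo_time_s / 2.0
```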
Referring to Fig. 16, the obstacle detection method of the automatic walking device 1 provided by the present invention comprises the following steps: Step S300: acquire image information.

The automatic walking device 1 walks and works within the working area 5; the image collecting device 15 captures an image of the rectangular area in front of the automatic walking device 1 and transmits the collected image to the main control module 31 for processing.

Step S301: recognize the color and texture of the image information.

The processing unit analyzes the image captured by the image collecting device 15 and recognizes the color and texture of each region of the image.

Step S302: determine whether a non-working area exists ahead.

From the recognized color, texture and other information, the processing unit compares them with the values preset in the storage unit 318 and determines whether a non-working area exists in the rectangular area.

When a non-working area appears in front of the automatic walking device 1, go to step S303; otherwise, return and continue with step S300.

Step S303: recognize the size of the non-working area.

The processing unit calculates the size of the non-working area in the rectangular area according to a preset algorithm, for example the length, width or area of the non-working area.

Step S304: determine whether the size of the non-working area is smaller than the preset value.

The processing unit compares the calculated size of the non-working area in the rectangular area with the preset non-working area size in the storage unit 318.

When the size of the non-working area in the rectangular area is smaller than the preset value stored in the storage unit 318, go to steps S305-S308 and detect with the ultrasonic detection device; when the size of the non-working area in the rectangular area is greater than the preset value, the automatic walking device 1 considers that it has reached the boundary of the working area and may perform boundary-related work, such as moving away from the boundary or walking along the boundary line, which is not elaborated here.
Step S305: emit an ultrasonic wave and start timing.

The ultrasonic detection device 16 emits an ultrasonic wave and the processing unit starts timing; when the ultrasonic wave meets an object, it rebounds and forms an echo.

Step S306: receive the echo and calculate the time.

The echo can be received by the ultrasonic detection device 16; when the echo reaches the ultrasonic detection device 16, the processing unit calculates the time taken from emitting the ultrasonic wave to receiving the echo.

Step S307: determine whether the counted time is less than the preset value.

The storage unit 318 stores a preset time threshold from emission of the ultrasonic wave to reception of the echo. When the time from emission to reception of the echo exceeds the preset time threshold, the processing unit considers such an echo invalid and returns to step S300; when the time from emission to reception of the echo is less than the preset time threshold, the processing unit judges that an obstacle 73 exists in the preset area in front of the current position of the automatic walking device 1.

Step S308: an obstacle exists; avoid it.

When the automatic walking device 1 judges that an obstacle exists ahead, it takes avoiding action. When a non-working area appears only in the middle area a of the rectangular area, the automatic walking device 1 may detour around the obstacle 73 via either the left area b or the right area c; otherwise, the automatic walking device 1 detours around the obstacle 73 via whichever of the left area b and the right area c shows no non-working area. In short, the automatic walking device 1 always detours around the obstacle 73 on the side of the rectangular area where no non-working area appears.

Of course, some steps of the obstacle detection method of the automatic walking device 1 can be adjusted. For example, in this embodiment the image information is used first to detect the non-working area; alternatively, the object information within the preset ultrasonic detection region may first be detected by the ultrasonic detection device, and the image information then used to help determine the obstacle information. In this embodiment, the ultrasonic detection step is entered when the width of the non-working area in the rectangular area is smaller than the preset width of the automatic walking device 1; those skilled in the art will appreciate that this is not a necessary condition for ultrasonic detection, which may also be performed from start to finish, likewise achieving the effect of preventing the automatic walking device 1 from colliding with the obstacle 73 during work while keeping recognition accuracy relatively high.

The automatic walking device and obstacle detection method provided by the present invention enable the automatic walking device to recognize obstacles within the working area by means of the image collecting device and the ultrasonic detection device, without colliding with the obstacles directly during recognition, so that the automatic walking device is less liable to damage from collisions with obstacles and recognizes obstacles with relatively high accuracy.
The present invention further provides an automatic working system capable of docking automatically with the docking station 4, and a docking method for docking the automatic walking device with the docking station.

The automatic walking device 1 can automatically return to the docking station 4 and automatically dock with the docking station 4. The manner in which the automatic walking device 1 returns to the docking station 4 may be based on video technology, on the boundary, on GPS, on a guide wire, and so on. When the automatic walking device 1 returns to the docking station 4 based on video technology, during the return the automatic walking device 1 acquires environment image information around its current position through the image collecting device and monitors whether the boundary 6 appears in the environment image information. When the boundary 6 appears in the environment image information, the automatic walking device 1 is driven to walk on a specific side of the boundary 6. While walking, the image collecting device still collects image information around the current position of the automatic walking device 1 in real time; when the walking direction of the automatic walking device 1 is found to deviate from the boundary 6, the walking angle is adjusted, ensuring that the automatic walking device 1 always walks along the boundary 6. Because the docking station 4 is arranged on the boundary 6 of the working area 5, the automatic walking device 1 can finally return to the vicinity of the docking station 4 if it walks along the boundary 6.

How automatic docking with the docking station 4 is achieved after the automatic walking device 1 has returned to the vicinity of the docking station 4 is described in detail below with reference to Figs. 17 and 18.

As shown in Fig. 17, the automatic walking device 1 comprises the image collecting device 15, the main control module 31 and the walking module 17. The image collecting device 15 is arranged on the outer surface of the automatic walking device 1, collects environment image information around the current position of the automatic walking device 1 and passes the collected environment image information to the main control module 31. After the automatic walking device 1 has returned to the vicinity of the docking station 4, the image collecting device 15 can collect image information of the docking station 4, so the environment image information contains the image information of the docking station 4. The main control module 31 receives the environment image information passed by the image collecting device 15 and comprises a first determining component 3150, a second determining component 3170, a signal sending unit 3190 and the storage unit 318. The storage unit 318 stores preset parameters; the first determining component 3150 determines from the environment image information and the preset parameters whether a docking station 4 exists around the current position of the automatic walking device 1; the second determining component 3170 determines from the environment image information and the preset parameters whether the automatic walking device 1 is directly facing the docking station 4; the signal sending unit 3190 sends corresponding control signals according to the determination results of the first determining component 3150 and the second determining component 3170. The walking module 17 receives the control signals and drives the walking of the automatic walking device 1 according to the control signals. When the first determining component 3150 determines that no docking station 4 exists around the current position of the automatic walking device 1, the signal sending unit 3190 sends a control signal to the walking module 17 so that the walking module 17 drives the automatic walking device 1 to rotate a preset angle and then continue walking. When the first determining component 3150 determines that a docking station 4 exists around the current position of the automatic walking device 1 but the second determining component 3170 determines that the automatic walking device 1 is not directly facing the docking station 4, the signal sending unit 3190 sends a control signal to the walking module 17 so that the walking module 17 drives the automatic walking device 1 to rotate a preset angle and then continue walking. When the first determining component 3150 determines that a docking station 4 exists around the current position of the automatic walking device 1 and the second determining component 3170 determines that the automatic walking device 1 is directly facing the docking station 4, the signal sending unit 3190 sends a control signal to the walking module 17 so that the walking module 17 drives the automatic walking device 1 to continue walking at the current angle, achieving automatic docking of the automatic walking device 1 with the docking station 4.

The way the modules of the automatic walking device 1 work together is shown in Fig. 18.

The automatic walking device 1 enters step S500 for initialization. After step S500, it enters step S502 and starts the image collecting device 15.
步骤 S502之后, 进入步骤 S504, 图像釆集装置 15开始釆集自动行走设备 1 当前位置周围的环境图像信息, 并将釆集到的环境图像信息传递给主控模块 31的第一判断组件 3150和第二判断组件 3170。本领域技术人员可以理解的是, 图像釆集装置 15与主控模块 31之间可以同过电性接触的方式进行信号传递, 也可以通过非电性接触的方式进行信号传递,图像釆集装置 15可以设置在自动 行走设备 1上, 也可以设置在自动行走设备 1以外的其他地方。
步骤 S504之后, 进入步骤 S506, 主控模块 31 的第一判断组件 3150根据 接收到的环境图像信息和存储单元 318存储的预设参数判断, 自动行走设备 1 当前位置周围是否存在停靠站 4, 当判断结果为是时, 进入步骤 S508; 反之, 当判断结果为否时, 进入步骤 S510。
步骤 S508 中, 主控模块 31 的第二判断组件 3170根据接收到的环境图像 信息和存储单元 318存储的预设参数判断, 自动行走设备 1与停靠站 4是否正 对, 当判断结果为是时, 进入步骤 S512; 反之, 当判断结果为否时, 进入步骤 S510。
步骤 S510中, 主控模块 31 的信号发送单元 3190接收第一判断组件 3150 和第二判断组件 3170发送的信号, 并根据第一判断组件 3150和第二判断组件 3170的判断结果发出相应的控制信号给行走模块 17 , 从而控制行走模块 17驱 动自动行走设备 1旋转预设的角度,使得图像釆集装置 15能从新的角度釆集自 动行走设备 1 当前位置周围的环境图像信息,以便于主控模块 31能根据新的环 境图像信息判断自动行走设备 1 当前位置周围是否存在停靠站 4。
步骤 S512中, 主控模块 31 的信号发送单元 3190接收第一判断组件 3150 和第二判断组件 3170发送的信号, 并根据第一判断组件 3150和第二判断组件 3170的判断结果发出相应的控制信号给行走模块 17 , 从而控制行走模块 17驱 动行走行走设备保持当前的行走方向朝停靠站 4靠近, 即保持沿与停靠站 4正 对的方向朝停靠站 4靠近, 从而实现与停靠站 4的自动对接。
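The flow of FIG. 18 amounts to a rotate-and-look loop followed by a straight approach. A sketch of that loop (all names are illustrative assumptions: the two predicates stand in for the first and second judgment components, and `drive` stands in for the walking module):

```python
def docking_loop(camera, drive, station_detected, station_centered,
                 turn_deg: float = 15.0) -> None:
    """Rotate by a preset angle until the station is seen and directly
    faced, then keep the current heading to approach and dock."""
    while True:
        frame = camera.capture()              # S504: collect environment image
        if not station_detected(frame):       # S506: station around?
            drive.rotate(turn_deg)            # S510: turn, look again
        elif not station_centered(frame):     # S508: directly facing?
            drive.rotate(turn_deg)            # S510: turn, look again
        else:
            drive.forward()                   # S512: keep heading, approach
            return
```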
The foregoing embodiment only outlined that the first judgment component 3150 may determine, from the environmental image information and the preset parameters stored in the storage unit 318, whether a docking station 4 exists around the current position of the automatic walking device 1. A preferred embodiment of this judgment is described in detail below with reference to FIGS. 19 to 22.
FIGS. 19 and 20 show a preferred embodiment for recognizing whether a docking station 4 exists around the automatic walking device 1, where FIG. 19 is its circuit block diagram and FIG. 20 its workflow. In this embodiment, the first judgment component 3150 first makes a preliminary judgment on whether a docking station 4 exists around the automatic walking device 1 by recognizing whether the environmental image information contains a preset color, and then makes a precise judgment by extracting the contour of the sub-region having the preset color and matching that contour against a preset contour.
As shown in FIG. 19, the first judgment component 3150 includes a color recognition unit 3151, a region extraction unit 3152, a contour acquisition unit 3153 and a contour judgment unit 3155. The color recognition unit 3151 recognizes whether the environmental image information collected by the image collection device 15 contains the preset color; when it does, the color recognition unit 3151 outputs a corresponding electrical signal to the region extraction unit 3152. On receiving this signal, the region extraction unit 3152 extracts the sub-region having the preset color from the environmental image information and passes the extracted image information to the contour acquisition unit 3153. The contour acquisition unit 3153 obtains the contour of the sub-region from the image information passed by the region extraction unit 3152 and passes the contour information to the contour judgment unit 3155. The contour judgment unit 3155 compares the contour of the sub-region with the preset contour to determine whether they match; when the contour of the sub-region matches the preset contour, the first judgment component 3150 determines that a docking station 4 exists around the current position of the automatic walking device 1.
Specifically, as shown in step S520, the color recognition unit 3151 recognizes the color values contained in the environmental image information. Typically, the environmental image information consists of a number of pixels, and the color value of each pixel can be recognized from its RGB values. Alternatively, the color value of each pixel may be recognized from its HSV values.
After step S520, proceed to step S522: the color recognition unit 3151 determines whether the environmental image information contains the preset color. If yes, proceed to step S524; if no, proceed to step S540. The preset color is the color of the docking station 4; it may be expressed in RGB or in HSV, depending on the form in which the color recognition unit 3151 recognizes the color values of the environmental image information. Typically, by comparing the color of each pixel of the environmental image information with the preset color one by one, the color recognition unit 3151 can determine whether the environmental image information contains the preset color, thereby making the preliminary judgment on whether a docking station 4 exists around the current position of the automatic walking device 1.
In step S524, the region extraction unit 3152 extracts the sub-region having the preset color from the environmental image information, typically through color-space distance and similarity calculations. Specifically, since the environmental image information collected by the image collection device 15 is generally in RGB format, the image is first converted from the RGB color model to the HSV color model; the image is then segmented by color using color-space distance and similarity, setting the sub-region of the preset color as white foreground and the remaining regions as black background; finally, the foreground pixels of the color-segmented image are counted by row or column and the histogram is projected horizontally or vertically to determine the coordinates of the desired color region, so that the sub-region having the preset color is extracted from the original environmental image information.
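A compact sketch of this segmentation-and-projection step using OpenCV (the HSV bounds are placeholder values for an assumed docking station color, not values from the disclosure):

```python
import cv2
import numpy as np

def extract_preset_color_region(bgr: np.ndarray,
                                lower=(100, 80, 80), upper=(130, 255, 255)):
    """Segment the preset color in HSV, then locate the sub-region by
    horizontal and vertical projection of foreground pixel counts."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)                 # RGB model -> HSV model
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))  # foreground white, background black
    cols = mask.sum(axis=0)                                    # vertical projection
    rows = mask.sum(axis=1)                                    # horizontal projection
    xs, ys = np.nonzero(cols)[0], np.nonzero(rows)[0]
    if xs.size == 0 or ys.size == 0:
        return None                                            # preset color absent
    x0, x1, y0, y1 = xs[0], xs[-1], ys[0], ys[-1]
    return bgr[y0:y1 + 1, x0:x1 + 1], (x0, y0, x1, y1)         # sub-region and its coordinates
```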
After step S524, proceed to step S526: the contour acquisition unit 3153 obtains the contour of the sub-region having the preset color. The contour of the sub-region includes its boundary contour and its inner contour; the boundary contour corresponds to the outer structure of the docking station 4, and the inner contour corresponds to the structure of the characteristic parts of the outer surface of the docking station 4. Typically, the contour acquisition unit 3153 can obtain the contour of the sub-region by applying grayscale processing and gradient difference processing to the image information. As shown in FIG. 21, the contour acquisition unit 3153 further includes a grayscale processing circuit 3153a and a gradient difference processing circuit 3153b; correspondingly, step S526 further includes steps S528 and S530.
As shown in step S528, the grayscale processing circuit 3153a performs grayscale processing on the sub-region according to the preset color to obtain a grayscale image, and passes the result to the gradient difference processing circuit 3153b.
As shown in step S530, on receiving the grayscale image, the gradient difference processing circuit 3153b performs gradient difference processing on it to obtain the contour of the sub-region. Specifically, this comprises two gradient difference passes and one refinement pass: the gradient difference processing circuit 3153b first applies gradient difference processing to the grayscale image to obtain a texture image of the sub-region, then applies gradient difference dilation to the texture image to generate a contour band, and finally refines the contour band to obtain the contour.
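One way to read this pipeline (a sketch under assumed parameters, not the patented circuit): a first difference pass marks texture edges, a dilation widens them into a contour band, and an erosion refines the band back toward a thin contour:

```python
import cv2
import numpy as np

def contour_from_gray(gray: np.ndarray, edge_thresh: int = 30) -> np.ndarray:
    """Gradient-difference contour extraction in three passes (illustrative)."""
    g = gray.astype(np.int16)
    # Pass 1: gradient differences between neighboring pixels -> texture image
    dx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    dy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    texture = ((dx + dy) > edge_thresh).astype(np.uint8) * 255
    # Pass 2: dilate the texture image into a contour band
    band = cv2.dilate(texture, np.ones((3, 3), np.uint8), iterations=2)
    # Refinement: erode the band toward a thin contour
    return cv2.erode(band, np.ones((3, 3), np.uint8), iterations=1)
```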
After step S526, proceed to step S532: the contour judgment unit 3155 determines whether the contour of the sub-region matches the preset contour. The contour judgment unit 3155 may do so by matching every detail of the sub-region's contour against every detail of the preset contour, or by extracting feature quantities of the sub-region's contour and determining whether they match preset feature quantities, the preset feature quantities being those corresponding to the preset contour. In this embodiment, the match between the contour of the sub-region and the preset contour is judged through feature quantities. As shown in FIG. 22, the contour judgment unit 3155 includes a feature quantity acquisition circuit 3155a and a feature quantity matching circuit 3155b. Correspondingly, step S532 further includes steps S534 and S536.
As shown in step S534, the feature quantity acquisition circuit 3155a acquires a feature quantity characterizing the contour of the sub-region. The feature quantity may be a parameter of the inner contour of the sub-region, a parameter of its boundary contour, or the ratio of a boundary contour parameter to an inner contour parameter. The feature quantity may also be the ratio between two parameters of the boundary contour or between two parameters of the inner contour. A parameter of the boundary contour or of the inner contour may be at least one of its length, height, shape and area.
As shown in step S536, the feature quantity matching circuit 3155b determines whether the feature quantity matches the preset feature quantity. If the result is yes, that is, the feature quantity matches the preset feature quantity and hence the contour of the sub-region matches the preset contour, proceed to step S538; if the result is no, that is, the feature quantity does not match the preset feature quantity and hence the contour of the sub-region does not match the preset contour, proceed to step S540. A precise judgment of whether a docking station 4 exists around the automatic walking device 1 is thus achieved.
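Such a match is typically a ratio comparison within a tolerance. A sketch (the choice of a height-to-width ratio as the feature quantity and the 10 % tolerance are assumptions):

```python
def feature_matches(contour_w: float, contour_h: float,
                    preset_ratio: float, tol: float = 0.10) -> bool:
    """Compare one contour feature quantity (here the height-to-width
    ratio) against the preset value within a relative tolerance."""
    if contour_w <= 0:
        return False
    ratio = contour_h / contour_w
    return abs(ratio - preset_ratio) <= tol * preset_ratio

# Example: the preset station contour is 1.5 times taller than wide.
print(feature_matches(40, 62, preset_ratio=1.5))   # True  (62/40 = 1.55)
print(feature_matches(40, 90, preset_ratio=1.5))   # False (90/40 = 2.25)
```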
In step S538, the first judgment component 3150 determines that a docking station 4 exists around the current position of the automatic walking device 1.
In step S540, the first judgment component 3150 determines that no docking station 4 exists around the current position of the automatic walking device 1.
Those skilled in the art will understand that whether the contour of the sub-region matches the preset contour may be judged from the boundary contour of the sub-region, in which case the preset contour is the outer contour of the docking station 4; from the inner contour of the sub-region, in which case the preset contour is the contour of a characteristic part of the docking station 4, such as the conductive terminals 41 or the base 43; or from the boundary contour and the inner contour at the same time, in which case the preset contour includes both the outer contour of the docking station 4 and the contour of its characteristic parts. The method of setting the preset contour is essentially similar across these matching schemes; the setting of the preset contour as the outer contour of the docking station 4 is described below with reference to FIGS. 23 to 25.
FIG. 23 is a perspective view of the docking station 4. The docking station 4 includes a base 43, a support arm 45 and conductive terminals 41. The base 43 mounts and fixes the docking station 4, and the plane in which it lies is the installation plane. The support arm 45 is arranged on the base 43, perpendicular to it, and carries the conductive terminals 41. The conductive terminals 41 electrically connect the docking station 4 and the automatic walking device 1 when the two dock successfully. FIGS. 24 and 25 show the side view and the front view of the docking station 4 respectively, where the side view is the projection of the docking station 4 along the width direction of the base 43 onto a two-dimensional plane perpendicular to the installation plane, and the front view is the projection of the docking station 4 along the direction directly facing the automatic walking device 1 onto a two-dimensional plane perpendicular to the installation plane. As FIGS. 24 and 25 show, the projections of the docking station 4 onto planes perpendicular to the installation plane differ with direction, and the automatic walking device 1 may approach the docking station 4 from different sides, so the outer contour of the docking station 4 recognized by the main control module 31 varies with the angle relative to the docking station 4. The preset contour should therefore be set from the projections of the docking station 4, taken along directions parallel to the installation plane over the full 360-degree range, onto planes perpendicular to the installation plane. If the docking station 4 is longitudinally symmetric, projections over a 180-degree range suffice; if it is both longitudinally and laterally symmetric, projections over a 90-degree range suffice. Those skilled in the art will understand that these projections of the docking station 4, taken along directions parallel to the installation plane within the preset angle range onto planes perpendicular to the installation plane, may be obtained from images of the docking station collected by the image collection device 15 at different angles, or by the designer from the drawings of the docking station.
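The symmetry rule simply shrinks the range of viewing angles for which a contour template must be stored. A sketch of template-angle generation under that rule (the 10-degree step is an assumption):

```python
def template_angles(longitudinal_sym: bool, lateral_sym: bool,
                    step_deg: float = 10.0) -> list:
    """Viewing angles (degrees) at which a preset-contour template is kept:
    a 360-degree span in general, 180 for longitudinal symmetry, 90 for both."""
    if longitudinal_sym and lateral_sym:
        span = 90.0
    elif longitudinal_sym:
        span = 180.0
    else:
        span = 360.0
    return [i * step_deg for i in range(int(span / step_deg))]

print(len(template_angles(True, True)))    # 9 templates instead of 36
```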
To dock successfully with the docking station 4, the automatic walking device 1, after finding that a docking station 4 exists around its current position, must further adjust its position relative to the docking station 4 so as to directly face it, and then approach the docking station 4 along the facing direction, thereby achieving automatic docking. To judge whether the automatic walking device 1 directly faces the docking station 4, the present invention proposes judging whether the positional relationship, in the environmental image information, between a characteristic part of the docking station 4 and the central axis of the environmental image information satisfies a preset condition. Specifically, as shown in FIG. 26, the second judgment component 3170 includes a feature recognition unit 3171 and a feature judgment unit 3173. The feature recognition unit 3171 recognizes the positional relationship between the characteristic part of the docking station 4 in the environmental image information and the central axis of the environmental image information; the feature judgment unit 3173 determines whether that positional relationship satisfies the preset condition; when it does, the second judgment component 3170 determines that the automatic walking device 1 directly faces the docking station 4.
Three preferred embodiments of judging, on the above principle, whether the automatic walking device 1 directly faces the docking station 4 are described below with reference to FIGS. 27 to 29.
FIG. 27 shows the first preferred embodiment of judging whether the automatic walking device 1 directly faces the docking station 4. In this embodiment, the main control module 31 makes the judgment according to whether the position of the conductive terminals 41 of the docking station 4 in the environmental image information, relative to the central axis of the environmental image information, satisfies a preset condition. The conductive terminals 41 include a first terminal 411 and a second terminal 412. In the environmental image information, the distance from the first terminal 411 to the central axis of the environmental image information is the first distance, and the distance from the second terminal 412 to the central axis is the second distance; the preset condition is that the first terminal 411 and the second terminal 412 lie on opposite sides of the central axis of the environmental image information and the ratio of the first distance to the second distance equals a preset ratio. Specifically, as shown in step S580, the feature recognition unit 3171 recognizes the central axis of the environmental image information, typically by recognizing the horizontal and vertical coordinates of the pixels of the environmental image information.
After step S580, proceed to step S582: the feature recognition unit 3171 recognizes the positions of the first terminal 411 and the second terminal 412 of the docking station 4 in the environmental image information. Specifically, regions that may be the first terminal 411 or the second terminal 412 are first identified preliminarily by color; the regions of the first terminal 411 and the second terminal 412 are then determined precisely by recognizing the contours of the candidate regions; finally, the positions of the first terminal 411 and the second terminal 412 are identified from the horizontal and vertical coordinates of their regions. The specific recognition of the first terminal 411 and the second terminal 412 is the same as the docking station recognition embodiment described with reference to FIGS. 19 to 22 and is not repeated here.
After step S582, proceed to step S584: the feature recognition unit 3171 calculates the first distance from the first terminal 411 to the central axis of the environmental image information and the second distance from the second terminal 412 to the central axis. Typically, the first distance and the second distance are calculated as the differences between the horizontal and vertical coordinates of the first terminal 411 and the second terminal 412 and those of the central axis.
After step S584, proceed to step S586: the feature judgment unit 3173 calculates the ratio of the first distance to the second distance.
After step S586, proceed to step S590: the feature judgment unit 3173 compares the calculated ratio with the preset ratio, the preset ratio being calculated from the distances of the first terminal 411 and the second terminal 412 to the central axis of the environmental image information when the automatic walking device 1 directly faces the docking station 4.
After step S590, proceed to step S592: the feature judgment unit 3173 determines whether the calculated ratio equals the preset ratio. If yes, proceed to step S594; if no, proceed to step S596. In step S592, the decision to proceed to step S594 or step S596 may be made on a single judgment or on repeated judgments. In step S594, the second judgment component 3170 determines that the automatic walking device 1 directly faces the docking station 4. In step S596, the second judgment component 3170 determines that the automatic walking device 1 does not directly face the docking station 4.
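A sketch of this ratio test in image coordinates (the signed-coordinate convention, the tolerance, and the default ratio of 1.0 for symmetrically placed terminals are assumptions):

```python
def facing_station(x1: float, x2: float, axis_x: float,
                   preset_ratio: float = 1.0, tol: float = 0.05) -> bool:
    """True if the terminals at image columns x1 and x2 straddle the central
    axis and the ratio of their distances to it matches the preset ratio."""
    d1, d2 = x1 - axis_x, x2 - axis_x
    if d1 * d2 >= 0:                      # same side of the axis: not facing
        return False
    ratio = abs(d1) / abs(d2)
    return abs(ratio - preset_ratio) <= tol * preset_ratio

print(facing_station(290, 350, 320))   # True: 30 px vs 30 px, ratio 1.0
print(facing_station(280, 420, 320))   # False: 40 px vs 100 px
```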
FIG. 28 shows the second preferred embodiment of judging whether the automatic walking device 1 directly faces the docking station 4. In this embodiment, the second judgment component 3170 makes the judgment according to whether the position of the conductive terminal 41 of the docking station 4 in the environmental image information, relative to the central axis of the environmental image information, satisfies a preset condition. This embodiment differs from the first preferred embodiment of FIG. 27 in that, although the conductive terminal 41 comprises a first terminal and a second terminal, the two terminals are either integrated in one component or, if arranged separately, define a straight line perpendicular to the installation plane of the docking station 4. Correspondingly, the preset condition is that the conductive terminal 41 lies on the central axis of the environmental image information.
Specifically, as shown in step S600, the feature recognition unit 3171 recognizes the central axis of the environmental image information, typically by recognizing the horizontal and vertical coordinates of the pixels of the environmental image information.
After step S600, proceed to step S602: the feature recognition unit 3171 recognizes the position of the conductive terminal 41 of the docking station 4, in the same way as in the embodiment of FIG. 27, which is not repeated here.
After step S602, proceed to step S604: the feature judgment unit 3173 calculates the first distance from the conductive terminal 41 to the central axis of the environmental image information, typically as the difference between the horizontal coordinate of the conductive terminal 41 and that of the central axis.
After step S604, proceed to step S612: the feature judgment unit 3173 determines whether the first distance is zero, that is, whether the conductive terminal 41 lies on the central axis. If yes, proceed to step S614; if no, proceed to step S616. In step S612, the decision to proceed to step S614 or step S616 may be made on a single judgment or on repeated judgments. In step S614, the feature judgment unit 3173 determines that the automatic walking device 1 directly faces the docking station 4. In step S616, the feature judgment unit 3173 determines that the automatic walking device 1 does not directly face the docking station 4.
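The option of repeated judgments amounts to debouncing the per-frame test. A sketch, assuming a pixel tolerance in place of an exact zero and a required run of consecutive positive frames:

```python
def terminal_centered(term_x: float, axis_x: float, tol_px: float = 3.0) -> bool:
    """Per-frame test: terminal column within tol_px of the central axis."""
    return abs(term_x - axis_x) <= tol_px

def centered_over_frames(term_xs, axis_x: float, needed: int = 5) -> bool:
    """Multi-frame confirmation: require `needed` consecutive positive tests
    before declaring that the device directly faces the docking station."""
    streak = 0
    for x in term_xs:
        streak = streak + 1 if terminal_centered(x, axis_x) else 0
        if streak >= needed:
            return True
    return False

print(centered_over_frames([318, 321, 320, 319, 322], axis_x=320))  # True
```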
FIG. 29 shows the third preferred embodiment of judging whether the automatic walking device 1 directly faces the docking station 4. In this embodiment, the second judgment component 3170 makes the judgment according to whether the position of the support arm 45 of the docking station 4 in the environmental image information, relative to the central axis of the environmental image information, satisfies a preset condition. Along the direction in which the automatic walking device 1 directly faces the docking station 4, the support arm 45 has a first side edge 451 and a second side edge 452. In the environmental image information, the distance from the first side edge 451 to the central axis of the environmental image information is the first distance, and the distance from the second side edge 452 to the central axis is the second distance; the preset condition is that the ratio of the first distance to the second distance equals a preset ratio.
Specifically, as shown in step S620, the feature recognition unit 3171 recognizes the central axis of the environmental image information, typically by recognizing the horizontal and vertical coordinates of the pixels of the environmental image information.
After step S620, proceed to step S622: the feature recognition unit 3171 recognizes the positions of the first side edge 451 and the second side edge 452 of the support arm 45 of the docking station 4, in the same way as in the embodiment of FIG. 27, which is not repeated here.
After step S622, proceed to step S624: the feature judgment unit 3173 calculates the first distance from the first side edge 451 to the central axis of the environmental image information and the second distance from the second side edge 452 to the central axis. Typically, the first distance and the second distance are calculated as the differences between the horizontal coordinates of the first side edge 451 and the second side edge 452 and that of the central axis.
After step S624, proceed to step S626: the feature judgment unit 3173 calculates the ratio of the first distance to the second distance.
After step S626, proceed to step S630: the feature judgment unit 3173 compares the calculated ratio with the preset ratio, the preset ratio being calculated from the distances of the first side edge 451 and the second side edge 452 to the central axis of the environmental image information when the automatic walking device 1 directly faces the docking station 4.
After step S630, proceed to step S632: the feature judgment unit 3173 determines whether the calculated ratio equals the preset ratio. If yes, proceed to step S634; if no, proceed to step S636. In step S632, the decision to proceed to step S634 or step S636 may be made on a single judgment or on repeated judgments. In step S634, the second judgment component 3170 determines that the automatic walking device 1 directly faces the docking station 4. In step S636, the second judgment component 3170 determines that the automatic walking device 1 does not directly face the docking station 4.
Those skilled in the art will appreciate that the specific steps of the working area judgment method of the present invention may take other variant forms, and the specific structure of the automatic walking device 1 may likewise vary in many ways; as long as the main technical features of the adopted technical solution are identical or similar to those of the present invention, such variants fall within the protection scope of the present invention.

Claims

1. An automatic walking device, characterized in that the automatic walking device comprises: a housing, a walking module, an image collection device mounted on the housing, and a main control module connecting the image collection device and the walking module to control the operation of the automatic walking device, wherein,
the image collection device captures a target area to form an image;
the main control module divides the image into several sub-image blocks, each sub-image block corresponding to one sub-area of the target area;
the main control module extracts the color of each pixel of at least one sub-image block;
the main control module calculates the proportion of a predetermined color in that sub-image block and compares it with a first preset value;
the main control module extracts the texture feature value of that sub-image block and compares it with a second preset value; when the proportion of the predetermined color in a sub-image block of the image reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module judges the sub-area corresponding to that sub-image block to be a working area; if the proportion of the predetermined color in the sub-image block is less than the first preset value or the texture feature value is less than the second preset value, the main control module judges the sub-area corresponding to that sub-image block to be a non-working area.
2. The automatic walking device according to claim 1, characterized in that: the main control module comprises a sub-area division unit, a color extraction unit, a proportion calculation unit, a proportion comparison unit, a texture extraction unit, a texture comparison unit, a working area recognition unit and a storage unit; the storage unit stores the first preset value and the second preset value; the sub-area division unit divides the image into sub-image blocks corresponding to several sub-areas; the color extraction unit extracts the color of each pixel of at least one sub-image block; the proportion calculation unit divides the number of pixels of the predetermined color by the total number of pixels to calculate the proportion of the predetermined color in that sub-image block; the proportion comparison unit compares the proportion of the predetermined color in that sub-image block with the first preset value; the texture extraction unit extracts the texture feature value of that sub-image block; the texture comparison unit compares the texture feature value of that sub-image block with the second preset value; and the working area recognition unit judges from the comparison results whether the sub-area corresponding to that sub-image block is a working area.
3. The automatic walking device according to claim 2, characterized in that: the storage unit stores numerical ranges of the color components of the predetermined color, and if the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the color extraction unit judges the color of that pixel to be the predetermined color.
4. The automatic walking device according to claim 3, characterized in that: the color components are the three primary color components.
5. The automatic walking device according to claim 2, characterized in that: the texture feature value is a parameter dispersion and the second preset value is a preset dispersion; the storage unit stores the preset dispersion and a preset difference value; the texture extraction unit calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, judges whether that gradient difference is greater than the preset difference value, and calculates the parameter dispersion of all gradient differences in that sub-image block greater than the preset difference value; and the texture comparison unit compares the parameter dispersion with the preset dispersion.
6. The automatic walking device according to claim 2, characterized in that: the main control module further comprises a steering control unit; the sub-area division unit divides the image into three sub-image blocks, a middle part, a left part and a right part, corresponding respectively to the middle region, left region and right region of the target area, the middle region lying directly ahead of the automatic walking device and the left region and right region lying on the left and right of the middle region along the traveling direction of the automatic walking device; when the working area recognition unit judges the middle region to be a non-working area, the steering control unit changes the walking direction of the automatic walking device until the middle region is judged to be a working area.
7. The automatic walking device according to claim 1, characterized in that: the target area lies directly in front of the automatic walking device, and the width of the target area is greater than the width of the automatic walking device.
8. The automatic walking device according to claim 1, characterized in that: the viewing angle of the image collection device ranges from 90 to 120 degrees.
9. The automatic walking device according to claim 1, characterized in that: the automatic walking device is an automatic lawn mower, and the predetermined color is green.
10. The automatic walking device according to claim 1, characterized in that: a shielding plate is arranged above the image collection device and extends outward from the top of the image collection device.
11. The automatic walking device according to claim 1, characterized in that: the image collection device collects an image of the region in front of the housing and passes the image to the main control module, the front region comprising at least a predetermined region of the ground in front of the housing, the width of the predetermined region being greater than the width of the housing; the main control module analyzes the predetermined image block of the image corresponding to the predetermined region to monitor whether a boundary appears in the predetermined region; when one sub-area is a non-working area and its adjacent sub-area is a working area, the main control module judges that the boundary lies in that sub-area, and on detecting the boundary brings the automatic walking device to the boundary position and walks along the boundary.
12. The automatic walking device according to claim 11, characterized in that: when walking along the boundary, the main control module controls the walking module so as to keep the housing within the working area with the boundary on a specific side of the housing.
13. The automatic walking device according to claim 12, characterized in that the image collection device collects an image and passes it to the main control module; the main control module divides the predetermined image block of the image into three sub-image blocks, a middle part, a right part and a left part, corresponding respectively to three sub-areas: a middle region directly in front of the automatic walking device and of the same width as the automatic walking device, a right region to the right of the middle region, and a left region to the left of the middle region; the main control module controls the walking module to adjust the position of the automatic walking device so that the middle region corresponding to the middle part is recognized as a working area and the left region or right region corresponding to the left part or right part is recognized as a non-working area containing the boundary, thereby keeping the housing within the working area with the boundary on a specific side of the housing.
14. The automatic walking device according to claim 12 or 13, characterized in that the main control module further comprises a boundary recognition unit that judges whether the boundary currently being followed leads to the docking station; if the judgment result is no, the main control module controls the walking module so that the automatic walking device leaves the boundary currently being followed.
15. A working area judgment method of an automatic walking device, the automatic walking device comprising a housing, a walking module, an image collection device mounted on the housing, and a main control module connecting the image collection device and the walking module to control the operation of the automatic walking device, characterized in that the working area judgment method comprises the following steps:
the image collection device captures a target area to form an image;
the main control module divides the image into several sub-image blocks, each sub-image block corresponding to one sub-area of the target area;
the main control module extracts the color of each pixel of at least one sub-image block;
the main control module calculates the proportion of a predetermined color in that sub-image block and compares it with a first preset value;
the main control module extracts the texture feature value of that sub-image block and compares it with a second preset value; if the proportion of the predetermined color in a sub-image block of the image reaches or exceeds the first preset value and the texture feature value reaches or exceeds the second preset value, the main control module judges the sub-area corresponding to that sub-image block to be a working area; if the proportion of the predetermined color in the sub-image block is less than the first preset value or the texture feature value is less than the second preset value, the main control module judges the sub-area corresponding to that sub-image block to be a non-working area.
16. The working area judgment method according to claim 15, characterized in that: the main control module stores numerical ranges of the color components of the predetermined color; the main control module extracts the color components of each pixel of a sub-image block, and if the color components of a pixel respectively fall within the numerical ranges of the color components of the predetermined color, the main control module judges the color of that pixel to be the predetermined color.
17. The working area judgment method according to claim 16, characterized in that: the color components are the three primary color components.
18. The working area judgment method according to claim 15, characterized in that: the texture feature value is a parameter dispersion and the second preset value is a preset dispersion; the main control module stores the preset dispersion and a preset difference value, calculates the gradient difference of at least one parameter of every two adjacent pixels in a sub-image block, judges whether that gradient difference is greater than the preset difference value, calculates the parameter dispersion of all gradient differences in that sub-image block greater than the preset difference value, and judges whether the parameter dispersion reaches the preset dispersion.
19. The working area judgment method according to claim 15, characterized in that: the image captured by the image collection device comprises three sub-image blocks, a middle part, a left part and a right part, corresponding respectively to the middle region, left region and right region of the target area, the middle region lying directly ahead of the automatic walking device and the left region and right region lying on the left and right of the middle region along the traveling direction of the automatic walking device; when the middle region is judged to be a non-working area, the steering control unit changes the walking direction of the automatic walking device until the middle region is judged to be a working area.
20. The working area judgment method according to claim 15, characterized in that: the working area judgment method further comprises a step of controlling the automatic walking device to return to the docking station, the walking module comprising a wheel set mounted on the housing and a walking motor driving the wheel set, the step of controlling the automatic walking device to return to the docking station comprising the following sub-steps:
a. monitoring a predetermined image block of the image collected by the image collection device, the predetermined image block corresponding to a predetermined region of the ground in front of the housing, to judge whether a boundary appears in the predetermined region;
b. if a boundary appears in the predetermined region, controlling the automatic walking device to be at the boundary position;
c. walking along the boundary.
21. The working area judgment method according to claim 20, characterized in that the width of the predetermined region is greater than the width of the housing, and step a further comprises:
dividing the predetermined image block into several sub-image blocks corresponding to several sub-areas of the predetermined region; analyzing each sub-image block to identify the corresponding sub-area as one of a working area and a non-working area;
when one sub-area is a non-working area and its adjacent sub-area is a working area, judging that the boundary lies in that sub-area.
22. The working area judgment method according to claim 21, characterized in that, in step c, when walking along the boundary, the housing is kept within the working area with the boundary on a specific side of the housing.
PCT/CN2014/075954 2013-04-22 2014-04-22 Automatic walking device and working area judgment method thereof WO2014173290A1 (zh)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
CN201310140824.4 2013-04-22
CN201310141126.6 2013-04-22
CN201310141126.6A CN104111460B (zh) 2013-04-22 2013-04-22 自动行走设备及其障碍检测方法
CN201310140775.4A CN104111652A (zh) 2013-04-22 2013-04-22 自动工作系统及其对接方法
CN201310140286.9A CN104111651A (zh) 2013-04-22 2013-04-22 自动行走设备及其向停靠站回归的方法
CN201310140775.4 2013-04-22
CN201310140286.9 2013-04-22
CN201310140824.4A CN104111653A (zh) 2013-04-22 2013-04-22 自动行走设备及其工作区域判断方法

Publications (1)

Publication Number Publication Date
WO2014173290A1 true WO2014173290A1 (zh) 2014-10-30

Family

ID=51791058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075954 WO2014173290A1 (zh) 2013-04-22 2014-04-22 自动行走设备及其工作区域判断方法

Country Status (1)

Country Link
WO (1) WO2014173290A1 (zh)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6255793B1 (en) * 1995-05-30 2001-07-03 Friendly Robotics Ltd. Navigation method and system for autonomous machines with markers defining the working area
CN1539119A (zh) * 2001-04-20 2004-10-20 Koninklijke Philips Electronics N.V. Image processing apparatus and method for improving an image, and image display apparatus comprising such an image processing apparatus
EP2336719A2 (en) * 2009-12-17 2011-06-22 Deere & Company Automated tagging for landmark identification
CN102169345A (zh) * 2011-01-28 2011-08-31 浙江亚特电器有限公司 Robot action area setting system and setting method thereof
CN102880175A (zh) * 2011-07-16 2013-01-16 苏州宝时得电动工具有限公司 Automatic walking device

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426267A (zh) * 2017-08-30 2019-03-05 苏州宝时得电动工具有限公司 Self-moving device
CN107553497B (zh) * 2017-10-20 2023-12-22 苏州瑞得恩光能科技有限公司 Edge positioning device of solar panel cleaning robot and positioning method thereof
CN107553497A (zh) * 2017-10-20 2018-01-09 苏州瑞得恩光能科技有限公司 Edge positioning device of solar panel cleaning robot and positioning method thereof
US20210294348A1 * 2018-08-08 2021-09-23 Positec Power Tools (Suzhou) Co., Ltd. Self-moving device, automatic working system, and control method therefor
CN113495552A (zh) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 Automatic working system, automatic walking device, control method therefor, and computer-readable storage medium
CN113985287A (zh) * 2021-10-19 2022-01-28 安徽明德源能科技有限责任公司 Battery cell safety identification method and device
EP4312187A1 * 2022-07-19 2024-01-31 Suzhou Cleva Precision Machinery & Technology Co., Ltd. Image analysis method and apparatus, computer device, and readable storage medium
EP4310790A1 * 2022-07-19 2024-01-24 Suzhou Cleva Precision Machinery & Technology Co., Ltd. Image analysis method and apparatus, computer device, and readable storage medium
CN115464557A (zh) * 2022-08-15 2022-12-13 深圳航天科技创新研究院 Method for adjusting mobile robot operation based on path, and mobile robot
CN115060665B (zh) * 2022-08-16 2023-01-24 君华高科集团有限公司 Automatic food safety inspection system
CN115060665A (zh) 2022-08-16 2022-09-16 君华高科集团有限公司 Automatic food safety inspection system
CN116203606A (zh) 2023-03-03 2023-06-02 上海筱珈数据科技有限公司 Lawn mowing robot navigation method and device based on RTK and vision fusion
CN116203606B (zh) 2023-03-03 2024-02-20 上海筱珈数据科技有限公司 Lawn mowing robot navigation method and device based on RTK and vision fusion
CN116523275A (zh) 2023-07-04 2023-08-01 河北润博星原科技发展有限公司 Operation and maintenance management platform for public area monitoring equipment

Similar Documents

Publication Publication Date Title
WO2014173290A1 (zh) Automatic walking device and working area judgment method thereof
EP3951544A1 (en) Robot working area map constructing method and apparatus, robot, and medium
CN111035327B (zh) Cleaning robot, carpet detection method, and computer-readable storage medium
WO2021026831A1 (zh) Mobile robot, and control method and control system thereof
WO2021212926A1 (zh) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
CN103891464B (zh) Automatic mowing system
US20180064025A1 (en) Auto mowing system
US20190053683A1 (en) Autonomous traveler
CN110636789B (zh) Electric vacuum cleaner
CN114847803A (zh) Robot positioning method and apparatus, electronic device, and storage medium
WO2016045593A1 (zh) Self-moving robot
CN104111651A (zh) Automatic walking device and method for returning to docking station thereof
CN104111460A (zh) Automatic walking device and obstacle detection method thereof
WO2022021630A1 (zh) Automatic walking device, control method and system thereof, and readable storage medium
KR101951414B1 (ko) Robot cleaner and control method thereof
CN103901890A (zh) Outdoor automatic walking device based on home yard, and control system and method thereof
CN111353431A (zh) Automatic working system, automatic walking device, control method therefor, and computer-readable storage medium
CN113331743A (zh) Method for cleaning floor by cleaning robot, and cleaning robot
CN107643751A (zh) Slope recognition method and system for intelligent walking device
US20220280007A1 (en) Mobile robot and method of controlling the same
US20240029298A1 (en) Locating method and apparatus for robot, and storage medium
CN110946512A (zh) Control method and device for sweeping robot based on lidar and camera
EP4123406A1 (en) Automatic working system, automatic walking device and method for controlling same, and computer-readable storage medium
EP4006681A1 (en) Autonomous work machine, control device, method for controlling autonomous work machine, method for operating control device, and program
JP7014586B2 (ja) Autonomous traveling body

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14788936

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14788936

Country of ref document: EP

Kind code of ref document: A1