WO2023217231A1 - Work boundary generation method, work control method, device and storage medium - Google Patents

Work boundary generation method, work control method, device and storage medium (作业边界生成方法、作业控制方法、设备及存储介质)

Info

Publication number
WO2023217231A1
WO2023217231A1 (PCT/CN2023/093536)
Authority
WO
WIPO (PCT)
Prior art keywords
boundary, area, corrected, outdoor robot, trajectory information
Application number
PCT/CN2023/093536
Other languages
English (en)
French (fr)
Inventor
宋庆祥
朱永康
刘浩
王曦
Original Assignee
科沃斯机器人股份有限公司
Application filed by 科沃斯机器人股份有限公司
Publication of WO2023217231A1

Classifications

    • G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
    • G06V 10/26 — Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region (e.g. clustering-based techniques); detection of occlusion
    • G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/56 — Scenes; context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
    • G06V 20/588 — Recognition of the road (e.g. of lane markings); recognition of the vehicle driving pattern in relation to the road
    • G06V 20/70 — Labelling scene content (e.g. deriving syntactic or semantic representations)
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30252 — Subject of image: vehicle exterior; vicinity of vehicle
    • G06T 2207/30256 — Subject of image: lane; road marking

Definitions

  • The present application relates to the field of data processing technology, and in particular to a work boundary generation method, a work control method, a device, and a storage medium.
  • In the prior art, magnetic strips are laid along the boundary of the work area in advance.
  • When the outdoor robot moves near a magnetic strip, an induction signal is generated between the robot and the strip, and the strength of this signal is used to judge whether the robot has reached the boundary of the work area. If the robot reaches the boundary, it stops moving in its current direction of travel and adjusts to another direction so as to continue performing its work task.
  • However, defining the boundary of the work area in this way is cumbersome, inefficient, and costly to implement.
  • Various aspects of the present application provide a work boundary generation method, a work control method, a device, and a storage medium, which make the generation of work boundaries simple and efficient and remove the need to lay magnetic strips, thus saving costs.
  • An embodiment of the present application provides a work boundary generation method, which includes: generating a work boundary of a workable area on an environment map based on first trajectory information of an outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; controlling the outdoor robot to traverse the workable area according to the work boundary, and obtaining environment images collected by the outdoor robot during the traversal; and correcting the work boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected work boundary.
  • An embodiment of the present application provides another work boundary generation method, which includes: generating a work boundary of a workable area on an environment map based on first trajectory information of an outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; obtaining environment images collected by the outdoor robot while it moves along the boundary of the workable area; and correcting the work boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected work boundary.
  • An embodiment of the present application provides yet another work boundary generation method, which includes: obtaining first trajectory information and second trajectory information of an outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of a non-working area; and, based on the first and second trajectory information, generating on the environment map the work boundary of the workable area and the restricted-area boundary of the non-working area respectively, where the restricted-area boundary together with the work boundary defines the workable area.
  • An embodiment of the present application also provides an environment map generation method, which includes: obtaining first trajectory information and second trajectory information of an outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of a non-working area; based on the first and second trajectory information, generating on the environment map the work boundary of the workable area and the restricted-area boundary of the non-working area respectively, where the restricted-area boundary is located within the work boundary and the two jointly define the workable area; and, according to the work boundary and the restricted-area boundary, controlling the outdoor robot to traverse the workable area, collecting surrounding environment images during the traversal, and adding other environmental information within the workable area to the map based on those images.
  • An embodiment of the present application also provides a work control method applied to an outdoor robot. The method includes: receiving a work instruction that instructs the outdoor robot to perform a work task in the workable area; obtaining an environment map of the workable area, which includes the work boundary corresponding to the workable area and the restricted-area boundary corresponding to the non-working area, where the restricted-area boundary is located within the work boundary and the two jointly define the workable area; and, according to the restricted-area boundary and the work boundary, controlling the outdoor robot to perform the work task within the workable area.
  • An embodiment of the present application provides a work boundary generation device, which includes a generating module, a control module, and a correction module. The generating module is used to generate the work boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; the control module is used to control the outdoor robot to traverse the workable area according to the work boundary and to obtain the environment images collected by the outdoor robot during the traversal; and the correction module is used to correct the work boundary in the environment map based on the boundary information of the workable area contained in the environment images, to obtain the corrected work boundary.
  • An embodiment of the present application provides another work boundary generation device, which includes a generating module, an acquisition module, and a correction module. The generating module is used to generate the work boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; the acquisition module is used to obtain the environment images collected by the outdoor robot while it moves along the boundary of the workable area; and the correction module is used to correct the work boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected work boundary.
  • An embodiment of the present application provides yet another work boundary generation device, which includes an acquisition module and a generating module. The acquisition module is used to acquire first trajectory information and second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area; the generating module is used to generate, based on the first and second trajectory information, the work boundary of the workable area and the restricted-area boundary of the non-working area respectively on the environment map, where the restricted-area boundary is located within the work boundary and the two jointly define the workable area.
  • An embodiment of the present application provides an environment map generation device, which includes an acquisition module, a generating module, a traversal module, and an adding module. The acquisition module is used to acquire the first trajectory information and second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area; the generating module is used to generate, based on the first and second trajectory information, the work boundary of the workable area and the restricted-area boundary of the non-working area respectively on the environment map, where the restricted-area boundary is located within the work boundary and the two jointly define the workable area; the remaining modules are used to control the outdoor robot to traverse the workable area according to the work boundary and the restricted-area boundary, to collect surrounding environment images during the traversal, and to add other environmental information in the workable area to the environment map based on those images.
  • An embodiment of the present application provides a work control device, which includes a receiving module, an acquisition module, and a control module. The receiving module is used to receive a work instruction that instructs the outdoor robot to perform a work task in the workable area; the acquisition module is used to obtain the environment map of the workable area, which includes the work boundary corresponding to the workable area and the restricted-area boundary corresponding to the non-working area, where the restricted-area boundary is located within the work boundary and the two jointly define the workable area; and the control module is used to control the outdoor robot to perform the work task within the workable area based on the restricted-area boundary and the work boundary.
  • An embodiment of the present application provides an electronic device, including a memory and a processor. The memory is used to store a computer program; the processor, coupled to the memory, is used to execute the computer program to implement the steps of the work boundary generation methods and the environment map generation method provided by the embodiments of the present application.
  • An embodiment of the present application provides an outdoor robot, including a positioning module and an equipment body. The equipment body includes a memory and a processor; the memory is used to store a computer program, and the processor, coupled to the memory, is used to execute the computer program to implement the steps of the work control method provided by the embodiments of the present application.
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program causes the processor to implement the steps of the work boundary generation methods, the environment map generation method, and the work control method provided by the embodiments of the present application.
  • In the embodiments of the present application, the outdoor robot is controlled to move along the boundary of the workable area to obtain the work boundary of the workable area in the environment map; the robot is then controlled to traverse the workable area along that work boundary while collecting environment images of the boundary, and the work boundary in the environment map is corrected from those images to obtain the corrected work boundary.
  • The method provided by this application is simple to operate and efficient, and requires no magnetic strips, which saves costs.
  • Moreover, when the outdoor robot moves along the boundary of the workable area, it is difficult for it to follow that boundary exactly, so the obtained work boundary differs from the actual boundary of the workable area. Correcting the work boundary in the environment map with the boundary information of the workable area extracted from the environment images reduces the error between the work boundary and the actual boundary, thereby improving the accuracy of the resulting work boundary.
  • Figure 1a is a schematic flowchart of a work boundary generation method provided by an exemplary embodiment of the present application;
  • Figure 1b is a schematic flowchart of another work boundary generation method provided by an exemplary embodiment of the present application;
  • Figure 2a is a schematic diagram of a work boundary before correction provided by an exemplary embodiment of the present application;
  • Figure 2b is a schematic diagram of the work boundary after correction provided by an exemplary embodiment of the present application;
  • Figure 2c is a schematic diagram of correcting the work boundary provided by an exemplary embodiment of the present application;
  • Figure 3 is a schematic structural diagram of an outdoor robot provided by an exemplary embodiment of the present application;
  • Figure 4a is a schematic flowchart of yet another work boundary generation method provided by an exemplary embodiment of the present application;
  • Figure 4b is a schematic flowchart of yet another work boundary generation method provided by an exemplary embodiment of the present application;
  • Figure 4c is a schematic flowchart of an environment map generation method provided by an exemplary embodiment of the present application;
  • Figure 4d is a schematic flowchart of a work control method provided by an exemplary embodiment of the present application;
  • Figure 5a is a schematic structural diagram of a work boundary generation device provided by an exemplary embodiment of the present application;
  • Figure 5b is a schematic structural diagram of an environment map generation device provided by an exemplary embodiment of the present application;
  • Figure 5c is a schematic structural diagram of a work control device provided by an exemplary embodiment of the present application;
  • Figure 6 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of the present application.
  • In the embodiments of the present application, the outdoor robot is controlled to move along the boundary of the workable area to obtain the work boundary of the workable area in the environment map, and the robot is then controlled to traverse the workable area along the work boundary while collecting environment images of the boundary. Based on the boundary information of the workable area in those images, the work boundary in the environment map is corrected to obtain the corrected work boundary. Compared with existing boundary generation methods, the method provided by this application is simple to operate, highly efficient, and does not require the laying of magnetic strips, thus saving costs.
  • Figure 1a is a schematic flowchart of a work boundary generation method provided by an exemplary embodiment of the present application. As shown in Figure 1a, the method includes:
  • generating a work boundary of the workable area on the environment map based on first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area;
  • controlling the outdoor robot to traverse the workable area and obtaining the environment images collected by the outdoor robot during the traversal;
  • correcting the work boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected work boundary.
  • In this embodiment, an outdoor robot is any device capable of autonomous movement outdoors. Outdoor robots can perform work tasks outdoors, and the type of task depends on the type of robot. For example, a snow-shoveling robot may perform snow-shoveling tasks, a lawn-mowing robot may perform mowing tasks, and a disinfection robot may perform disinfection tasks.
  • The outdoor robot performs work tasks in the workable area, i.e., the area where work tasks can be carried out. Typically the workable area is a relatively flat, safe area. Corresponding to it are non-working areas: dangerous areas such as swimming pools, puddles, or sunken steps. To keep the outdoor robot safe, it is prohibited from performing tasks in non-working areas.
  • In this embodiment, the outdoor robot can be controlled to move along the boundary of the workable area, and the first trajectory information of the outdoor robot is generated during that movement. The way the robot is controlled to move along the boundary is not limited. For example, the outdoor robot may have a remote controller communicatively connected with it, and the user can steer the robot along the boundary of the workable area with the remote controller. For another example, the user's terminal device may run an application (app) corresponding to the outdoor robot, through which the user can control the robot to move along the boundary of the workable area.
  • The outdoor robot has a positioning sensor module, based on which the first trajectory information is generated. Different positioning sensor modules generate different forms of first trajectory information; the module is described in subsequent embodiments and is not detailed here.
  • The first trajectory information may be GPS position information or environmental information such as environment images, and the way the work boundary is generated on the environment map differs accordingly. If the first trajectory information is GPS position information, the work boundary of the workable area can be marked on the environment map directly from the GPS positions (a sketch of this rasterisation follows below). If the first trajectory information is environment images collected by the robot while moving along the boundary of the workable area, the work boundary can be generated on the environment map through Visual Simultaneous Localization and Mapping (VSLAM) technology.
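As an illustration of the GPS case, here is a minimal Python sketch of marking a perimeter trajectory on a grid-style environment map. It is a sketch under stated assumptions, not the patent's implementation: the function name, the 0.1 m cell size, and the premise that GPS fixes have already been converted to local metric x/y coordinates are all illustrative.

```python
import numpy as np

def mark_boundary_on_map(track_xy, resolution=0.1, padding=1.0):
    """Rasterise a perimeter track (GPS fixes already converted to local
    metric x/y) onto a 2-D grid environment map; 1 marks a boundary cell."""
    pts = np.asarray(track_xy, dtype=float)
    origin = pts.min(axis=0) - padding                # lower-left map corner
    extent = pts.max(axis=0) - origin + padding
    size = np.ceil(extent / resolution).astype(int)   # (cols, rows)
    grid = np.zeros((size[1], size[0]), dtype=np.uint8)
    cells = ((pts - origin) / resolution).astype(int)
    grid[cells[:, 1], cells[:, 0]] = 1                # mark work-boundary cells
    return grid, origin

# e.g. a 5 m x 4 m rectangular lawn perimeter:
# grid, origin = mark_boundary_on_map([(0, 0), (5, 0), (5, 4), (0, 4)])
```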
  • Optionally, the work boundary of the workable area can be generated on the environment map in real time while the outdoor robot moves along the boundary, or it can be generated after the robot has finished moving along the boundary.
  • In practice, when the outdoor robot moves along the boundary of the workable area, it is difficult for it to follow the boundary exactly, so the obtained work boundary deviates from the actual workable area. The real boundary of the workable area usually has obvious segmentation characteristics; for example, it may be the dividing line between a lawn and a road, or between the lawn and other plants. To reduce this error, environment images of the workable area can be collected: the boundary information of the workable area contained in these images reflects its true boundary and can therefore be used to correct the work boundary in the environment map, yielding a corrected work boundary, reducing the error between the mapped boundary and the real one, and improving the accuracy of the generated work boundary.
  • In this embodiment, the outdoor robot is controlled to move along the boundary of the workable area to obtain the work boundary of the workable area in the environment map; the robot is then controlled to traverse the workable area along the work boundary while environment images of the boundary are collected, and the work boundary in the environment map is corrected from those images to obtain the corrected work boundary.
  • Optionally, controlling the outdoor robot to traverse the workable area along the work boundary may mean controlling it to move within the workable area along a bow-shaped (boustrophedon) trajectory until the area is covered; a sketch of such a path generator follows below. The shape of the movement trajectory is not limited to a bow shape and can also be other shapes, such as a zigzag, which this application does not restrict.
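A bow-shaped traversal of a rectangular region can be sketched as follows; the lane width would typically match the robot's working width. This is an illustrative sketch only: a real coverage planner must also clip lanes to the corrected work boundary and route around restricted areas, which is omitted here.

```python
def boustrophedon_path(x_min, x_max, y_min, y_max, lane_width):
    """Back-and-forth ('bow-shaped') waypoints covering a rectangle lane by lane."""
    path, y, left_to_right = [], y_min, True
    while y <= y_max:
        x_start, x_end = (x_min, x_max) if left_to_right else (x_max, x_min)
        path.append((x_start, y))      # enter the lane
        path.append((x_end, y))        # sweep across it
        y += lane_width                # step over to the next lane
        left_to_right = not left_to_right
    return path

# e.g. boustrophedon_path(0, 10, 0, 6, lane_width=0.3) for a 30 cm cutting deck
```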
  • The embodiment of the present application also provides another work boundary generation method, as shown in Figure 1b. The method includes:
  • generating a work boundary of the workable area on the environment map based on first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area.
  • In this embodiment, the way the environment images of the workable area are collected is not limited. In one case, the outdoor robot is controlled to traverse the workable area based on the work boundary, and the environment images are collected during the traversal, as shown in Figure 1a. In another case, the outdoor robot collects environment images of the workable area while it moves along the boundary; in that single pass the work boundary is generated on the environment map and the environment images of the area are obtained at the same time, as shown in Figure 1b.
  • The embodiment of the present application also provides another work boundary generation method, which includes: generating a work boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area. In one case, the work boundary is used directly as the target boundary, so that the outdoor robot moves and works according to it. In another case, the boundary segment to be corrected in the work boundary is obtained, where this segment may be the entire work boundary or a local part of it; the environment images collected by the outdoor robot are obtained, the segment is corrected according to the boundary information of the workable area contained in those images to obtain the corrected work boundary, and the corrected work boundary is used as the target boundary so that the robot can move and work within it. In this way the target boundary can be determined according to user needs, improving user satisfaction.
  • In an optional embodiment, the boundary segment to be corrected in the work boundary is obtained, where the segment is the entire work boundary or a local part of it, so that it can be corrected according to the boundary information of the workable area contained in the environment images to obtain the corrected work boundary.
  • Optionally, the boundary segment to be corrected in the work boundary is obtained according to a user instruction. The user instruction may be a trigger operation that enables the correction function, or a pair of trigger operations by which the user turns the correction function on and off.
  • Example A1: after the work boundary of the workable area is generated, it is dynamically displayed on the environment map; in response to the user's trigger operation turning on the correction function, the boundary starting point and boundary end point of the segment to be corrected are obtained on the environment map, and together they define the boundary segment to be corrected.
  • Optionally, the work boundary corresponding to the workable area is displayed on the environment map; the user can initiate a correction trigger operation on that boundary, and the boundary segment to be corrected is obtained in response. For example, an environment map containing the work boundary of the workable area is displayed, the user selects a first track point and a second track point on the work boundary, and in response to this selection the segment between the two points is taken as the boundary segment to be corrected. The segments to be corrected in the environment map are then corrected to obtain the corrected work boundary.
  • Optionally, after the work boundary of the workable area has been generated (for example, after the user has steered the outdoor robot along the boundary of the workable area with the remote controller to generate it), the user controls the robot to move again along the boundary of each dangerous area, thereby setting boundaries on which correction is prohibited; the parts of the work boundary outside these prohibited boundaries form the boundary segment to be corrected.
  • Example A2: while the outdoor robot moves along the boundary of the workable area, in response to the user's trigger operation turning on the correction function, the boundary point corresponding to the robot's current position is taken as the boundary starting point; in response to the user's trigger operation turning off the correction function, the boundary point corresponding to the robot's current position at that moment is taken as the boundary end point. The starting point and end point define the boundary segment to be corrected (see the sketch below).
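Example A2 amounts to bracketing trajectory indices between the on and off triggers. The following Python sketch records such segments; the class and its interface are hypothetical illustrations, not names from the patent.

```python
class CorrectionRecorder:
    """Collect (start, end) index pairs of boundary segments to be corrected,
    bracketed by the user turning the correction function on and off."""

    def __init__(self):
        self.segments = []      # completed (start_idx, end_idx) pairs
        self._start = None      # pending boundary starting point

    def on_toggle(self, correction_on: bool, current_idx: int):
        if correction_on and self._start is None:
            self._start = current_idx                          # boundary starting point
        elif not correction_on and self._start is not None:
            self.segments.append((self._start, current_idx))   # boundary end point
            self._start = None

# usage: as the robot passes trajectory points i and j during the perimeter
# walk, call rec.on_toggle(True, i) and later rec.on_toggle(False, j)
```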
  • Optionally, the work boundary is dynamically displayed on the environment map, and the boundary segment to be corrected is obtained in response to a correction trigger operation initiated by the user on that boundary. The segment to be corrected is rendered with a first visual attribute different from the other segments, indicating that it needs correction; the first visual attribute may be, for example, the colour, thickness, or line style of the segment.
  • For example, the user turns on the correction function and, in response, the boundary point corresponding to the outdoor robot's current position is determined as the boundary starting point; the robot continues to move along the boundary of the workable area and, when the user turns off the correction function, the boundary point corresponding to the robot's current position is determined as the boundary end point.
  • In practice, the dangerous area can be a swimming pool, sand pit, sunken steps, flower garden, vegetable patch, and so on. Through an operation interface (such as an app interface), the user confirms whether the outdoor robot's current position is in a dangerous area. If the user confirms it is not, the current position is used as the boundary starting point; otherwise it is not. The robot then continues along the boundary of the workable area, and during its movement the user can check its position in real time; on encountering a dangerous area, the user can turn the correction function off through the operation interface, and the robot's position at that moment becomes the boundary end point.
  • Optionally, the collected environment image is semantically segmented to obtain the image area corresponding to the workable area; the image boundary of that area corresponds to the boundary information of the workable area. Based on this boundary information, the work boundary in the environment map is corrected to obtain the corrected work boundary.
  • Optionally, a neural network model can be used to perform the semantic segmentation of the environment images. The neural network model may include, but is not limited to, Region-based Convolutional Neural Networks (R-CNN), Fast R-CNN, the You Only Look Once (YOLO) model, or the Single Shot MultiBox Detector (SSD); no restriction is placed on this. A hedged example of such a segmentation step is sketched below.
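Since the patent does not commit to a particular network, the following Python sketch stands in with a torchvision semantic-segmentation model; the `target_class` index is an assumption (off-the-shelf weights have no "lawn" class, so a real system would train its own ground/lawn class). The second function extracts the image boundary of the segmented area.

```python
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()   # stand-in segmentation net

@torch.no_grad()
def workable_area_mask(image_chw: torch.Tensor, target_class: int) -> np.ndarray:
    """Per-pixel classification of one normalised (3, H, W) image; returns a
    0/1 mask of the assumed workable-area class."""
    logits = model(image_chw.unsqueeze(0))["out"]      # (1, C, H, W)
    pred = logits.argmax(dim=1).squeeze(0).numpy()     # per-pixel class ids
    return (pred == target_class).astype(np.uint8)

def mask_boundary(mask: np.ndarray) -> np.ndarray:
    """Image boundary of the segmented area: workable-area pixels with at
    least one background neighbour."""
    padded = np.pad(mask, 1)
    neighbour_min = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                       padded[1:-1, :-2], padded[1:-1, 2:]])
    return ((mask == 1) & (neighbour_min == 0)).astype(np.uint8)
```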
  • Further, the image boundary of the image area is mapped onto the environment map to obtain a reference boundary. Specifically, using the depth information in the environment image, the position coordinates of the workable-area boundary in the camera coordinate system can be computed and then converted into position coordinates in the map coordinate system, where the map coordinate system is the coordinate system of the environment map; this yields the reference boundary (a sketch of this back-projection follows below). Then, according to the distance between corresponding boundary positions on the work boundary and the reference boundary, the boundary segment to be corrected on the work boundary is corrected to obtain the corrected work boundary.
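The back-projection from an image boundary pixel to a map position can be sketched as follows, assuming a pinhole camera with known intrinsics `K` and a known camera-to-map pose `T_map_cam` (for instance, from the robot pose and the sensor's mounting extrinsics). These inputs are assumptions of the sketch; the patent only states that depth information is used for the conversion.

```python
import numpy as np

def pixel_to_map(u, v, depth, K, T_map_cam):
    """Back-project boundary pixel (u, v) with measured depth (metres) into
    the camera frame, then transform it into the map frame.
    K: 3x3 pinhole intrinsics; T_map_cam: 4x4 homogeneous camera-to-map pose."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth / fx,     # camera-frame X
                      (v - cy) * depth / fy,     # camera-frame Y
                      depth,                     # camera-frame Z
                      1.0])
    p_map = T_map_cam @ p_cam
    return p_map[:2]    # planar (x, y) reference-boundary position on the map
```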
  • During correction, the first distance threshold limits how far a point may move: a point on the boundary segment to be corrected is either corrected to its corresponding position on the reference boundary, or, if that position is too far away, corrected only to the position at the first distance threshold.
  • Case B1: to avoid excessive correction errors caused by misidentification of the reference boundary, the boundary positions on the segment to be corrected can be divided into two types according to their distance from the reference boundary, namely first boundary positions and second boundary positions, which are corrected separately to reduce the correction error.
  • A first boundary position is a position on the segment to be corrected whose distance to the corresponding position on the reference boundary is less than or equal to a set first distance threshold; a second boundary position is one whose distance to the corresponding position on the reference boundary is greater than the first distance threshold. The first distance threshold and the second distance threshold (introduced below) may be the same or different; the first distance threshold may be, for example, 20 cm, 50 cm, or 1 m, and may optionally be the body width of the outdoor robot.
  • A segment to be corrected may contain only first boundary positions, only second boundary positions, or both. A first boundary position is corrected to the corresponding position on the reference boundary; a second boundary position is corrected to the position at the first distance threshold. This two-case rule is sketched in code below.
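A minimal sketch of the Case B1 rule, assuming the segment and reference boundary have already been resampled into corresponding point pairs and taking 0.5 m as an example first distance threshold:

```python
import math

def correct_segment(segment, reference, threshold=0.5):
    """Case B1: a point within `threshold` of its matching reference point
    (first boundary position) snaps onto the reference; a farther point
    (second boundary position) moves toward it by exactly `threshold`."""
    corrected = []
    for (bx, by), (rx, ry) in zip(segment, reference):
        d = math.hypot(rx - bx, ry - by)
        if d <= threshold:                      # first boundary position
            corrected.append((rx, ry))
        else:                                   # second boundary position
            t = threshold / d
            corrected.append((bx + (rx - bx) * t, by + (ry - by) * t))
    return corrected
```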
  • In an optional embodiment, the work boundary of the workable area is far away from the restricted-area boundary of any non-working area. The restricted-area boundary is the boundary of a non-working area located within the work boundary, and together with the work boundary it defines the workable area. "Far away" is understood to mean that the distance between the work boundary and the restricted-area boundary is greater than a set second distance threshold. The second distance threshold is not limited and may be, for example, 20 cm, 50 cm, or 1 m. This covers both the case where the workable area contains no non-working area and the case where it does but the work boundary stays far from the restricted-area boundary.
  • In another optional embodiment, the work boundary of the workable area is close to the restricted-area boundary of a non-working area, which is understood to mean that the distance between the work boundary and the restricted-area boundary is less than or equal to the set second distance threshold; for example, the work boundary encloses a non-working area and some of its segments run close to the restricted-area boundary. In this case the corrected work boundary might end up close to, or even inside, the restricted-area boundary, and if the outdoor robot then performed its task inside the corrected boundary it might have to operate in the non-working area, which is dangerous. Therefore, to protect the safety of the outdoor robot, it must be ensured that the corrected work boundary stays away from the non-working area.
  • To this end, an area to be corrected can be selected from the workable area, where the area to be corrected refers to the parts of the workable area that are far from the restricted-area boundary; correction is performed only within the area to be corrected, and the parts of the workable area close to the restricted-area boundary are not corrected. Specifically, the local reference boundary located within the area to be corrected is taken from the reference boundary, and the local work boundary located within the area to be corrected is taken from the work boundary; the local work boundary is then corrected according to the distance between corresponding boundary positions on the local reference boundary and the local work boundary.
  • As in Case B1, the boundary positions on the local work boundary can be divided into two types according to their distance from the local reference boundary, namely third boundary positions and fourth boundary positions, which are corrected separately to reduce the correction error. A third boundary position is a position on the local work boundary whose distance to the corresponding position on the local reference boundary is less than or equal to the set first distance threshold; a fourth boundary position is one whose distance is greater than the first distance threshold. A third boundary position is corrected to the corresponding position on the local reference boundary; a fourth boundary position is corrected to the position at the first distance threshold.
  • The solid line represents the corrected work boundary.
  • Optionally, the method also includes a process of generating a restricted-area boundary on the environment map, where the restricted-area boundary is located within the work boundary of the workable area. Specifically, second trajectory information of the outdoor robot is obtained, formed by the robot moving along the boundary of the non-working area, and based on it the restricted-area boundary of the non-working area is generated on the environment map.
  • In the environment map, the work boundary is represented by a solid line and the restricted-area boundaries by dotted lines. For illustration, and without limitation, three non-working areas and one workable area are used as an example: the three non-working areas are D1, D2, and D3. Because non-working area D1 lies at the edge of the work boundary, the work boundary there does not need to be corrected.
  • In other words, Figure 2a shows the local work boundary in the area to be corrected before correction, and Figure 2b shows it after correction.
  • The outdoor robot includes an equipment body 31 and a positioning module 32.
  • The equipment body 31 includes a memory 33, a processor 34, a first power supply component, an image sensor 35, and an area array depth sensor 36.
  • The positioning module 32 includes a second power supply component, a processor, a memory, a communication component, and a positioning sensor module.
  • Optionally, the outdoor robot also includes components such as wheels, drive motors, and a working-height adjustment device. The power supply components provide power to the various parts of the outdoor robot and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the device they belong to; a power supply component can be implemented as a rechargeable battery. The positioning sensor module is used for mapping and positioning, the image sensor is used to collect environment images of the workable area, and the area array depth sensor collects the depth information of those images; together, the image sensor and the area array depth sensor determine the reference boundary corresponding to the work boundary.
  • Optionally, the camera in the positioning sensor module can be a panoramic camera capable of capturing 360-degree images, the image sensor can be implemented as a pinhole camera, and the area array depth sensor can be implemented as a binocular camera or another sensor able to measure depth information.
  • Optionally, the positioning module 32 is a detachable module installed in a fixed slot on the outdoor robot. Through the fixed slot, the positioning module communicates with the outdoor robot's mainboard, and over this connection the two can interact; for example, while the outdoor robot performs a task, the positioning module provides it with positioning. The outdoor robot can also charge the positioning module's power supply.
  • Within the positioning module, the second power supply component supplies power to the other components; the positioning sensor module acquires positioning data; the processor computes and fuses the positioning data to obtain the position information of the positioning module; the memory stores the position information or positioning data; and the communication component exchanges information with terminal devices.
  • The positioning sensor module can be implemented in various ways; examples are given below.
  • In an optional embodiment, the positioning sensor module includes a camera and an inertial measurement unit (IMU). The camera is mainly used to collect environment images of the surroundings of the positioning module, and the IMU is mainly used to measure its acceleration and angular velocity; the detection data of the camera and IMU are combined to determine the position information of the positioning module, and mapping and positioning are performed through visual SLAM technology.
  • In another optional embodiment, the positioning sensor module includes an ultra-wideband (UWB) unit, a barometer, and an IMU. Ultra-wideband technology achieves wireless transmission by sending and receiving extremely narrow pulses on the nanosecond or microsecond scale; because the pulse width is so short, the spectrum is ultra-wide, with a bandwidth usually above 500 MHz. UWB positioning uses pre-arranged anchor nodes and bridge nodes with known positions to communicate with newly added blind nodes, and locates the blind node by measuring transmission delay differences between different base stations and the positioning module.
  • The main ultra-wideband positioning methods fall into two categories: the composite angle positioning method, which is based on radio direction finding, and the time difference positioning method, which is based on the arrival times of the signal at the monitoring stations and intersects positions through time-to-distance conversion. The time difference method works as follows: by measuring the time at which the signal reaches a monitoring station, the distance from the signal source (such as the positioning module) to that station is determined; using the distances from the signal source to several radio monitoring stations (drawing a circle centred on each station with the distance as radius), the location of the signal source can be determined (a least-squares version of this circle intersection is sketched below).
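The circle-intersection idea can be written as a linear least-squares problem. The following Python sketch is illustrative (it assumes at least three 2-D anchors with known positions and range measurements already derived from arrival times), not the patent's algorithm:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position of a blind node from its distances to
    three or more anchor stations (linearised circle intersection)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, d0 = anchors[0, 0], anchors[0, 1], ranges[0]
    # Subtract the first circle's equation from the others -> linear system
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d0 ** 2 - ranges[1:] ** 2
         + anchors[1:, 0] ** 2 - x0 ** 2
         + anchors[1:, 1] ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos   # estimated (x, y) of the positioning module

# e.g. trilaterate([(0, 0), (10, 0), (0, 10)], [50**0.5, 50**0.5, 50**0.5])
# returns approximately (5.0, 5.0)
```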
  • The barometer, based on the experiments of Evangelista Torricelli (1608–1647), is an instrument for measuring atmospheric pressure, and it can be used to estimate altitude: for roughly every 12 m of elevation gain, the mercury column drops by about 1 mm. In actual use, UWB, barometer, and IMU are combined for fused mapping and positioning: several anchor nodes with known positions are first arranged in the environment to initialise a base coordinate system; a UWB receiving module on the positioning module then acts as a blind node whose position coordinates are computed with the UWB positioning method; finally, the measurements of the barometer and IMU are fused in to compute more accurate position coordinates for the positioning module. (A standard pressure-to-altitude conversion is sketched below.)
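For the barometric part of the fusion, a standard conversion from pressure to altitude is the international barometric formula; this sketch assumes a standard atmosphere and a known reference pressure, which real systems calibrate:

```python
def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Altitude in metres above the reference pressure level p0, using the
    international barometric formula (standard-atmosphere assumption).
    Consistent with the rule of thumb above: near sea level a ~1 mmHg
    (~1.33 hPa) drop corresponds to roughly 11-12 m of elevation gain."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# e.g. altitude_from_pressure(1012.0) is about 10.4 m above the 1013.25 hPa level
```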
  • In yet another optional embodiment, the positioning sensor module includes a real-time kinematic (RTK) unit and an IMU, combined for simultaneous mapping and positioning. Real-time kinematic positioning is a measurement method that can obtain centimetre-level positioning accuracy in real time in the field. RTK is based on real-time differential GPS (RTDGPS) technology using carrier-phase observations, and consists of three parts: a base station receiver, a data link, and a rover receiver.
  • A receiver installed at the base station serves as the reference station: it continuously observes the satellites and sends its observation data and station information to the rover in real time over a radio link. While the rover's GPS receiver tracks the GPS satellites, it also receives the data transmitted by the base station through its wireless receiving equipment; based on the principle of relative positioning, it then computes in real time the three-dimensional coordinates of the rover and their accuracy, i.e., the coordinate differences (ΔX, ΔY, ΔH) between the base station and the rover. Adding these differences to the known WGS-84 coordinates of the reference station, and applying the coordinate conversion parameters, yields the plane coordinates (X, Y) and altitude (H) of each rover point, as in the sketch below.
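The final combination step is simple once the differences are solved; the sketch below shows it, with the caveat that the carrier-phase solution of (ΔX, ΔY, ΔH) and the datum/projection conversion are the hard parts and are abstracted away here:

```python
def rtk_rover_position(base_xyh, delta_xyh):
    """Combine the reference station's known coordinates (already projected
    from WGS-84 to plane coordinates X, Y plus altitude H) with the
    RTK-derived differences (dX, dY, dH) to get the rover's position."""
    (bx, by, bh), (dx, dy, dh) = base_xyh, delta_xyh
    return bx + dx, by + dy, bh + dh

# e.g. rtk_rover_position((500000.0, 3300000.0, 15.2), (1.234, -0.567, 0.021))
```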
  • In this way, the position coordinates of the positioning module are obtained. Optionally, the positioning sensor module can also be another combination of sensors used for simultaneous mapping and positioning; for example, a camera, IMU, UWB unit, and barometer, or a camera, RTK unit, and IMU. Such combinations can obtain higher-precision position coordinates for the positioning module.
  • The following illustrates the generation of a work boundary with an outdoor robot implemented as a lawn-mowing robot. The lawn-mowing robot has a positioning sensor module, an image sensor, and an area array depth sensor: it uses the information collected by the positioning sensor module for mapping and positioning, uses the environment images collected by the image sensor for semantic segmentation, and uses the area array depth sensor to collect three-dimensional depth information of the work boundary of the target area.
  • When the lawn-mowing robot has the positioning function, the user first remote-controls it away from the charging base to the work boundary of the workable area, selects a starting position, drives it along the work boundary for one full circle back to the starting position, and then steers it back to the charging base. While moving around the work boundary, the robot collects environment images for positioning and mapping and records its first trajectory information.
  • Next, the user again remote-controls the lawn-mowing robot from the charging base to a non-working area, i.e., a dangerous area (such as a swimming pool, a pond, or large obstacles), and drives it along the boundary of the dangerous area to set the restricted-area boundary. The specific process is: the user remote-controls the robot from the charging base to the restricted-area boundary, selects a starting position on it, moves along the boundary, and returns to the starting position, which determines the restricted-area boundary of dangerous area X1; the user then drives along the boundaries of the other dangerous areas in turn until the restricted-area boundaries of all dangerous areas are set, and finally steers the robot back to the charging base. From this movement, second trajectory information corresponding to the restricted-area boundaries is generated, and based on it the restricted-area boundaries are drawn on the environment map.
  • When the user remote-controls the lawn-mowing robot along the work boundary, it is difficult to follow the boundary line exactly, so there is some error between the work boundary generated from the robot's movement trajectory and the actual lawn boundary. For this reason, while remote-controlling the robot along the work boundary, the user can trigger the boundary position correction operation, from which the boundary segment to be corrected in the work boundary is obtained. Because the robot can localise itself, record the world coordinates of its own trajectory, and synchronise this coordinate trajectory to the mobile phone app for real-time display, the user can decide as needed, while driving the robot, whether to trigger the boundary position correction operation.
  • When the correction function is turned on, the app asks the user to confirm that the part requiring boundary position correction contains no dangerous areas such as swimming pools, sand pits, sunken steps, flower gardens, or vegetable fields. After confirmation, the trajectory starting from that coordinate point is synchronised to the mobile app in real time and displayed in a special colour, so the user understands that this segment of the boundary trajectory will be corrected according to the lawn boundary. When the user remote-controls the robot to the boundary of a dangerous area (swimming pool, sand pit, sunken steps, flower bed, vegetable field, and so on), the user can turn off the boundary position correction button in the app; the trajectory from that coordinate point onward is likewise synchronised and displayed in a special colour, so the user understands that this segment will follow the remote-controlled boundary strictly and will not be position-corrected.
  • Optionally, when the remote-controlled lawn-mowing robot starts to move along the boundary of the work area to generate the work boundary, it asks the user through the mobile app whether to enter boundary positioning mode. If the user chooses yes, the robot starts recording the coordinates of the positioning module in the world coordinate system, generates movement trajectory information, and sets this trajectory information as the work boundary of the target area; the trajectory is synchronised to the mobile app and displayed in real time, allowing the user to watch the robot's movement trajectory on the environment map.
  • If the trajectory is recorded incorrectly, the user can select the modify-trajectory button in the app. The app then prompts the user to remote-control the lawn-mowing robot back along the original path to the starting point of the wrong trajectory. Once the robot is back at that starting point, the app displays the robot's current position together with the distance from the current point (the start of the wrong trajectory) to the position the positioning module had when the modify-trajectory button was pressed; the wrong trajectory segment is displayed in a colour different from the correct segments. The user can then tap the wrong segment and delete it; after deletion, the user presses the restart-recording-boundary button and continues to drive the robot along the boundary of the target area, with recording and display resuming.
  • Optionally, the boundary areas on which correction is prohibited can be set at the same time as the boundary is being defined by remote control; alternatively, after the robot has moved along the boundary of the target area and before the boundary definition is completed, the user can steer the robot along the dangerous-area boundaries again to set the correction-prohibited boundaries.
  • After the boundary definition is completed, the user remote-controls the lawn-mowing robot back to the charging base. The work boundary and the restricted-area boundaries are then displayed to the user in the app, and the user judges whether they match expectations. If so, the user presses the confirmation button, and the robot enters automatic working mode and performs the mowing task in the work area; if not, the user cancels the generated boundaries and regenerates the work boundary and restricted-area boundaries following the previous steps.
  • When the lawn-mowing robot enters automatic working mode, it sets out again from the charging base, enters the work boundary, and starts the bow-shaped mapping mode; during this phase it does not mow. It uses the image sensor's camera to collect environment images and identifies the lawn area with image semantic segmentation, then uses the area array depth sensor to measure the position of the lawn boundary in the machine coordinate system. Based on the robot's position and attitude and the relative pose between the area array depth sensor and the robot, the lawn-boundary coordinates are converted from the machine coordinate system to the world coordinate system (a sketch of this conversion follows below).
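The machine-to-world conversion described above is a planar rigid transform by the robot pose, composed with the sensor's mounting offset. The sketch below assumes a 2-D pose (x, y, heading) and a fixed sensor offset in the machine frame; both are illustrative simplifications of the full 3-D position-and-attitude case:

```python
import math

def machine_to_world(points_machine, robot_pose, sensor_offset=(0.0, 0.0)):
    """Transform lawn-boundary points from the machine (robot) frame into
    the world frame. robot_pose = (x, y, heading in radians) from the
    positioning module; sensor_offset = depth sensor mount position in the
    machine frame (assumed values)."""
    rx, ry, th = robot_pose
    c, s = math.cos(th), math.sin(th)
    world = []
    for px, py in points_machine:
        mx, my = px + sensor_offset[0], py + sensor_offset[1]  # sensor -> machine
        world.append((rx + c * mx - s * my,                    # machine -> world
                      ry + s * mx + c * my))
    return world
```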
  • The boundary segments to be corrected in the working boundary generated by the user's remote control are then corrected according to the lawn boundary in the world coordinate system, bringing them closer to the real boundary.
  • To prevent excessive correction errors caused by mistakes in the semantic recognition of the lawn, a boundary error correction range is set; that is, the distance between a boundary segment that the lawn mowing robot is allowed to correct and the working boundary generated by remote control cannot exceed the first distance threshold. For example, the first distance threshold is set to 50 cm or (-50) cm; of course, it can also be set to other values, which is not limited in this application.
  • After the lawn mowing robot has finished exploring the entire working area, it has localized and mapped the whole working area and corrected the working boundary. After the corrected working boundary is obtained, the robot returns to the charging base and then sets out again from the charging base to start the mowing task.
  • Figure 4b is a schematic flowchart of yet another work boundary generation method provided by an embodiment of the present application. As shown in Figure 4b, the method includes:
  • Obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area.
  • According to the first trajectory information and the second trajectory information of the outdoor robot, generate the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • In an optional embodiment, the method provided by the embodiment of the present application also includes: controlling the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary, and obtaining the environment images collected by the outdoor robot during the traversal; and correcting the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • Figure 4c is a schematic flowchart of an environment map generation method provided by an exemplary embodiment of the present application. As shown in Figure 4c, the method includes:
  • Obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area.
  • According to the first trajectory information and the second trajectory information of the outdoor robot, generate the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • Control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary, collect surrounding environment images during the traversal, and add other environmental information within the workable area to the environment map according to the environment images.
  • Figure 4d is a schematic flowchart of a work control method provided by an embodiment of the present application. As shown in Figure 4d, the method is applied to an outdoor robot and includes:
  • Receive a work instruction that instructs the outdoor robot to perform work tasks in the workable area; obtain the environment map of the workable area, where the environment map includes the working boundary corresponding to the workable area and the restricted area boundary corresponding to the non-working area, the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area; and control the outdoor robot to perform work tasks within the workable area according to the restricted area boundary and the working boundary.
  • In the embodiments of the present application, the outdoor robot is controlled to move along the boundary of the workable area to obtain the working boundary of the workable area in the environment map, and the outdoor robot is controlled to traverse the workable area along the working boundary while environment images of the working boundary are collected; the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images, to obtain the corrected working boundary.
  • Compared with work boundary generation methods that lay magnetic strips, the method provided by this application is simple to operate and efficient, requires no magnetic strips, and saves cost.
  • Correcting the working boundary in the environment map with the boundary information of the workable area in the environment images can reduce the error between the working boundary and the actual boundary of the workable area, thereby improving the accuracy of the obtained working boundary.
  • The execution subject of each step of the methods provided in the above embodiments may be the same device, or the methods may be executed by different devices. For example, the execution subject of steps 101a to 103a may be device A; as another example, the execution subject of steps 101a and 102a may be device A and the execution subject of step 103a may be device B; and so on.
  • Figure 5a is a schematic structural diagram of a work boundary generation device provided by an exemplary embodiment of the present application. As shown in Figure 5a, the device includes: a generation module 51a, a control module 52a, an acquisition module 53a and a correction module 54a.
  • The generation module 51a is configured to generate the working boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area;
  • The control module 52a is used to control the outdoor robot to traverse the workable area according to the working boundary, and the acquisition module 53a is used to obtain the environment images collected by the outdoor robot during the traversal;
  • The correction module 54a is configured to correct the working boundary in the environment map based on the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • The correction module 54a is specifically configured to: dynamically display the working boundary on the environment map during the process of generating the working boundary; and, in response to a correction trigger operation initiated by the user for the working boundary, correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • The correction module 54a is specifically configured to: in response to a correction trigger operation initiated by the user for the working boundary, determine the boundary segment to be corrected in the working boundary, where the boundary segment to be corrected is the entire working boundary or a local working boundary; and correct the boundary segment to be corrected according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • The correction module 54a is specifically configured to: while the outdoor robot moves along the boundary of the workable area, in response to the user's trigger operation of turning on the correction function, determine the boundary point corresponding to the robot's current position as the boundary starting point; and, in response to the user's trigger operation of turning off the correction function, determine the boundary point corresponding to the robot's current position as the boundary end point; the boundary starting point and the boundary end point define the boundary segment to be corrected.
  • The correction module 54a is specifically configured to: perform semantic segmentation on the environment images to obtain the image region corresponding to the workable area, where the image boundary of the image region corresponds to the boundary information of the workable area; and correct the working boundary in the environment map according to the image boundary of the image region, to obtain the corrected working boundary.
  • The correction module 54a is specifically configured to: map the image boundary of the image region into the environment map in combination with the pose data of the outdoor robot at the time the environment images were collected, to obtain the reference boundary; and correct the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, to obtain the corrected working boundary.
  • The correction module 54a is specifically configured to: if there is a first boundary position on the working boundary, correct the first boundary position to its corresponding boundary position on the reference boundary, where the first boundary position is a boundary position on the working boundary whose distance from the corresponding boundary position on the reference boundary is less than or equal to the set first distance threshold; and if there is a second boundary position on the working boundary, correct the second boundary position to the position corresponding to the first distance threshold, where the second boundary position is a boundary position on the working boundary whose distance from the corresponding boundary position on the reference boundary is greater than the first distance threshold.
  • The correction module 54a is specifically configured to: obtain the local reference boundary, i.e., the part of the reference boundary located within the area to be corrected, and the local working boundary, i.e., the part of the working boundary located within the area to be corrected; and correct the local working boundary according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary. The area to be corrected refers to the part of the workable area far away from the restricted area boundary, where the restricted area boundary is the boundary of the non-working area located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • The correction module 54a is specifically configured to: if there is a third boundary position on the local working boundary, correct the third boundary position to its corresponding boundary position on the local reference boundary, where the third boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is less than or equal to the set first distance threshold; and if there is a fourth boundary position on the local working boundary, correct the fourth boundary position to the position corresponding to the first distance threshold, where the fourth boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is greater than the first distance threshold.
  • The acquisition module is also used to obtain the second trajectory information of the outdoor robot, which is formed by the outdoor robot moving along the boundary of the non-working area; the generation module is also used to generate, according to the second trajectory information of the outdoor robot, the restricted area boundary of the non-working area on the environment map, where the restricted area boundary is located within the working boundary.
  • An exemplary embodiment of the present application provides another work boundary generation device.
  • For the implementation structure of this work boundary generation device, see Figure 5a.
  • The device includes: a generation module, an acquisition module and a correction module;
  • The generation module is used to generate the working boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area;
  • The acquisition module is used to obtain the environment images collected by the outdoor robot while it moves along the boundary of the workable area;
  • The correction module is used to correct the working boundary in the environment map based on the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • An exemplary embodiment of the present application provides yet another work boundary generation device.
  • For the implementation structure of this work boundary generation device, see Figure 5a.
  • The device includes: an acquisition module and a generation module;
  • The acquisition module is used to obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area;
  • The generation module is used to generate, based on the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • In an optional embodiment, the work boundary generation device further includes a control module and a correction module; the control module is used to control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary and to obtain the environment images collected by the outdoor robot during the traversal; the correction module is used to correct the working boundary in the environment map based on the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • Figure 5b is a schematic structural diagram of an environment map generation device provided by an exemplary embodiment of the present application. As shown in Figure 5b, the device includes: an acquisition module 51b, a generation module 52b, a traversal module 53b and an adding module 54b;
  • The acquisition module 51b is used to obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area;
  • The generation module 52b is used to generate, based on the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area;
  • The traversal module 53b is used to control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary and to collect surrounding environment images during the traversal;
  • The adding module 54b is used to add other environmental information within the workable area to the environment map according to the environment images.
  • Figure 5c is a schematic structural diagram of a job control device provided by an exemplary embodiment of the present application. As shown in Figure 5c, the device includes: a receiving module 51c, an acquisition module 52c and a control module 53c;
  • The receiving module 51c is used to receive a work instruction, which instructs the outdoor robot to perform work tasks in the workable area;
  • The acquisition module 52c is used to obtain the environment map of the workable area, where the environment map includes the working boundary corresponding to the workable area and the restricted area boundary corresponding to the non-working area, the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area;
  • The control module 53c is used to control the outdoor robot to perform work tasks within the workable area according to the restricted area boundary and the working boundary.
  • In the embodiments of the present application, the work boundary generation device controls the outdoor robot to move along the boundary of the workable area to obtain the working boundary of the workable area in the environment map, and controls the outdoor robot to traverse the workable area along the working boundary while environment images of the working boundary are collected.
  • The working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images, to obtain the corrected working boundary. Correcting the working boundary with this boundary information can reduce the error between the working boundary and the actual boundary of the workable area, thereby improving the accuracy of the obtained working boundary.
  • Figure 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
  • The electronic device is implemented as a work boundary generation device; as shown in FIG. 6, the device includes: a memory 64 and a processor 65.
  • The memory 64 is used to store a computer program and may be configured to store various other data to support operations on the work boundary generation device. Examples of such data include instructions for any application or method operating on the work boundary generation device.
  • Memory 64 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
  • The processor 65 is coupled to the memory 64 and is used to execute the computer program in the memory 64 to: generate the working boundary of the workable area on the environment map according to the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; control the outdoor robot to traverse the workable area according to the working boundary, and obtain the environment images collected by the outdoor robot during the traversal; and correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • When the processor 65 corrects the working boundary in the environment map according to the boundary information of the workable area contained in the environment images to obtain the corrected working boundary, it is specifically configured to: dynamically display the working boundary on the environment map during the process of generating the working boundary; and, in response to a correction trigger operation initiated by the user for the working boundary, correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • When the processor 65, in response to a correction trigger operation initiated by the user for the working boundary, corrects the working boundary in the environment map according to the boundary information of the workable area contained in the environment images to obtain the corrected working boundary, it is specifically used to: in response to the correction trigger operation initiated by the user for the working boundary, determine the boundary segment to be corrected in the working boundary, where the boundary segment to be corrected is the entire working boundary or a local working boundary; and correct the boundary segment to be corrected according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • When the processor 65 determines the boundary segment to be corrected in the working boundary in response to a correction trigger operation initiated by the user for the working boundary, it is specifically used to: while the outdoor robot moves along the boundary of the workable area, in response to the user's trigger operation of turning on the correction function, determine the boundary point corresponding to the robot's current position as the boundary starting point; and, in response to the user's trigger operation of turning off the correction function, determine the boundary point corresponding to the robot's current position as the boundary end point; the boundary starting point and the boundary end point define the boundary segment to be corrected.
  • When the processor 65 corrects the working boundary in the environment map according to the boundary information of the workable area contained in the environment images to obtain the corrected working boundary, it is specifically used to: perform semantic segmentation on the environment images to obtain the image region corresponding to the workable area, where the image boundary of the image region corresponds to the boundary information of the workable area; and correct the working boundary in the environment map based on the image boundary of the image region, to obtain the corrected working boundary.
  • When the processor 65 corrects the working boundary in the environment map according to the image boundary of the image region to obtain the corrected working boundary, it is specifically used to: map the image boundary of the image region into the environment map in combination with the pose data of the outdoor robot at the time the environment images were collected, to obtain the reference boundary; and correct the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, to obtain the corrected working boundary.
  • When the processor 65 corrects the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary to obtain the corrected working boundary, it is specifically used to: if there is a first boundary position on the working boundary, correct the first boundary position to its corresponding boundary position on the reference boundary, where the first boundary position is a boundary position on the working boundary whose distance from the corresponding boundary position on the reference boundary is less than or equal to the set first distance threshold; and if there is a second boundary position on the working boundary, correct the second boundary position to the position corresponding to the first distance threshold, where the second boundary position is a boundary position on the working boundary whose distance from the corresponding boundary position on the reference boundary is greater than the first distance threshold.
  • When the processor 65 corrects the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary to obtain the corrected working boundary, it is specifically used to: obtain the local reference boundary, i.e., the part of the reference boundary located within the area to be corrected, and the local working boundary, i.e., the part of the working boundary located within the area to be corrected; and correct the local working boundary based on the distances between corresponding boundary positions on the local reference boundary and the local working boundary. The area to be corrected refers to the part of the workable area far away from the restricted area boundary, where the restricted area boundary is the boundary of the non-working area located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • When the processor 65 corrects the local working boundary according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary, it is specifically used to: if there is a third boundary position on the local working boundary, correct the third boundary position to its corresponding boundary position on the local reference boundary, where the third boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is less than or equal to the set first distance threshold; and if there is a fourth boundary position on the local working boundary, correct the fourth boundary position to the position corresponding to the first distance threshold, where the fourth boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is greater than the first distance threshold.
  • In an optional embodiment, the processor 65 is also configured to: obtain the second trajectory information of the outdoor robot, where the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area; and generate, according to the second trajectory information of the outdoor robot, the restricted area boundary of the non-working area on the environment map, where the restricted area boundary is located within the working boundary.
  • In addition, the work boundary generation device also includes: a communication component 66, a display 67, a power supply component 68, an audio component 69 and other components. Only some components are shown schematically in Figure 6, which does not mean that the work boundary generation device includes only the components shown in Figure 6. It should be noted that the components within the dotted box in Figure 6 are optional rather than required components, depending on the product form of the work boundary generation device.
  • Embodiments of the present application also provide an electronic device, which is implemented as a work boundary generation device. The implementation structure of this work boundary generation device is the same as or similar to that of the work boundary generation device shown in Figure 6 and can be realized with reference to the structure in Figure 6. The main difference from the work boundary generation device in the embodiment shown in FIG. 6 lies in the functions implemented by the processor executing the computer program stored in the memory.
  • For the device provided by this embodiment, the processor executes the computer program stored in the memory to: generate the working boundary of the workable area on the environment map based on the first trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area; obtain the environment images collected by the outdoor robot while it moves along the boundary of the workable area; and correct the working boundary in the environment map based on the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • Embodiments of the present application also provide an electronic device, which is implemented as a work boundary generation device. The implementation structure of this work boundary generation device is the same as or similar to that of the work boundary generation device shown in Figure 6 and can be realized with reference to the structure in Figure 6. The main difference from the work boundary generation device in the embodiment shown in FIG. 6 lies in the functions implemented by the processor executing the computer program stored in the memory.
  • For the device provided by this embodiment, the processor executes the computer program stored in the memory to: obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area; and generate, based on the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area.
  • In an optional embodiment, the processor is also configured to: control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary, and obtain the environment images collected by the outdoor robot during the traversal; and correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  • Accordingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the processor can implement each step of the methods shown in Figures 1a, 1b and 4b.
  • In the embodiments of the present application, the work boundary generation device controls the outdoor robot to move along the boundary of the workable area to obtain the working boundary of the workable area in the environment map, and controls the outdoor robot to traverse the workable area along the working boundary while environment images of the working boundary are collected; the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images, to obtain the corrected working boundary. Correcting the working boundary with this boundary information can reduce the error between the working boundary and the actual boundary of the workable area, thereby improving the accuracy of the obtained working boundary.
  • Embodiments of the present application also provide an electronic device, which is implemented as an environment map generation device. The implementation structure of the environment map generation device is the same as or similar to that of the work boundary generation device shown in Figure 6 and can be realized with reference to the structure in Figure 6. The main difference from the work boundary generation device in the embodiment shown in FIG. 6 lies in the functions implemented by the processor executing the computer program stored in the memory.
  • For the environment map generation device provided by this embodiment, the processor executes the computer program stored in the memory to: obtain the first trajectory information and the second trajectory information of the outdoor robot, where the first trajectory information is formed by the outdoor robot moving along the boundary of the workable area and the second trajectory information is formed by the outdoor robot moving along the boundary of the non-working area; generate, based on the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, where the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area; and control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary, collect surrounding environment images during the traversal, and add other environmental information within the workable area to the environment map according to the environment images.
  • Accordingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the processor can implement each step of the method shown in Figure 4c.
  • The embodiments of the present application also provide an outdoor robot, the structure of which can be seen in Figure 3. The outdoor robot includes: a device body 31 and a positioning module 32. The device body 31 includes a memory 33 and a processor 34; the memory 33 is used to store a computer program, and the processor 34, coupled to the memory 33, is used to execute the computer program to: receive a work instruction that instructs the outdoor robot to perform work tasks in the workable area; obtain the environment map of the workable area, where the environment map includes the working boundary corresponding to the workable area and the restricted area boundary corresponding to the non-working area, the restricted area boundary is located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area; and control the outdoor robot to perform work tasks within the workable area according to the restricted area boundary and the working boundary.
  • For the detailed implementation structure of the outdoor robot, refer to the embodiment shown in FIG. 3, which is not described again here.
  • Accordingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the processor can implement each step of the method shown in Figure 4d.
  • the communication component in FIG. 6 mentioned above is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices.
  • the device where the communication component is located can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G/LTE, 5G and other mobile communication networks, or a combination thereof.
  • the communication component receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component further includes a near field communication (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the display in FIG. 6 above includes a screen, and the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor can not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • The power supply components in Figure 6 and Figure 3 above provide power to the various components of the device in which they are located. A power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power supply component is located.
  • the audio components in Figure 6 described above may be configured to output and/or input audio signals.
  • the audio component includes a microphone (MIC), and when the device where the audio component is located is in an operating mode, such as call mode, recording mode, and voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in memory or sent via a communications component.
  • the audio component further includes a speaker for outputting audio signals.
  • embodiments of the present invention may be provided as methods, systems, or computer program products.
  • the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operating steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, in which information storage can be implemented by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • embodiments of the present application may be provided as methods, systems or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the present application provide a work boundary generation method, a work control method, a device and a storage medium. In the embodiments of the present application, an outdoor robot is controlled to move along the boundary of a workable area to obtain the working boundary of the workable area in an environment map; the outdoor robot is then controlled to traverse the workable area along the working boundary while environment images of the working boundary are collected, and the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images to obtain a corrected working boundary. Compared with work boundary generation methods that lay magnetic strips, the method provided by this application is simple to operate and efficient, requires no magnetic strips, and saves cost.

Description

Work boundary generation method, work control method, device and storage medium
Cross-reference
This application claims priority to Chinese patent application No. 2022105247946, filed on May 13, 2022 and entitled "Work boundary generation method, work control method, device and storage medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of data processing technology, and in particular, to a work boundary generation method, a work control method, a device and a storage medium.
Background
With the development of artificial intelligence technology, intelligent devices that bring convenience to users have emerged, such as lawn mowing robots, snow removal robots and other outdoor robots. Considering that the working environment of an outdoor robot is relatively complex and may contain dangerous places such as swimming pools, puddles and steps, it is necessary, in order to protect the operating safety of the outdoor robot, to determine the boundary of the work area before the outdoor robot performs its work tasks.
At present, magnetic strips are laid in advance along the boundary of the work area. When the outdoor robot moves near a magnetic strip, an induction signal is generated, and whether the boundary of the work area has been reached is judged from the strength of this signal; if the outdoor robot has moved to the boundary of the work area, it stops advancing in the current direction and adjusts to another direction to continue the work task. However, defining the boundary of the work area in this way is cumbersome and inefficient, and the implementation cost is high.
Summary
Various aspects of the present application provide a work boundary generation method, a work control method, a device and a storage medium, so that the working boundary can be generated with simple operation and high efficiency, without laying magnetic strips, thereby saving cost.
An embodiment of the present application provides a work boundary generation method, including: generating the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area; controlling the outdoor robot to traverse the workable area according to the working boundary, and obtaining environment images collected by the outdoor robot during the traversal; and correcting the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
An embodiment of the present application provides a work boundary generation method, including: generating the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area; obtaining environment images collected by the outdoor robot while it moves along the boundary of the workable area; and correcting the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
An embodiment of the present application provides a work boundary generation method, including: obtaining first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area; and generating, according to the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, the restricted area boundary and the working boundary jointly defining the workable area.
An embodiment of the present application also provides an environment map generation method, including: obtaining first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area; generating, according to the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, the restricted area boundary being located within the working boundary and the restricted area boundary and the working boundary jointly defining the workable area; and controlling the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary, collecting surrounding environment images during the traversal, and adding other environmental information within the workable area to the environment map according to the environment images.
An embodiment of the present application also provides a work control method applied to an outdoor robot, the method including: receiving a work instruction that instructs the outdoor robot to perform work tasks in a workable area; obtaining an environment map of the workable area, the environment map including the working boundary corresponding to the workable area and the restricted area boundary corresponding to a non-working area, the restricted area boundary being located within the working boundary and the restricted area boundary and the working boundary jointly defining the workable area; and controlling the outdoor robot to perform work tasks within the workable area according to the restricted area boundary and the working boundary.
An embodiment of the present application provides a work boundary generation device, including: a generation module, a control module and a correction module. The generation module is used to generate the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area; the control module is used to control the outdoor robot to traverse the workable area according to the working boundary and to obtain environment images collected by the outdoor robot during the traversal; and the correction module is used to correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
An embodiment of the present application provides a work boundary generation device, including: a generation module, an acquisition module and a correction module. The generation module is used to generate the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area; the acquisition module is used to obtain environment images collected by the outdoor robot while it moves along the boundary of the workable area; and the correction module is used to correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
An embodiment of the present application provides a work boundary generation device, including: an acquisition module and a generation module. The acquisition module is used to obtain first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area; and the generation module is used to generate, according to the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, the restricted area boundary being located within the working boundary and the restricted area boundary and the working boundary jointly defining the workable area.
An embodiment of the present application provides an environment map generation device, including: an acquisition module, a generation module, a traversal module and an adding module. The acquisition module is used to obtain first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area; the generation module is used to generate, according to the first trajectory information and the second trajectory information of the outdoor robot, the working boundary of the workable area and the restricted area boundary of the non-working area on the environment map, respectively, the restricted area boundary being located within the working boundary and the restricted area boundary and the working boundary jointly defining the workable area; the traversal module is used to control the outdoor robot to traverse the workable area according to the working boundary and the restricted area boundary and to collect surrounding environment images during the traversal; and the adding module is used to add other environmental information within the workable area to the environment map according to the environment images.
An embodiment of the present application provides a work control device, including: a receiving module, an acquisition module and a control module. The receiving module is used to receive a work instruction that instructs an outdoor robot to perform work tasks in a workable area; the acquisition module is used to obtain an environment map of the workable area, the environment map including the working boundary corresponding to the workable area and the restricted area boundary corresponding to a non-working area, the restricted area boundary being located within the working boundary and the restricted area boundary and the working boundary jointly defining the workable area; and the control module is used to control the outdoor robot to perform work tasks within the workable area according to the restricted area boundary and the working boundary.
An embodiment of the present application provides an electronic device, including: a memory and a processor. The memory is used to store a computer program; the processor, coupled to the memory, is used to execute the computer program to implement the steps of the work boundary generation methods and the environment map generation method provided by the embodiments of the present application.
An embodiment of the present application provides an outdoor robot, including: a positioning module and a device body, the device body including a memory and a processor. The memory is used to store a computer program; the processor, coupled to the memory, is used to execute the computer program to implement the steps of the work control method provided by the embodiments of the present application.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the work boundary generation methods, the environment map generation method and the work control method provided by the embodiments of the present application.
In the embodiments of the present application, an outdoor robot is controlled to move along the boundary of a workable area to obtain the working boundary of the workable area in an environment map; the outdoor robot is then controlled to traverse the workable area along the working boundary while environment images of the working boundary are collected, and the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images to obtain a corrected working boundary. Compared with work boundary generation methods that lay magnetic strips, the method provided by this application is simple to operate and efficient, requires no magnetic strips, and saves cost. Moreover, since it is difficult for the outdoor robot to move precisely along the boundary of the workable area, there is an error between the obtained working boundary and the actual boundary of the workable area. In the embodiments of the present application, correcting the working boundary in the environment map with the boundary information of the workable area in the environment images can reduce this error, thereby improving the accuracy of the obtained working boundary.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and constitute a part of the present application. The illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
Figure 1a is a schematic flowchart of a work boundary generation method provided by an exemplary embodiment of the present application;
Figure 1b is a schematic flowchart of another work boundary generation method provided by an exemplary embodiment of the present application;
Figure 2a is a schematic diagram of a working boundary before correction provided by an exemplary embodiment of the present application;
Figure 2b is a schematic diagram of a working boundary after correction provided by an exemplary embodiment of the present application;
Figure 2c is a schematic diagram of correcting a working boundary provided by an exemplary embodiment of the present application;
Figure 3 is a schematic structural diagram of an outdoor robot provided by an exemplary embodiment of the present application;
Figure 4a is a schematic flowchart of still another work boundary generation method provided by an exemplary embodiment of the present application;
Figure 4b is a schematic flowchart of yet another work boundary generation method provided by an exemplary embodiment of the present application;
Figure 4c is a schematic flowchart of an environment map generation method provided by an exemplary embodiment of the present application;
Figure 4d is a schematic flowchart of a work control method provided by an exemplary embodiment of the present application;
Figure 5a is a schematic structural diagram of a work boundary generation device provided by an exemplary embodiment of the present application;
Figure 5b is a schematic structural diagram of an environment map generation device provided by an exemplary embodiment of the present application;
Figure 5c is a schematic structural diagram of a work control device provided by an exemplary embodiment of the present application;
Figure 6 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
To address the low efficiency of work boundary generation in the prior art, in the embodiments of the present application an outdoor robot is controlled to move along the boundary of a workable area to obtain the working boundary of the workable area in an environment map; the outdoor robot is then controlled to traverse the workable area along the working boundary while environment images of the working boundary are collected, and the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images to obtain a corrected working boundary. Compared with work boundary generation methods that lay magnetic strips, the method provided by this application is simple to operate and efficient, requires no magnetic strips, and saves cost.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
Figure 1a is a schematic flowchart of a work boundary generation method provided by an exemplary embodiment of the present application. As shown in Figure 1a, the method includes:
101a. Generate the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
102a. Control the outdoor robot to traverse the workable area according to the working boundary, and obtain environment images collected by the outdoor robot during the traversal;
103a. Correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
In this embodiment, the outdoor robot is any device capable of moving autonomously outdoors. The outdoor robot can perform work tasks outdoors, and the type of work task varies with the type of outdoor robot. For example, if the outdoor robot is a snow shoveling robot, the work task may be a snow shoveling task; if the outdoor robot is a lawn mowing robot, the work task may be a mowing task; if the outdoor robot is a disinfection robot, the work task may be a disinfection task.
In this embodiment, the outdoor robot can perform work tasks in a workable area, i.e., an area where work tasks can be performed; for example, the workable area is a relatively flat and safe area. Optionally, corresponding to the workable area is a non-working area, which is a dangerous area such as a swimming pool, a puddle or sunken steps. To ensure the safety of the outdoor robot, the outdoor robot is prohibited from performing work tasks in the non-working area.
In this embodiment, the outdoor robot can be controlled to move along the boundary of the workable area, and the first trajectory information of the outdoor robot is generated during this movement. The manner of controlling the outdoor robot to move along the boundary of the workable area is not limited. For example, the outdoor robot has a remote control in communication connection with the robot, and the user can use the remote control to drive the outdoor robot along the boundary of the workable area; as another example, an application (app) corresponding to the outdoor robot is installed on the user's terminal device, and the user can control the outdoor robot to move along the boundary of the workable area through the app. The outdoor robot has a positioning sensor module, and the first trajectory information can be generated based on the positioning sensor module; different positioning sensor modules generate different first trajectory information. The implementation of the positioning sensor module is described in subsequent embodiments and is not detailed here.
In this embodiment, the first trajectory information may be GPS position information or environmental information, for example, environment images. The manner of generating the working boundary of the workable area on the environment map differs with the first trajectory information. If the first trajectory information is GPS position information, the working boundary of the workable area can be marked on the environment map according to the GPS position information. If the first trajectory information is environment images collected by the outdoor robot while moving along the workable area, the working boundary can be generated on the environment map through Visual Simultaneous Localization And Mapping (VSLAM) technology. The working boundary may be generated on the environment map in real time while the outdoor robot moves along the boundary of the workable area, or after the outdoor robot has finished moving along the boundary of the workable area.
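For the GPS-based variant, the following is a minimal sketch of how a recorded latitude/longitude trace could be turned into a boundary polyline in a local metric map frame. It is an illustration only, not the implementation prescribed by this embodiment; the equirectangular approximation and all function and variable names are assumptions.

```python
import math

def gps_trace_to_boundary(trace_deg, origin_deg):
    """Convert a recorded (lat, lon) trajectory into x/y coordinates (meters)
    relative to an origin (e.g., the charging base), using a simple
    equirectangular approximation that is adequate for yard-sized areas."""
    earth_radius = 6_378_137.0  # WGS-84 equatorial radius in meters
    lat0, lon0 = (math.radians(v) for v in origin_deg)
    boundary = []
    for lat_deg, lon_deg in trace_deg:
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        x = earth_radius * (lon - lon0) * math.cos(lat0)  # east offset
        y = earth_radius * (lat - lat0)                   # north offset
        boundary.append((x, y))
    # Close the polyline so it can be used as the working boundary polygon.
    if boundary and boundary[0] != boundary[-1]:
        boundary.append(boundary[0])
    return boundary
```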
In this embodiment, while the user controls the outdoor robot to move along the boundary of the workable area, it is difficult to follow the boundary completely without error. Therefore, the first trajectory information formed by the outdoor robot moving along the boundary may deviate from the real boundary of the workable area, and the working boundary generated from this first trajectory information may also deviate from the real boundary. The real boundary of the workable area has obvious segmentation features; for example, it may be the dividing line between a lawn and a road, or the boundary between the lawn and other plants. To reduce this error, environment images of the workable area can be collected. The boundary information of the workable area contained in the environment images reflects the real boundary, so the working boundary in the environment map can be corrected according to this boundary information to obtain a corrected working boundary, reducing the error between the working boundary on the environment map and the real boundary and improving the accuracy of the generated working boundary.
In the embodiments of the present application, the outdoor robot is controlled to move along the boundary of the workable area to obtain the working boundary of the workable area in the environment map, and the outdoor robot is controlled to traverse the workable area along the working boundary while environment images of the working boundary are collected; the working boundary in the environment map is corrected according to the boundary information of the workable area in the environment images to obtain the corrected working boundary. Compared with work boundary generation methods that lay magnetic strips, the method provided by this application is simple to operate and efficient, requires no magnetic strips, and saves cost. Further, controlling the outdoor robot to traverse the workable area along the working boundary may be controlling the outdoor robot to move along a boustrophedon ("bow-shaped") trajectory within the workable area so as to cover the area; a simple version of such a path is sketched below. Of course, the shape of the trajectory within the workable area is not limited to a bow shape; it may also be another shape, such as a rectangular-spiral ("回"-shaped) trajectory, which is not limited in this application.
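To illustrate the boustrophedon ("bow-shaped") traversal mentioned above, the sketch below generates back-and-forth sweep lines over the axis-aligned bounding box of the working boundary. A real planner would additionally clip each sweep against the working boundary polygon and the restricted area boundaries; that clipping is omitted here, and all names are illustrative assumptions.

```python
def boustrophedon_path(boundary_xy, swath_width):
    """Generate a simple bow-shaped coverage path over the bounding box of
    the working boundary. boundary_xy is a list of (x, y) vertices; the
    swath width is typically the robot's effective working width."""
    xs = [p[0] for p in boundary_xy]
    ys = [p[1] for p in boundary_xy]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)

    path, y, left_to_right = [], y_min, True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        path.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right  # reverse direction each sweep
        y += swath_width
    return path
```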
An embodiment of the present application also provides another work boundary generation method. As shown in Figure 1b, the method includes:
101b. Generate the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
102b. Obtain environment images collected by the outdoor robot while it moves along the boundary of the workable area;
103b. Correct the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
The difference between the methods shown in Figure 1a and Figure 1b lies in the way the environment images are collected; for the other content of Figure 1b, refer to the foregoing embodiments, which are not detailed again here. In other words, the way of collecting environment images of the workable area is not limited. For example, after the working boundary is generated on the environment map, the outdoor robot may be controlled to traverse the workable area according to the working boundary and collect environment images during the traversal, as shown in Figure 1a. As another example, environment images may be collected while the outdoor robot moves along the boundary of the workable area, so that during this single pass the working boundary is generated on the environment map and the environment images are obtained at the same time, as shown in Figure 1b.
An embodiment of the present application also provides another work boundary generation method, which includes:
Generating the working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
Judging, according to a user instruction, whether the working boundary needs to be revised;
If an instruction indicating that the working boundary does not need to be revised is obtained, taking the working boundary as the target boundary, so that the outdoor robot can move and work according to the target boundary;
If an instruction indicating that the working boundary needs to be revised is obtained, obtaining the boundary segment to be corrected in the working boundary, the boundary segment to be corrected being the entire working boundary or a local working boundary, and obtaining the environment images collected by the outdoor robot, so that the boundary segment to be corrected can be corrected according to the boundary information of the workable area contained in the environment images to obtain a corrected working boundary, and taking the corrected working boundary as the target boundary, so that the outdoor robot can move and work according to the target boundary.
In this way, through the above embodiment of judging whether the working boundary needs to be revised according to a user instruction, the target boundary can be determined according to user needs, thereby improving user satisfaction.
Further, for Figure 1a and Figure 1b described above, this embodiment also includes:
Obtaining the boundary segment to be corrected in the working boundary, the boundary segment to be corrected being the entire working boundary or a local working boundary, so that the boundary segment to be corrected can be corrected according to the boundary information of the workable area contained in the environment images to obtain a corrected working boundary.
In one implementation, the boundary segment to be corrected in the working boundary is obtained according to a user instruction. The user instruction may be a trigger operation that turns on the correction function, or trigger operations by which the user turns the correction function on and off.
Specifically, Example A1: after the working boundary of the workable area is generated, the working boundary is displayed dynamically on the environment map; in response to the user's trigger operation of turning on the correction function, the boundary starting point and the boundary end point of the boundary segment to be corrected are obtained on the environment map, and the boundary starting point and the boundary end point define the boundary segment to be corrected. Specifically, the working boundary corresponding to the workable area is displayed on the environment map; the user can initiate a correction trigger operation for the working boundary on the environment map, and the boundary segment to be corrected is obtained in response to this operation. The boundary segment to be corrected is then corrected according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary. Further, the user can select the boundary segment to be corrected directly on the working boundary on the environment map. For example, the environment map is displayed with the working boundary of the workable area on it; the user can select a first trajectory point and a second trajectory point on the working boundary, and in response to the user's selection operation, the boundary segment between the first trajectory point and the second trajectory point is taken as the boundary segment to be corrected (a sketch of this sub-segment extraction is given below). Alternatively, after the working boundary is generated, for example after the user remotely drives the outdoor robot along the boundary of the workable area to generate the working boundary, the user again remotely drives the robot along the boundary of each dangerous area to set the correction-prohibited boundaries, and the part of the working boundary lying outside the correction-prohibited boundaries is the boundary segment to be corrected.
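The sub-segment extraction mentioned above can be illustrated as follows; representing the two selected trajectory points by their indices on the boundary polyline is an assumption made purely for the sketch.

```python
def segment_to_correct(boundary, start_idx, end_idx):
    """Return the boundary segment between two selected trajectory points on
    a closed boundary polyline (a list of (x, y) points). Walking forward
    from start_idx and wrapping around keeps the segment contiguous."""
    if end_idx >= start_idx:
        return boundary[start_idx:end_idx + 1]
    return boundary[start_idx:] + boundary[:end_idx + 1]
```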
Example A2: while the outdoor robot moves along the boundary of the workable area, in response to the user's trigger operation of turning on the correction function, the boundary point corresponding to the robot's current position is obtained as the boundary starting point; in response to the user's trigger operation of turning off the correction function, the boundary point corresponding to the robot's current position is obtained as the boundary end point; the boundary starting point and the boundary end point define the boundary segment to be corrected. Specifically, while the working boundary of the workable area is being generated, the working boundary is displayed dynamically on the environment map, and the boundary segment to be corrected is obtained in response to a correction trigger operation initiated by the user for the working boundary.
Optionally, the boundary segment to be corrected has a first visualization attribute different from that of the other boundary segments, indicating that this segment needs to be corrected. The first visualization attribute may be information such as the color, thickness or line style of the boundary segment.
Further optionally, to prevent the corrected working boundary from including a dangerous area, while the outdoor robot moves along the boundary of the workable area the user turns on the correction function, and in response the boundary point corresponding to the robot's current position is determined as the boundary starting point; the robot continues to move along the boundary of the workable area, the user turns off the correction function, and in response the boundary point corresponding to the robot's current position is determined as the boundary end point. In this way the user can confirm the boundary starting point, the boundary end point, and the boundary segment to be corrected that they define, ensuring that the boundary segment to be corrected does not include a dangerous area. The dangerous area may be a swimming pool, a sand pit, sunken steps, a flower bed, a vegetable plot, or the like. For example, when the user turns on the correction function button on an operation interface (for example, an APP operation interface), the user needs to confirm whether the robot's current position is a dangerous area. If the user confirms that the current position is not a dangerous area, the current position is taken as the boundary starting point; if the user confirms that the current position is a dangerous area, the current position is not taken as the boundary starting point. Further, the robot continues to move along the boundary of the workable area, and during this movement the user can confirm the robot's position in real time; when a dangerous area is encountered, the user can turn off the correction function button through the operation interface, and the robot's position at that moment is the boundary end point.
In an optional embodiment, semantic segmentation may be performed on the collected environment images to obtain the image region corresponding to the workable area, the image boundary of the image region corresponding to the boundary information of the workable area; the working boundary in the environment map is then corrected according to the image boundary of the image region, to obtain the corrected working boundary. A neural network model may be used to perform semantic segmentation on the environment images; the neural network model may include, but is not limited to, Region-Convolutional Neural Networks (R-CNN), Fast Region-Convolutional Neural Networks (Fast R-CNN), the You Only Look Once (YOLO) model, or the Single Shot MultiBox Detector (SSD), which is not limited here.
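As a sketch of this step, the code below leaves the neural segmentation model behind a placeholder function (the embodiment does not fix a specific network) and extracts the boundary of the predicted lawn region from the binary mask with OpenCV; all function names are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_workable_area(image_bgr):
    """Placeholder for the trained segmentation model (e.g., an R-CNN-,
    YOLO- or SSD-style network as named above); returns a binary mask in
    which workable-area (lawn) pixels are 1 and everything else is 0."""
    raise NotImplementedError("plug in the trained segmentation model here")

def workable_boundary_pixels(image_bgr):
    """Run semantic segmentation and return the contour of the workable
    region, i.e., the pixel coordinates of its image boundary."""
    mask = segment_workable_area(image_bgr).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)  # keep the main region
    return largest.reshape(-1, 2)  # (u, v) pixel coordinates of the boundary
```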
Optionally, the image boundary of the image region is mapped into the environment map in combination with the pose data of the outdoor robot at the time the environment images were collected, to obtain a reference boundary. Specifically, the position coordinates of the image boundary of the image region corresponding to the workable area in the machine coordinate system can be computed from the depth information in the environment images, where the machine coordinate system is the coordinate system of the outdoor robot; in combination with the pose data of the outdoor robot at the time the environment images were collected, the position coordinates of the image boundary in the machine coordinate system are then converted into position coordinates in the map coordinate system to obtain the reference boundary, where the map coordinate system is the coordinate system of the environment map. The boundary segment to be corrected on the working boundary is then corrected according to the distances between corresponding boundary positions on the working boundary and the reference boundary, to obtain the corrected working boundary.
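A minimal numeric sketch of the two conversions described above, under the simplifying assumptions of a pinhole camera model, a known camera-to-robot mounting transform, and a planar (x, y, yaw) robot pose; all parameter names are assumptions made for illustration.

```python
import numpy as np

def pixel_to_machine(u, v, depth, fx, fy, cx, cy, T_cam_to_robot):
    """Back-project a boundary pixel with its measured depth into the camera
    frame via the pinhole model, then move it into the machine (robot)
    coordinate system using the known sensor mounting transform (4x4)."""
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (T_cam_to_robot @ p_cam)[:3]

def machine_to_map(p_robot, robot_x, robot_y, robot_yaw):
    """Transform a point from the machine coordinate system into the map
    (world) coordinate system using the robot pose at image-capture time."""
    c, s = np.cos(robot_yaw), np.sin(robot_yaw)
    x = robot_x + c * p_robot[0] - s * p_robot[1]
    y = robot_y + s * p_robot[0] + c * p_robot[1]
    return np.array([x, y])
```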
In an optional embodiment, correcting the boundary segment to be corrected on the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary to obtain the corrected working boundary includes:
Judging whether the distance between the boundary segment to be corrected on the working boundary and the corresponding boundary position on the reference boundary is less than or equal to a first distance threshold; if it is less than or equal to the first distance threshold, correcting the boundary segment to be corrected to the corresponding boundary position on the reference boundary;
If the distance between the boundary segment to be corrected and the corresponding boundary position on the reference boundary is greater than the first distance threshold, correcting the boundary segment to be corrected to the position corresponding to the first distance threshold.
Specifically, Case B1: to avoid excessive correction error caused by misrecognition of the reference boundary, the boundary positions on the boundary segment to be corrected can be divided into two kinds according to the distance between the working boundary and the reference boundary, namely first boundary positions and second boundary positions, and the two kinds are corrected separately to reduce the correction error. A first boundary position is a boundary position on the boundary segment to be corrected whose distance from the corresponding boundary position on the reference boundary is less than or equal to a set first distance threshold; a second boundary position is a boundary position on the boundary segment to be corrected whose distance from the corresponding boundary position on the reference boundary is greater than the first distance threshold. The first distance threshold and the second distance threshold may be the same or different; the first distance threshold may be, for example, 20 centimeters, 50 centimeters or 1 meter. Preferably, the first distance threshold may be the body width of the outdoor robot.
A boundary segment to be corrected may contain first boundary positions, second boundary positions, or both. If a first boundary position exists on the boundary segment to be corrected, the first boundary position is corrected to its corresponding boundary position on the reference boundary; if a second boundary position exists, the second boundary position is corrected to the position corresponding to the first distance threshold.
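The handling of first and second boundary positions amounts to clamping each correction to the first distance threshold. A minimal sketch, assuming the working-boundary points have already been paired with their corresponding reference-boundary points and using 0.5 m as an example threshold:

```python
import numpy as np

def correct_boundary(work_pts, ref_pts, first_distance_threshold=0.5):
    """Correct working-boundary points toward their corresponding reference-
    boundary points. Points within the threshold snap to the reference
    boundary (first boundary positions); points farther away move only by
    the threshold (second boundary positions), limiting the effect of
    possible segmentation errors."""
    corrected = []
    for w, r in zip(np.asarray(work_pts, float), np.asarray(ref_pts, float)):
        offset = r - w
        dist = np.linalg.norm(offset)
        if dist <= first_distance_threshold:
            corrected.append(r)  # snap to the reference boundary
        else:
            # move toward the reference boundary, but at most by the threshold
            corrected.append(w + offset * (first_distance_threshold / dist))
    return np.array(corrected)
```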
In one implementation, the working boundary of the workable area is far away from the restricted area boundary of the non-working area, where the restricted area boundary is the boundary of a non-working area located within the working boundary, and the restricted area boundary and the working boundary jointly define the workable area. The working boundary being far away from the restricted area boundary can be understood as the distance from the working boundary to the restricted area boundary being greater than a set second distance threshold; the second distance threshold is not limited and may be, for example, 20 centimeters, 50 centimeters or 1 meter. In this case, the working boundary contains no non-working area inside it, or it contains a non-working area but the working boundary is far away from the restricted area boundary.
In another implementation, the working boundary of the workable area is close to the restricted area boundary of the non-working area, which can be understood as the distance from the working boundary to the restricted area boundary being less than or equal to the set second distance threshold. In this case, the working boundary contains a non-working area inside it, and some boundary segments of the working boundary are close to the restricted area boundary.
Considering that, if a working boundary close to a restricted area boundary is corrected, the corrected working boundary may come close to, or even fall inside, the restricted area boundary, the outdoor robot performing work tasks within the corrected working boundary might then have to work in the non-working area, which is dangerous. Therefore, to protect the safety of the outdoor robot, the corrected working boundary must be kept away from the non-working area. In the absence of a user instruction, an area to be corrected can be selected from the workable area, where the area to be corrected refers to the part of the workable area far away from the restricted area boundary; the area to be corrected is corrected, while the other parts of the workable area close to the restricted area boundary are not corrected.
Specifically, the local reference boundary, i.e., the part of the reference boundary located within the area to be corrected, and the local working boundary, i.e., the part of the working boundary located within the area to be corrected, are obtained; the local working boundary is then corrected according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary.
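One way to obtain the local working boundary within the area to be corrected is to keep only the working-boundary points whose distance to every restricted area boundary exceeds the second distance threshold. The sketch below uses brute-force point-to-point distances purely for illustration; names and the threshold value are assumptions.

```python
import numpy as np

def local_boundary_mask(work_pts, restricted_boundaries,
                        second_distance_threshold=0.5):
    """Return a boolean mask over the working-boundary points marking those
    that lie in the area to be corrected, i.e., far away from every
    restricted area boundary. Distances are approximated point-to-point."""
    work = np.asarray(work_pts, float)
    mask = np.ones(len(work), dtype=bool)
    for boundary in restricted_boundaries:
        b = np.asarray(boundary, float)
        # distance from each working-boundary point to its nearest point
        # on this restricted area boundary
        d = np.linalg.norm(work[:, None, :] - b[None, :, :], axis=2).min(axis=1)
        mask &= d > second_distance_threshold
    return mask
```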
Optionally, to avoid excessive correction error caused by misrecognition of the reference boundary, the boundary positions on the local working boundary can be divided into two kinds according to the distance between the local working boundary and the local reference boundary, namely third boundary positions and fourth boundary positions, and the two kinds are corrected separately to reduce the correction error. A third boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is less than or equal to the set first distance threshold; a fourth boundary position is a boundary position on the local working boundary whose distance from the corresponding boundary position on the local reference boundary is greater than the first distance threshold.
A local working boundary may contain third boundary positions, fourth boundary positions, or both. If a third boundary position exists on the local working boundary, it is corrected to its corresponding boundary position on the local reference boundary; if a fourth boundary position exists, it is corrected to the position corresponding to the first distance threshold. As shown in Figure 2c, the solid line represents the corrected working boundary.
In an optional embodiment, a process of generating a restricted area boundary on the environment map is also included, where the restricted area boundary is located within the working boundary of the workable area. Specifically, second trajectory information of the outdoor robot is obtained, the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area; the restricted area boundary of the non-working area is generated on the environment map according to the second trajectory information. As shown in Figures 2a and 2b, the working boundary is drawn with a solid line and the restricted area boundaries with dashed lines; Figures 2a and 2b illustrate three non-working areas and one work area as an example, but the application is not limited thereto. The three non-working areas are non-working area D1, non-working area D2 and non-working area D3; for non-working area D1, which is located at the edge of the working boundary, the working boundary does not need to be corrected. Figure 2a is a schematic diagram before the local working boundary within the area to be corrected is corrected, and Figure 2b is a schematic diagram after it is corrected.
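Since each restricted area boundary must lie within the working boundary, a containment check can be useful when a newly recorded restricted boundary is registered on the environment map. The standard ray-casting test sketched below is one way to perform such a check; it is provided for illustration only and is not prescribed by this embodiment.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a horizontal ray from (x, y)
    against the polygon edges; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def restricted_within_working(restricted_boundary, working_boundary):
    """Check that every vertex of a restricted area boundary lies inside
    the working boundary polygon."""
    return all(point_in_polygon(x, y, working_boundary)
               for x, y in restricted_boundary)
```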
As shown in Figure 3, the outdoor robot includes: a device body 31 and a positioning module 32. The device body 31 includes a memory 33, a processor 34, a first power supply component, an image sensor 35 and an area-array depth sensor 36. The positioning module 32 includes: a second power supply component, a processor, a memory, a communication component and a positioning sensor module. Further, the outdoor robot also includes components such as wheels, drive motors and a working height adjustment device.
The power supply components provide power for the various components of the outdoor robot; a power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device in which it is located. The power supply component may be implemented as a rechargeable battery. The positioning sensor module is used for mapping and positioning, the image sensor is used to collect environment images of the workable area, and the area-array depth sensor collects the depth information of the environment images; the image sensor and the area-array depth sensor jointly determine the reference boundary corresponding to the working boundary. The camera in the positioning sensor module may be a panoramic camera capable of capturing 360-degree images, the image sensor may be implemented as a pinhole camera, and the area-array depth sensor may be implemented as a binocular camera or another sensor capable of measuring depth information.
The positioning module 32 is a detachable module mounted in a fixed slot of the outdoor robot. Through the fixed slot, the positioning module can be communicatively connected to the main board of the outdoor robot, and based on this connection the positioning module and the outdoor robot can exchange information; for example, the positioning module provides positioning for the outdoor robot while it performs work tasks.
Through the fixed slot on the outdoor robot, the robot can also charge the power supply of the positioning module; when the positioning module is removed from the fixed slot, the second power supply component powers the other components of the positioning module. The positioning sensor module is used to obtain the positioning data of the positioning module; the processor computes and integrates the positioning data to obtain the position information of the positioning module; the memory stores the position information or the positioning data of the positioning module; and the communication module exchanges information with terminal devices.
The positioning sensor module can be implemented in multiple ways, examples of which are given below.
Example B1: the positioning sensor module includes a camera and an Inertial Measurement Unit (IMU). The camera is mainly used to collect environment images of the environment in which the positioning module is located, and the IMU is mainly used to detect the acceleration and angular velocity of the positioning module; the data detected by the camera and the IMU determine the position information of the positioning module, and mapping and positioning are performed through visual SLAM technology.
Example B2: the positioning sensor module includes Ultra Wide Band (UWB), a barometer and an IMU. Ultra wide band technology achieves wireless transmission by sending and receiving extremely narrow pulses at the nanosecond or sub-microsecond level; because the pulse width is extremely short, an ultra-wide spectrum can be realized, with a bandwidth usually above 500 MHz. UWB positioning technology uses anchor nodes and bridge nodes with known positions arranged in advance to communicate with newly added blind nodes, and performs positioning by measuring the transmission delay differences between different base stations and the positioning module. The main UWB positioning methods fall into two categories: composite angle positioning and time-difference positioning. Composite angle positioning is based on radio direction finding: multiple radio monitoring stations take bearings on the same signal (the signal emitted by the positioning module), and positioning is performed from the intersection of the direction-finding rays. Time-difference positioning is based on the time at which the signal reaches the monitoring stations, converting time into distance to perform intersection positioning. Specifically, by measuring the time at which the signal reaches a monitoring station, the distance to the signal source (e.g., the positioning module) can be determined; using the distances from the signal source to multiple radio monitoring stations (circles centered on the stations with the distances as radii), the position of the signal source (the positioning module) can be determined.
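The time-difference method ultimately yields distances from the blind node to anchors at known positions; the planar least-squares multilateration sketch below shows one way those distances could then be solved for the blind node's position. It is illustrative only; the anchor layout and all names are assumptions.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Solve the 2D position of a UWB blind node from distances to at least
    three anchors at known positions, by linearizing the range equations
    against the first anchor and solving the result in a least-squares sense."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x1, y1 = anchors[0]
    d1 = ranges[0]
    # 2*(xi-x1)*x + 2*(yi-y1)*y = (xi^2+yi^2-di^2) - (x1^2+y1^2-d1^2)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - ranges[1:] ** 2) \
        - (x1 ** 2 + y1 ** 2 - d1 ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position  # (x, y) of the blind node

# Example: three anchors at known corners, all 7.07 m from the blind node,
# which recovers a position near (5, 5).
# print(multilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))
```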
A barometer is an instrument for measuring atmospheric pressure based on the experimental principle of Torricelli (Evangelista Torricelli, 1608-1647); it can be used to measure altitude, the mercury column falling by about 1 millimeter for every 12 meters of ascent. In practice, UWB, the barometer and the IMU can be combined for fused mapping and positioning: several anchor nodes with known positions are first arranged in the environment to initialize the coordinate system and establish a base coordinate system; the positioning module carries a UWB signal receiving module acting as a blind node, and the UWB positioning method is used to compute the position coordinates of the blind node; the information measured by the barometer and the IMU is then fused to compute more accurate position coordinates of the positioning module.
Example B3: the positioning sensor module includes Real Time Kinematic (RTK) positioning and an IMU sensor; the combination of RTK and IMU is used for simultaneous mapping and positioning.
In GPS surveying, modes such as static, fast static and dynamic surveying all require post-processing to achieve centimeter-level accuracy, whereas Real Time Kinematic (RTK) positioning is a surveying method that obtains centimeter-level positioning accuracy in the field in real time. RTK is a real-time differential GPS (RTDGPS) technique based on carrier-phase observations, and consists of three parts: a base station receiver, a data link, and a rover receiver. A receiver installed at the base station serves as the reference station, observing the satellites continuously and sending its observation data and station information in real time to the rover through radio transmission equipment. While receiving GPS satellite signals, the rover's GPS receiver also receives the data transmitted by the base station through its wireless receiving equipment, and then, according to the principle of relative positioning, solves the rover's three-dimensional coordinates and their accuracy in real time (that is, the coordinate differences △X, △Y, △H between the base station and the rover are added to the base coordinates to obtain the WGS-84 coordinates of each point, from which the plane coordinates X, Y and the elevation H of each rover point are obtained through coordinate transformation parameters).
By combining the RTK and IMU modules, the position coordinate information of the positioning module can be obtained.
Example B4: the positioning sensor module may also be another combination of sensors for simultaneous mapping and positioning; for example, the positioning sensor module may contain a camera, an IMU, UWB and a barometer, or a camera, RTK and an IMU. Such combinations can yield position coordinate information of the positioning module with higher accuracy.
Scenario-based embodiment
The outdoor robot is implemented as a lawn mowing robot with a positioning sensor module, an image sensor and an area-array depth sensor. The image information collected by the positioning sensor module can be used for mapping and positioning, the environment images collected by the image sensor are used for semantic segmentation, and the area-array depth sensor collects the three-dimensional depth information of the working boundary of the target area.
As shown in Figure 4a, when the lawn mowing robot has the positioning function, the user first remotely drives the robot from the charging base to the working boundary of the workable area, selects a starting position, drives the robot once around the working boundary back to the starting position, and then remotely drives the robot back to the charging base. While moving around the working boundary, the robot collects environment images for positioning and mapping, records its first trajectory information, and generates the working boundary of the workable area on the environment map according to the first trajectory information. The user then remotely drives the robot from the charging base again to a non-working area, i.e., a dangerous area (such as a swimming pool, a pond or a large obstacle), and moves it along the boundary of the dangerous area to set the restricted area boundary. The specific procedure is: the user remotely drives the robot from the charging base to the restricted area boundary, selects a starting position on the boundary, drives the robot once around the boundary back to the starting position, at which point the restricted area boundary of dangerous area X1 is determined; the robot is then driven along the boundaries of the other dangerous areas in turn until the restricted area boundaries of all dangerous areas have been set, after which the user remotely drives the robot back to the charging base. During this process, the second trajectory information corresponding to the restricted area boundaries is generated, and the restricted area boundaries are generated on the environment map based on the second trajectory information.
When the user remotely drives the lawn mowing robot along the working boundary, it is difficult to follow the boundary line completely without error; that is, there is some error between the working boundary generated from the robot's movement trajectory and the actual lawn boundary. In one implementation, while the user remotely drives the robot along the working boundary, the user triggers a boundary position correction operation, and the boundary segment to be corrected in the working boundary is thereby obtained.
As shown in Figure 4a, because the lawn mowing robot can localize itself, record the world coordinates of its movement trajectory, and synchronize this coordinate trajectory to the mobile APP for real-time display, the user can decide whether to trigger the boundary position correction operation while remotely driving the robot. When the user turns on the boundary-position-correction button in the app, the app asks the user to confirm that the part of the boundary to be corrected contains no dangerous areas, such as swimming pools, sand pits, sunken steps, flower beds or vegetable plots. After the user confirms safety, the trajectory from that coordinate point onward is synchronized to the mobile APP in real time and displayed in a special color, so that the user understands that this boundary segment will be corrected according to the lawn boundary. When the user remotely drives the robot to the boundary of a dangerous area, such as a swimming pool, a sand pit, sunken steps, a flower bed or a vegetable plot, the user can turn off the boundary-position-correction button in the app; from that coordinate point onward the trajectory is synchronized to the mobile APP in real time and displayed in another special color, so that the user understands that this boundary segment will follow the user's remote-controlled boundary strictly, with no boundary position correction performed.
其中,当开始遥控割草机器人沿着工作区域的边界移动,生成工作边界时,割草机器人会通过手机APP端询问用户是否设置为边界定位模式,如果用户选择是,则开始记录定位模块在世界坐标系中的坐标,并生成移动轨迹信息,并将该移动轨迹信息设置为目标区域的作业边界;并将此轨迹信息实时同步到手机APP端并实时显示,使用户能够实时观察割草机器人在环境地图上的移动轨迹信息。
If the user finds that the robot has been steered to a wrong position, the user can tap the edit-trajectory button in the app. The app then prompts the user to drive the robot back to the start of the erroneous trajectory on the original path. Once the robot is back at that start point, the app displays the robot's current position and shows the erroneous trajectory segment, from the current point (the start of the erroneous trajectory) to the position the positioning module occupied when the edit button was tapped, in a color different from the correct segments. The user can tap this erroneous segment to delete it; after deletion, the user taps the resume-recording button and continues steering the robot along the target area's boundary while recording and display continue.
Setting the boundary regions that are exempt from correction can be done during the remote boundary-definition stage, or after the robot has completed one circuit of the target area's boundary but before the boundary definition is confirmed, by having the user steer the robot along the hazardous-area boundaries again to set the correction-exempt boundaries.
After all no-go boundaries have been set, the user drives the robot back to the charging dock. The app then displays the working boundary and the no-go boundaries, and the user judges whether they match expectations. If so, the user presses the confirm button, and the robot enters automatic working mode and executes the mowing task within the working area; if not, the user cancels the boundaries generated this time and regenerates the working boundary and no-go boundaries following the previous steps.
Once the robot enters automatic working mode, it departs from the charging dock again, enters the working boundary and begins mapping in a boustrophedon ("S-shaped") pattern. During this phase the robot does not mow. It captures environment images with the image sensor's camera, identifies the lawn region by image semantic segmentation, measures the lawn boundary's position in the robot coordinate frame with the area-array depth sensor, and then, using the robot's pose together with the relative pose between the depth sensor and the robot, transforms the lawn boundary coordinates from the robot frame into the world frame. The boundary segments marked for correction in the remotely generated working boundary are then corrected toward the world-frame lawn boundary so that they hug the true edge more closely. In addition, to prevent excessive correction error caused by mistakes in the lawn semantic segmentation, a correction range is imposed: the distance by which a corrected boundary segment may move away from the remotely generated working boundary must not exceed a first distance threshold, for example ±50 cm; other values may of course be used, and this application imposes no restriction on it. A minimal sketch of this clamped correction follows.
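The following sketch assumes the taught boundary points have already been matched one-to-one with points on the vision-derived reference boundary (the matching step itself is not shown); the function name and the 0.5 m default are illustrative.

```python
import numpy as np

def correct_boundary(taught_pts, reference_pts, max_shift=0.5):
    """Pull each taught boundary point toward its matched point on the
    vision-derived reference boundary, but never move it by more than
    max_shift metres (the first distance threshold, i.e. the +/-50 cm above).
    taught_pts, reference_pts: (N, 2) arrays of matched world-frame points."""
    taught = np.asarray(taught_pts, dtype=float)
    ref = np.asarray(reference_pts, dtype=float)
    delta = ref - taught
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    # Within the threshold: snap to the reference point; beyond it: clamp the
    # shift to max_shift along the direction toward the reference point.
    scale = np.where(dist <= max_shift, 1.0, max_shift / np.maximum(dist, 1e-9))
    return taught + delta * scale
```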
After the robot has finished exploring the entire working area, that is, after it has localized and mapped the whole area, corrected the working boundary and obtained the corrected working boundary, it returns to the charging dock, then sets out from the dock again and begins executing the mowing task.
Fig. 4b is a schematic flowchart of yet another work boundary generation method provided by an embodiment of this application. As shown in Fig. 4b, the method includes:
401b. Acquire first trajectory information and second trajectory information of the outdoor robot, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area;
402b. According to the first and second trajectory information of the outdoor robot, generate on the environment map the working boundary of the workable area and the no-go boundary of the non-working area respectively, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
In an optional embodiment, the method provided by this embodiment further includes: controlling the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary, and acquiring the environment images captured by the robot during the traversal; and correcting the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
Fig. 4c is a schematic flowchart of an environment map generation method provided by an exemplary embodiment of this application. As shown in Fig. 4c, the method includes:
401c. Acquire first trajectory information and second trajectory information of the outdoor robot, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area;
402c. According to the first and second trajectory information of the outdoor robot, generate on the environment map the working boundary of the workable area and the no-go boundary of the non-working area respectively, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area;
403c. According to the working boundary and the no-go boundary, control the outdoor robot to traverse the workable area, capture surrounding environment images during the traversal, and add other environment information within the workable area to the environment map according to those images.
Fig. 4d is a schematic flowchart of a work control method provided by an embodiment of this application. As shown in Fig. 4d, the method is applied to an outdoor robot and includes:
401d. Receive a work instruction, the work instruction instructing the outdoor robot to execute a working task within the workable area;
402d. Acquire the environment map of the workable area, the environment map including the working boundary corresponding to the workable area and the no-go boundary corresponding to a non-working area, the no-go boundary lying within the working boundary, and the no-go boundary and the working boundary jointly delimiting the workable area;
403d. Control the outdoor robot to execute the working task within the workable area according to the no-go boundary and the working boundary.
For details of Figs. 4b to 4d, refer to the foregoing embodiments; they are not repeated here.
In the embodiments of this application, the outdoor robot is controlled to move along the boundary of the workable area, obtaining that area's working boundary in the environment map; the robot is then controlled to traverse the workable area along the working boundary while capturing environment images of the boundary, and the working boundary in the environment map is corrected according to the workable-area boundary information in those images, yielding the corrected working boundary. Compared with generating a working boundary by laying magnetic strips, the method provided here is simpler to operate and more efficient, and since no magnetic strips are laid, it saves cost. Moreover, because the outdoor robot cannot move precisely along the workable area's boundary, the generated working boundary deviates from the actual boundary of the workable area; in the embodiments of this application, correcting the working boundary in the environment map with the boundary information from the environment images reduces this deviation and thereby improves the accuracy of the resulting working boundary.
It should be noted that the steps of the methods provided in the above embodiments may all be executed by the same device, or the methods may be executed with different devices as the executing entities. For example, steps 101a to 103a may be executed by device A; or steps 101a and 102a may be executed by device A and step 103a by device B; and so on.
In addition, some of the flows described in the above embodiments and figures contain operations that appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Sequence numbers such as 101a and 102a merely distinguish the different operations and imply no execution order, and these flows may include more or fewer operations, executed sequentially or in parallel. It should also be noted that descriptions such as "first" and "second" herein distinguish different messages, devices, modules and the like; they imply no precedence and do not require that "first" and "second" be of different types.
Fig. 5a is a schematic structural diagram of a work boundary generation apparatus provided by an exemplary embodiment of this application. As shown in Fig. 5a, the apparatus includes a generation module 51a, a control module 52a, an acquisition module 53a and a correction module 54a.
The generation module 51a is configured to generate the working boundary of the workable area on the environment map according to the outdoor robot's first trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area;
the control module 52a is configured to control the outdoor robot to traverse the workable area according to the working boundary, and the acquisition module 53a is configured to acquire the environment images captured by the robot during the traversal;
the correction module 54a is configured to correct the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
In an optional embodiment, the correction module 54a is specifically configured to: dynamically display the working boundary on the environment map while it is being generated; and, in response to a correction-trigger operation initiated by the user for the working boundary, correct the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
Optionally, the correction module 54a is specifically configured to: in response to the user's correction-trigger operation, determine the boundary segment of the working boundary to be corrected, the segment to be corrected being the entire working boundary or a local portion of it; and correct that segment according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
Further optionally, the correction module 54a is specifically configured to: while the outdoor robot moves along the workable area's boundary, in response to a user operation turning the correction function on, take the boundary point corresponding to the robot's current position as the boundary start point; in response to a user operation turning the correction function off, take the boundary point corresponding to the robot's current position as the boundary end point; the boundary start point and boundary end point delimit the boundary segment to be corrected.
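A minimal sketch of this on/off marking, assuming the toggle operations are delivered as a set of indices into the recorded trajectory; all names are illustrative.

```python
def mark_segments(trajectory, toggle_indices):
    """Split a recorded boundary trajectory into (start, end, correctable)
    segments from the user's correction-switch toggles. trajectory is a list
    of points; toggle_indices is a set of point indices at which the user
    flipped the switch."""
    segments, start, correct_on = [], 0, False
    for i in range(len(trajectory)):
        if i in toggle_indices:
            if i > start:
                segments.append((start, i, correct_on))  # close previous run
            start, correct_on = i, not correct_on
    segments.append((start, len(trajectory), correct_on))  # trailing run
    return segments
```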
In an optional embodiment, the correction module 54a is specifically configured to: perform semantic segmentation on the environment image to obtain the image region corresponding to the workable area, the image boundary of that region corresponding to the workable area's boundary information; and correct the working boundary in the environment map according to the image boundary of the image region, to obtain the corrected working boundary.
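As an illustration of extracting the boundary information from a segmentation result, the following sketch takes a per-pixel label map and returns the pixel boundary of the workable (lawn) region. The label value, the choice of the largest external contour and the use of OpenCV are assumptions of this sketch, not requirements of the method.

```python
import cv2
import numpy as np

def lawn_boundary_pixels(segmentation_map, lawn_label=1):
    """Return the (u, v) pixel coordinates of the workable region's image
    boundary, extracted from a per-pixel semantic label map."""
    mask = (segmentation_map == lawn_label).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)  # keep the main lawn region
    return largest.reshape(-1, 2)
```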
Optionally, the correction module 54a is specifically configured to: map the image boundary of the image region into the environment map in combination with the robot's pose data at the time the environment image was captured, to obtain a reference boundary; and correct the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, to obtain the corrected working boundary; a sketch of this mapping follows.
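A minimal 2D sketch of this mapping, assuming a planar pose (x, y, yaw) and a fixed sensor-to-robot extrinsic transform; both are simplifications, made for illustration, of the general pose data referred to above.

```python
import numpy as np

def boundary_to_world(points_robot, robot_pose, sensor_to_robot=np.eye(3)):
    """Map boundary points measured in the sensor frame into the world frame:
    first through the fixed sensor-to-robot extrinsics, then through the
    robot's pose (x, y, yaw) at capture time. 2D homogeneous transforms;
    sensor_to_robot defaults to identity as an illustrative assumption."""
    x, y, yaw = robot_pose
    c, s = np.cos(yaw), np.sin(yaw)
    robot_to_world = np.array([[c, -s, x],
                               [s,  c, y],
                               [0.0, 0.0, 1.0]])
    pts = np.asarray(points_robot, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])        # (N, 3)
    world = (robot_to_world @ sensor_to_robot @ homo.T).T  # chain the transforms
    return world[:, :2]
```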
In an optional embodiment, the correction module 54a is specifically configured to: if a first boundary position exists on the working boundary, correct the first boundary position to its corresponding boundary position on the reference boundary, the first boundary position being a position on the working boundary whose distance to the corresponding position on the reference boundary is less than or equal to a set first distance threshold; if a second boundary position exists on the working boundary, correct the second boundary position to the position corresponding to the first distance threshold, the second boundary position being a position on the working boundary whose distance to the corresponding position on the reference boundary is greater than the first distance threshold.
In another optional embodiment, the correction module 54a is specifically configured to: obtain the local reference boundary lying within the region to be corrected from the reference boundary, and the local working boundary lying within that region from the working boundary; and correct the local working boundary according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary. The region to be corrected is the part of the workable area away from the no-go boundary, the no-go boundary being the boundary of a non-working area lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
Optionally, the correction module 54a is specifically configured to: if a third boundary position exists on the local working boundary, correct the third boundary position to its corresponding boundary position on the local reference boundary, the third boundary position being a position on the local working boundary whose distance to the corresponding position on the local reference boundary is less than or equal to the set first distance threshold; if a fourth boundary position exists on the local working boundary, correct the fourth boundary position to the position corresponding to the first distance threshold, the fourth boundary position being a position on the local working boundary whose distance to the corresponding position on the local reference boundary is greater than the first distance threshold.
In an optional embodiment, the acquisition module is further configured to acquire the outdoor robot's second trajectory information, formed by the robot moving along the boundary of a non-working area; and the generation module is further configured to generate the non-working area's no-go boundary on the environment map from the second trajectory information, the no-go boundary lying within the working boundary.
Another work boundary generation apparatus is provided by an exemplary embodiment of this application; its implementation structure may be as shown in Fig. 5a, and it includes a generation module, an acquisition module and a correction module;
the generation module is configured to generate the working boundary of the workable area on the environment map according to the outdoor robot's first trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area;
the acquisition module is configured to acquire the environment images captured by the robot while it moves along the boundary of the workable area;
the correction module is configured to correct the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
Yet another work boundary generation apparatus is provided by an exemplary embodiment of this application; its implementation structure may be as shown in Fig. 5a, and it includes an acquisition module and a generation module;
the acquisition module is configured to acquire the outdoor robot's first and second trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area;
the generation module is configured to generate, on the environment map, the working boundary of the workable area and the no-go boundary of the non-working area respectively from the first and second trajectory information, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
In an optional embodiment, the work boundary generation apparatus further includes a control module and a correction module: the control module is configured to control the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary, and to acquire the environment images captured during the traversal; the correction module is configured to correct the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
Fig. 5b is a schematic structural diagram of an environment map generation apparatus provided by an exemplary embodiment of this application. As shown in Fig. 5b, the apparatus includes an acquisition module 51b, a generation module 52b, a traversal module 53b and an addition module 54b;
the acquisition module 51b is configured to acquire the outdoor robot's first and second trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area;
the generation module 52b is configured to generate, on the environment map, the working boundary of the workable area and the no-go boundary of the non-working area respectively from the first and second trajectory information, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area;
the traversal module 53b is configured to control the outdoor robot to traverse the workable area according to the working and no-go boundaries and to capture surrounding environment images during the traversal, and the addition module 54b is configured to add other environment information within the workable area to the environment map according to those images.
Fig. 5c is a schematic structural diagram of a work control apparatus provided by an exemplary embodiment of this application. As shown in Fig. 5c, the apparatus includes a receiving module 51c, an acquisition module 52c and a control module 53c;
the receiving module 51c is configured to receive a work instruction, the work instruction instructing the outdoor robot to execute a working task within the workable area;
the acquisition module 52c is configured to acquire the environment map of the workable area, the environment map including the working boundary corresponding to the workable area and the no-go boundary corresponding to a non-working area, the no-go boundary lying within the working boundary, and the no-go boundary and the working boundary jointly delimiting the workable area;
the control module 53c is configured to control the outdoor robot to execute the working task within the workable area according to the no-go boundary and the working boundary.
For details of Figs. 5a to 5c, refer to the foregoing embodiments; they are not repeated here.
With the work boundary generation apparatus provided by the embodiments of this application, the outdoor robot is controlled to move along the boundary of the workable area, obtaining that area's working boundary in the environment map; the robot is then controlled to traverse the workable area along the working boundary while capturing environment images of the boundary, and the working boundary in the environment map is corrected according to the workable-area boundary information in those images, yielding the corrected working boundary. Compared with generating a working boundary by laying magnetic strips, this method is simpler to operate and more efficient, and since no magnetic strips are laid, it saves cost. Moreover, because the outdoor robot cannot move precisely along the workable area's boundary, the generated working boundary deviates from the actual boundary of the workable area; in the embodiments of this application, correcting the working boundary in the environment map with the boundary information from the environment images reduces this deviation and thereby improves the accuracy of the resulting working boundary.
Fig. 6 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of this application, the electronic device being implemented as a work boundary generation device. As shown in Fig. 6, the device includes a memory 64 and a processor 65.
The memory 64 stores a computer program and may be configured to store various other data to support operations on the work boundary generation device. Examples of such data include instructions for any application or method operating on the device.
The memory 64 may be implemented by any type of volatile or non-volatile storage device or a combination of them, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The processor 65, coupled to the memory 64, executes the computer program in the memory 64 to: generate the working boundary of the workable area on the environment map according to the outdoor robot's first trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area; control the robot to traverse the workable area according to the working boundary, and acquire the environment images captured by the robot during the traversal; and correct the working boundary in the environment map according to the workable-area boundary information contained in those images, to obtain the corrected working boundary.
In an optional embodiment, when correcting the working boundary in the environment map according to the workable-area boundary information contained in the environment images to obtain the corrected working boundary, the processor 65 is specifically configured to: dynamically display the working boundary on the environment map while it is being generated; and, in response to a correction-trigger operation initiated by the user for the working boundary, correct the working boundary in the environment map according to the workable-area boundary information contained in the environment images, to obtain the corrected working boundary.
In an optional embodiment, when responding to the user's correction-trigger operation and correcting the working boundary in the environment map according to the workable-area boundary information in the environment images, the processor 65 is specifically configured to: in response to the user's correction-trigger operation, determine the boundary segment of the working boundary to be corrected, the segment being the entire working boundary or a local portion of it; and correct that segment according to the workable-area boundary information in the environment images, to obtain the corrected working boundary.
In an optional embodiment, when determining the boundary segment to be corrected in response to the user's correction-trigger operation, the processor 65 is specifically configured to: while the outdoor robot moves along the workable area's boundary, in response to a user operation turning the correction function on, take the boundary point corresponding to the robot's current position as the boundary start point; in response to a user operation turning the correction function off, take the boundary point corresponding to the robot's current position as the boundary end point; the boundary start point and boundary end point delimit the boundary segment to be corrected.
In an optional embodiment, when correcting the working boundary in the environment map according to the workable-area boundary information contained in the environment images, the processor 65 is specifically configured to: perform semantic segmentation on the environment image to obtain the image region corresponding to the workable area, the image boundary of that region corresponding to the workable area's boundary information; and correct the working boundary in the environment map according to the image boundary of the image region, to obtain the corrected working boundary.
In an optional embodiment, when correcting the working boundary in the environment map according to the image boundary of the image region, the processor 65 is specifically configured to: map the image boundary of the image region into the environment map in combination with the robot's pose data at the time the environment image was captured, to obtain a reference boundary; and correct the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, to obtain the corrected working boundary.
In an optional embodiment, when correcting the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, the processor 65 is specifically configured to: if a first boundary position exists on the working boundary, correct the first boundary position to its corresponding boundary position on the reference boundary, the first boundary position being a position on the working boundary whose distance to the corresponding position on the reference boundary is less than or equal to a set first distance threshold; if a second boundary position exists on the working boundary, correct the second boundary position to the position corresponding to the first distance threshold, the second boundary position being a position on the working boundary whose distance to the corresponding position on the reference boundary is greater than the first distance threshold.
In an optional embodiment, when correcting the working boundary according to the distances between corresponding boundary positions on the working boundary and the reference boundary, the processor 65 is specifically configured to: obtain the local reference boundary lying within the region to be corrected from the reference boundary, and the local working boundary lying within that region from the working boundary; and correct the local working boundary according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary. The region to be corrected is the part of the workable area away from the no-go boundary, the no-go boundary being the boundary of a non-working area lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
In an optional embodiment, when correcting the local working boundary according to the distances between corresponding boundary positions on the local reference boundary and the local working boundary, the processor 65 is specifically configured to: if a third boundary position exists on the local working boundary, correct the third boundary position to its corresponding boundary position on the local reference boundary, the third boundary position being a position on the local working boundary whose distance to the corresponding position on the local reference boundary is less than or equal to the set first distance threshold; if a fourth boundary position exists on the local working boundary, correct the fourth boundary position to the position corresponding to the first distance threshold, the fourth boundary position being a position on the local working boundary whose distance to the corresponding position on the local reference boundary is greater than the first distance threshold.
In an optional embodiment, the processor 65 is further configured to: acquire the outdoor robot's second trajectory information, formed by the robot moving along the boundary of a non-working area; and generate the non-working area's no-go boundary on the environment map from the second trajectory information, the no-go boundary lying within the working boundary.
Further, as shown in Fig. 6, the work boundary generation device also includes a communication component 66, a display 67, a power supply component 68, an audio component 69 and other components. Fig. 6 shows only some components schematically, which does not mean the device includes only the components shown; the components inside the dashed boxes in Fig. 6 are optional rather than mandatory, depending on the product form of the device.
An embodiment of this application further provides an electronic device implemented as a work boundary generation device, whose implementation structure is the same as or similar to that of the device shown in Fig. 6 and may be implemented with reference to it. The device provided by this embodiment differs from that of the Fig. 6 embodiment mainly in the functions implemented by the processor executing the computer program stored in the memory. For the device provided by this embodiment, its processor executes the computer program stored in the memory to: generate the working boundary of the workable area on the environment map according to the outdoor robot's first trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area; acquire the environment images captured by the robot while it moves along that boundary; and correct the working boundary in the environment map according to the workable-area boundary information contained in those images, to obtain the corrected working boundary.
An embodiment of this application further provides an electronic device implemented as a work boundary generation device, whose implementation structure is the same as or similar to that of the device shown in Fig. 6 and may be implemented with reference to it. The device provided by this embodiment differs from that of the Fig. 6 embodiment mainly in the functions implemented by the processor executing the computer program stored in the memory. For the device provided by this embodiment, its processor executes the computer program stored in the memory to: acquire the outdoor robot's first and second trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area; and generate, on the environment map, the working boundary of the workable area and the no-go boundary of the non-working area respectively from the first and second trajectory information, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
In an optional embodiment, the processor is further configured to: control the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary, and acquire the environment images captured during the traversal; and correct the working boundary in the environment map according to the workable-area boundary information contained in those images, to obtain the corrected working boundary.
Correspondingly, an embodiment of this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the methods shown in Fig. 1a, Fig. 1b and Fig. 4b.
With the work boundary generation device provided by the embodiments of this application, the outdoor robot is controlled to move along the boundary of the workable area, obtaining that area's working boundary in the environment map; the robot is then controlled to traverse the workable area along the working boundary while capturing environment images of the boundary, and the working boundary in the environment map is corrected according to the workable-area boundary information in those images, yielding the corrected working boundary. Compared with generating a working boundary by laying magnetic strips, this method is simpler to operate and more efficient, and since no magnetic strips are laid, it saves cost. Moreover, because the outdoor robot cannot move precisely along the workable area's boundary, the generated working boundary deviates from the actual boundary of the workable area; in the embodiments of this application, correcting the working boundary in the environment map with the boundary information from the environment images reduces this deviation and thereby improves the accuracy of the resulting working boundary.
An embodiment of this application further provides an electronic device implemented as an environment map generation device, whose implementation structure is the same as or similar to that of the work boundary generation device shown in Fig. 6 and may be implemented with reference to it. The environment map generation device provided by this embodiment differs from the device of the Fig. 6 embodiment mainly in the functions implemented by the processor executing the computer program stored in the memory. For the environment map generation device provided by this embodiment, its processor executes the computer program stored in the memory to: acquire the outdoor robot's first and second trajectory information, the first trajectory information being formed by the robot moving along the boundary of the workable area and the second trajectory information by the robot moving along the boundary of a non-working area; generate, on the environment map, the working boundary of the workable area and the no-go boundary of the non-working area respectively from the first and second trajectory information, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area; and control the robot to traverse the workable area according to the working and no-go boundaries, capture surrounding environment images during the traversal, and add other environment information within the workable area to the environment map according to those images.
Correspondingly, an embodiment of this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method shown in Fig. 4c.
An embodiment of this application further provides an outdoor robot, whose structure may be seen in Fig. 3. The outdoor robot includes a device body 31 and a positioning module 32; the device body 31 includes a memory 33 and a processor 34. The memory 33 stores a computer program; the processor 34, coupled to the memory 33, executes the computer program to: receive a work instruction, the work instruction instructing the outdoor robot to execute a working task within the workable area; acquire the environment map of the workable area, the environment map including the working boundary corresponding to the workable area and the no-go boundary corresponding to a non-working area, the no-go boundary lying within the working boundary, and the no-go boundary and the working boundary jointly delimiting the workable area; and control the outdoor robot to execute the working task within the workable area according to the no-go boundary and the working boundary. For the robot's detailed implementation structure, refer to the embodiment shown in Fig. 3; it is not repeated here.
Correspondingly, an embodiment of this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the method shown in Fig. 4d.
The communication component in Fig. 6 above is configured to facilitate wired or wireless communication between its host device and other devices. The host device may access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE or 5G mobile network, or a combination of them. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near-field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), Bluetooth (BT) or other technologies.
The display in Fig. 6 above includes a screen, which may include a liquid-crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touchscreen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the panel; a touch sensor may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it.
The power supply components in Fig. 6 and Fig. 3 above provide power to the various components of their host devices, and may include a power management system, one or more power sources, and other components associated with generating, managing and distributing power for those devices.
The audio component in Fig. 6 above may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive external audio signals when the host device is in an operating mode such as a call mode, a recording mode or a speech-recognition mode; the received audio signal may be further stored in the memory or transmitted via the communication component. In some embodiments, the audio component also includes a loudspeaker for outputting audio signals.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to its embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in them, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to operate in a particular manner, such that the instructions stored in that computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, the instructions executed on the computer or other programmable device thereby providing steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface and memory.
Memory may include non-persistent storage in computer-readable media, in the form of random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic-tape or magnetic-disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include" and any of their variants are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Absent further limitation, an element qualified by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that includes it.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system or a computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The above are merely embodiments of this application and are not intended to limit it. Various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of this application shall fall within the scope of its claims.

Claims (21)

  1. A work boundary generation method, characterized by comprising:
    generating a working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
    controlling the outdoor robot to traverse the workable area according to the working boundary, and acquiring environment images captured by the outdoor robot during the traversal;
    correcting the working boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
  2. The method according to claim 1, characterized by further comprising:
    acquiring a boundary segment of the working boundary to be corrected, the boundary segment to be corrected being the entire working boundary or a local portion of the working boundary, so that the boundary segment to be corrected can be corrected according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary.
  3. The method according to claim 2, characterized in that acquiring the boundary segment of the working boundary to be corrected comprises:
    after the working boundary has been generated, dynamically displaying the working boundary on the environment map;
    in response to a user operation turning a correction function on, acquiring on the environment map a boundary start point and a boundary end point of the boundary segment to be corrected, the boundary start point and the boundary end point delimiting the boundary segment to be corrected.
  4. The method according to claim 2, characterized in that acquiring the boundary segment of the working boundary to be corrected comprises:
    while the outdoor robot moves along the boundary of the workable area, in response to a user operation turning a correction function on, acquiring the boundary point corresponding to the outdoor robot's current position as a boundary start point;
    in response to a user operation turning the correction function off, acquiring the boundary point corresponding to the outdoor robot's current position as a boundary end point; the boundary start point and the boundary end point delimiting the boundary segment to be corrected.
  5. The method according to any one of claims 2 to 4, characterized in that correcting the working boundary in the environment map according to the boundary information of the workable area contained in the environment images, to obtain the corrected working boundary, comprises:
    performing semantic segmentation on the environment images to obtain the image region corresponding to the workable area, the image boundary of the image region corresponding to the boundary information of the workable area;
    correcting the boundary segment to be corrected on the working boundary according to the image boundary of the image region, to obtain the corrected working boundary.
  6. The method according to claim 5, characterized in that correcting the boundary segment to be corrected on the working boundary according to the image boundary of the image region, to obtain the corrected working boundary, comprises:
    mapping the image boundary of the image region into the environment map in combination with pose data of the outdoor robot at the time the environment images were captured, to obtain a reference boundary;
    correcting the boundary segment to be corrected on the working boundary according to the distance between the boundary segment to be corrected on the working boundary and the corresponding boundary position on the reference boundary, to obtain the corrected working boundary.
  7. The method according to claim 6, characterized in that correcting the boundary segment to be corrected on the working boundary according to the distance between the boundary segment to be corrected on the working boundary and the corresponding boundary position on the reference boundary, to obtain the corrected working boundary, comprises:
    judging whether the distance between the boundary segment to be corrected on the working boundary and the corresponding boundary position on the reference boundary is less than or equal to a first distance threshold; if the distance between the boundary segment to be corrected and the corresponding boundary position on the reference boundary is less than or equal to the first distance threshold, correcting the boundary segment to be corrected to the corresponding boundary position on the reference boundary;
    if the distance between the boundary segment to be corrected and the corresponding boundary position on the reference boundary is greater than the first distance threshold, correcting the boundary segment to be corrected to the position corresponding to the first distance threshold.
  8. The method according to claim 7, characterized by further comprising:
    acquiring second trajectory information of the outdoor robot, the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area;
    generating a no-go boundary of the non-working area on the environment map according to the second trajectory information of the outdoor robot, the no-go boundary lying within the working boundary.
  9. A work boundary generation method, characterized by comprising:
    generating a working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
    acquiring environment images captured by the outdoor robot while it moves along the boundary of the workable area;
    correcting the working boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
  10. A work boundary generation method, characterized by comprising:
    acquiring first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area, and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area;
    generating, on an environment map, a working boundary of the workable area and a no-go boundary of the non-working area respectively according to the first trajectory information and the second trajectory information of the outdoor robot, the no-go boundary and the working boundary jointly delimiting the workable area.
  11. The method according to claim 10, characterized by further comprising:
    controlling the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary, and acquiring environment images captured by the outdoor robot during the traversal;
    correcting the working boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
  12. An environment map generation method, characterized by comprising:
    acquiring first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area, and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area;
    generating, on the environment map, a working boundary of the workable area and a no-go boundary of the non-working area respectively according to the first trajectory information and the second trajectory information of the outdoor robot, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area;
    controlling the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary, capturing surrounding environment images during the traversal, and adding other environment information within the workable area to the environment map according to the environment images.
  13. A work control method applied to an outdoor robot, characterized in that the method comprises:
    receiving a work instruction, the work instruction instructing the outdoor robot to execute a working task within a workable area;
    acquiring an environment map of the workable area, the environment map including a working boundary corresponding to the workable area and a no-go boundary corresponding to a non-working area, the no-go boundary lying within the working boundary, and the no-go boundary and the working boundary jointly delimiting the workable area;
    controlling the outdoor robot to execute the working task within the workable area according to the no-go boundary and the working boundary.
  14. A work boundary generation apparatus, characterized by comprising a generation module, a control module and a correction module;
    the generation module being configured to generate a working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
    the control module being configured to control the outdoor robot to traverse the workable area according to the working boundary, and to acquire environment images captured by the outdoor robot during the traversal;
    the correction module being configured to correct the working boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
  15. A work boundary generation apparatus, characterized by comprising a generation module, an acquisition module and a correction module;
    the generation module being configured to generate a working boundary of a workable area on an environment map according to first trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of the workable area;
    the acquisition module being configured to acquire environment images captured by the outdoor robot while it moves along the boundary of the workable area;
    the correction module being configured to correct the working boundary in the environment map according to boundary information of the workable area contained in the environment images, to obtain a corrected working boundary.
  16. A work boundary generation apparatus, characterized by comprising an acquisition module and a generation module;
    the acquisition module being configured to acquire first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area, and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area;
    the generation module being configured to generate, on an environment map, a working boundary of the workable area and a no-go boundary of the non-working area respectively according to the first trajectory information and the second trajectory information of the outdoor robot, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area.
  17. An environment map generation apparatus, characterized by comprising an acquisition module, a generation module, a traversal module and an addition module;
    the acquisition module being configured to acquire first trajectory information and second trajectory information of an outdoor robot, the first trajectory information being formed by the outdoor robot moving along the boundary of a workable area, and the second trajectory information being formed by the outdoor robot moving along the boundary of a non-working area;
    the generation module being configured to generate, on the environment map, a working boundary of the workable area and a no-go boundary of the non-working area respectively according to the first trajectory information and the second trajectory information of the outdoor robot, the no-go boundary lying within the working boundary, the no-go boundary and the working boundary jointly delimiting the workable area;
    the traversal module being configured to control the outdoor robot to traverse the workable area according to the working boundary and the no-go boundary and to capture surrounding environment images during the traversal, and the addition module being configured to add other environment information within the workable area to the environment map according to the environment images.
  18. A work control apparatus, characterized by comprising a receiving module, an acquisition module and a control module;
    the receiving module being configured to receive a work instruction, the work instruction instructing an outdoor robot to execute a working task within a workable area;
    the acquisition module being configured to acquire an environment map of the workable area, the environment map including a working boundary corresponding to the workable area and a no-go boundary corresponding to a non-working area, the no-go boundary lying within the working boundary, and the no-go boundary and the working boundary jointly delimiting the workable area;
    the control module being configured to control the outdoor robot to execute the working task within the workable area according to the no-go boundary and the working boundary.
  19. An electronic device, characterized by comprising a memory and a processor; the memory being configured to store a computer program; the processor, coupled to the memory, being configured to execute the computer program so as to implement the steps of the method according to any one of claims 1 to 12.
  20. An outdoor robot, characterized by comprising a device body, the device body comprising a memory and a processor; the memory being configured to store a computer program; the processor, coupled to the memory, being configured to execute the computer program so as to implement the steps of the method according to claim 13.
  21. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, it causes the processor to implement the steps of the method according to any one of claims 1 to 13.
PCT/CN2023/093536 2022-05-13 2023-05-11 Work boundary generation method, work control method, device and storage medium WO2023217231A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210524794.6 2022-05-13
CN202210524794.6A CN117095044A (zh) 2022-05-13 2022-05-13 Work boundary generation method, work control method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2023217231A1 true WO2023217231A1 (zh) 2023-11-16

Family

ID=88729760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/093536 WO2023217231A1 (zh) 2022-05-13 2023-05-11 Work boundary generation method, work control method, device and storage medium

Country Status (2)

Country Link
CN (1) CN117095044A (zh)
WO (1) WO2023217231A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117859500B * 2024-03-12 2024-05-24 锐驰激光(深圳)有限公司 Method, apparatus, device and storage medium for preventing a mower from crossing a boundary

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9939814B1 (en) * 2017-05-01 2018-04-10 Savioke, Inc. Computer system and method for automated mapping by robots
CN109410290A * 2017-08-16 2019-03-01 广州极飞科技有限公司 Method and apparatus for determining the boundary of a working area
CN109658432A * 2018-12-27 2019-04-19 南京苏美达智能技术有限公司 Boundary generation method and system for a mobile robot
CN114077249A * 2021-10-22 2022-02-22 陕西欧卡电子智能科技有限公司 Working method, working device, apparatus, and storage medium
CN114442642A * 2022-04-02 2022-05-06 深圳市普渡科技有限公司 Path planning method and apparatus, computer device, and storage medium


Also Published As

Publication number Publication date
CN117095044A (zh) 2023-11-21

Similar Documents

Publication Publication Date Title
AU2019208265B2 (en) Moving robot, method for controlling the same, and terminal
US11266067B2 (en) Moving robot, method for controlling moving robot, and moving robot system
CN112584697B (zh) Autonomous machine navigation and training using vision system
CN108398944B (zh) Working method of self-moving device, self-moving device, memory, and server
US11561554B2 (en) Self-moving device, working system, automatic scheduling method and method for calculating area
EP4043984A1 (en) Map building method, self-moving device, and automatic working system
CN106535614A (zh) Robotic mowing of separated lawn areas
WO2023217231A1 (zh) Work boundary generation method, work control method, device and storage medium
CN111766862A (zh) Obstacle avoidance control method and apparatus, electronic device, and computer-readable storage medium
EP3919237A2 (en) Mobile robot and control method therefor
KR102304304B1 (ko) Artificial intelligence mobile robot and control method therefor
US20200238531A1 (en) Artificial intelligence moving robot and method for controlling the same
KR102421519B1 (ko) Mobile robot system and method for generating boundary information of mobile robot system
KR102385611B1 (ko) Mobile robot system and method for generating boundary information of mobile robot system
KR102378270B1 (ko) Mobile robot system and method for generating boundary information of mobile robot system
EP4250041A1 (en) Method for determining information, remote terminal, and mower
CN116414112A (zh) Control method of self-moving device, self-moving device, and storage medium
SE2351480A1 (en) Robotic lawnmower system
CN117055543A (zh) Work boundary generation method, work control method, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23803000

Country of ref document: EP

Kind code of ref document: A1