CN118259658A - Working boundary determination method, intelligent device, storage medium and program product - Google Patents
- Publication number: CN118259658A (application CN202211661061.3A)
- Authority: CN (China)
- Prior art keywords: boundary, straddlable, information, physical, graph
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Guiding Agricultural Machines (AREA)
- Harvester Elements (AREA)
Abstract
The embodiment of the invention provides a working boundary determining method, an intelligent device, a storage medium and a program product. The method comprises the following steps: receiving perception information acquired by a mowing robot while it travels along the physical boundary of a grassland area; generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph; and receiving a first interactive operation performed by a user on the boundary graph, and determining a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interactive operation, wherein the straddlable boundary portion indicates a boundary portion at which the mowing robot can perform a boundary-straddling mowing operation. The method offers better interactivity and can improve the mowing effect.
Description
Technical Field
The embodiment of the invention relates to the technical field of intelligent equipment, in particular to a working boundary determining method, intelligent equipment, a computer storage medium and a computer program product.
Background
With the development of artificial intelligence technology, more and more autonomous mobile robots are being widely used in daily life and work, and the mowing robot is one of the important devices among them.
A mowing robot can trim a lawn automatically, whether unattended or under user control, saving the user time and labor. A mowing robot typically uses a cutterhead to trim the grass. The cutterhead is generally disposed at the bottom of the robot body and, to satisfy safety regulations, is set back a certain distance from the side wall of the bottom within the bottom's projected area. Although a current mowing robot can mow the grass within a working area set by the user, because of this gap between the cutterhead and the bottom side wall it cannot trim the grass right at the boundary of the working area.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a working boundary determining method, an intelligent device, a computer storage medium, and a computer program product, so as to at least solve the problem that existing mowing robots mow poorly at the boundary.
According to an aspect of an embodiment of the present invention, there is provided a working boundary determining method including: receiving perception information acquired by a mowing robot while it travels along the physical boundary of a grassland area; generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph; and receiving a first interactive operation performed by a user on the boundary graph, and determining a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interactive operation, wherein the straddlable boundary portion indicates a boundary portion at which the mowing robot can perform a boundary-straddling mowing operation.
According to another aspect of an embodiment of the present invention, there is provided a smart device including a processor and a display. The processor is configured to receive perception information acquired by a mowing robot while it travels along the physical boundary of a grassland area, and to generate a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph. The display is configured to display the boundary graph and to receive a first interactive operation performed by a user on the boundary graph. The processor is further configured to determine a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interactive operation, where the straddlable boundary portion indicates a boundary portion at which the mowing robot can perform a boundary-straddling mowing operation.
According to still another aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above.
According to yet another aspect of embodiments of the present invention, there is provided a computer program product comprising computer instructions that instruct a computing device to perform a method as described above.
According to the embodiment of the invention, when the working boundary of the mowing robot is determined, the boundary graph derived from the perception information is an interactable graph, so the user can operate on it. The boundary graph determined in this way carries information on the straddlable and non-straddlable boundary portions. When the mowing robot mows according to this boundary graph, it can perform straddling mowing on the straddlable boundary portions; the mowing robot can therefore trim the lawn at the boundary of the working area, and the mowing effect is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic view of an exemplary mowing robot in accordance with an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating steps of a method for determining a working boundary according to a first embodiment of the present invention;
FIG. 3A is a flowchart illustrating a working boundary determining method according to a second embodiment of the present invention;
FIG. 3B is a flow chart illustrating sub-steps of step 304 in the embodiment of FIG. 3A;
FIG. 3C is a schematic diagram of the mowing robot in the embodiment shown in FIG. 3A when sensing information is collected;
FIG. 3D is a schematic diagram of a three-dimensional boundary pattern in the embodiment shown in FIG. 3A;
FIG. 4A is a flowchart illustrating a working boundary determining method according to a third embodiment of the present invention;
FIG. 4B is a schematic diagram of a control interface for controlling the mowing robot to perform sensing information collection in the embodiment shown in FIG. 4A;
FIG. 4C is a schematic diagram of a control interface with a first operation option in the embodiment of FIG. 4A;
FIG. 5A is a flowchart illustrating a working boundary determining method according to a fourth embodiment of the present invention;
FIG. 5B is a schematic diagram of an acquired boundary pattern in the embodiment of FIG. 5A;
Fig. 6 is a block diagram of an intelligent device according to a fifth embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the embodiments of the present invention, the following description will clearly and completely describe the technical solutions of the embodiments of the present invention with reference to the accompanying drawings. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
For convenience of explanation and understanding, before describing the method itself, its application scenario and working process are briefly introduced as follows:

The solution provided by the embodiment of the invention can be applied to a mowing robot. Straddlable and non-straddlable working boundaries are determined for the mowing robot through an intelligent device independent of the robot, such as a mobile terminal (a mobile phone, a tablet, a notebook computer, and the like).
As an example of the mowing robot, fig. 1 shows a mowing robot 100 that includes a robot body, wheels, a mowing cutterhead, and the like. The cutterhead is typically provided at the bottom of the robot body to cut and trim grass while the mowing robot 100 moves. The area R in fig. 1 shows the area covered by the cutterhead in operation.
In general, when the mowing robot 100 is used, the mowing robot 100 needs to perform working area mapping first, and after mapping, the mowing robot 100 can perform autonomous movement and mowing based on the created working area map. In some examples, one important step in mapping the lawn mowing robot 100 is to establish a map boundary, for example, the user may control the lawn mowing robot 100 to move along a physical boundary of a lawn area to be mowed through a manipulation device (e.g., a smart device in data connection with the lawn mowing robot 100), collect data during the movement, and establish the map boundary according to the collected data.
Of course, in other embodiments the mowing robot 100 may also perform mapping in other ways, such as fully automatic mapping; only one mapping method is illustrated here by way of example, without limitation.
Autonomous movement and trimming of the mowing robot 100 can be achieved based on the established map. To ensure safety during operation, the edge of the cutterhead is kept at a certain distance from the side of the mowing robot 100, so when the robot trims the lawn near the map boundary, some grass in the boundary area is left uncut, which affects the trimming effect. It should be noted that, as can be seen from the cutterhead's working coverage in fig. 1, the cutterhead is mounted toward one side of the bottom of the mowing robot in this example, but this is not limiting; in practical applications the cutterhead may also be mounted centrally at the bottom.
In order to solve the problem, the embodiment of the invention provides a working boundary determining method, and the implementation process of the method is described as follows:
Example 1
As shown in fig. 2, the working boundary determining method includes the steps of:
Step S202: receive the perception information acquired by the mowing robot while it travels along the physical boundary of the grassland area.

The mowing robot may travel along the physical boundary under the control of a user, or automatically under the control of an algorithm; this is not limited. While traveling, the mowing robot detects information with its mounted sensors and uses it as the perception information. Since different mowing robots may carry different sensors, the perception information they collect may also differ.
For example, the perceived information may include position information of the mowing robot (e.g., position information acquired by a satellite positioning device such as GPS, or position information acquired based on UWB, etc.).
For another example, the perception information may include position information of the mowing robot together with identification information of boundary objects. The identification information of a boundary object (such as a stone, tree, road surface, or step near the boundary) may include its category, position, and so on, and is collected by a sensor (such as a camera or a laser radar) while the mowing robot moves along the physical boundary.
Step S204: generate a boundary graph of the grassland area according to the perception information, the boundary graph being an interactable graph.
Taking the case where the perception information includes the position information of the mowing robot, the boundary graph of the grassland area can be formed from the position information recorded while the robot travels along the physical boundary. For example, a digital boundary is generated for the grassland area based on the position information of the mowing robot.
This digital boundary differs from a conventional boundary in that it is an interactable graph, i.e., one the user can interact with. For example, the user may select some or all of the boundary graph and edit it to be straddlable or non-straddlable.
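To make the notion of an interactable boundary graph concrete, the sketch below models the boundary as a list of segments, each carrying an editable straddlable attribute. All names (`BoundarySegment`, `BoundaryGraphic`, `set_straddlable`) are our own illustrative choices and do not come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class BoundarySegment:
    # Polyline vertices (x, y) in map coordinates for this stretch of boundary.
    points: list
    # None = undetermined; True = straddlable; False = non-straddlable.
    straddlable: bool = None

@dataclass
class BoundaryGraphic:
    segments: list = field(default_factory=list)

    def set_straddlable(self, index, value):
        # A user's edit operation toggles one segment's attribute.
        self.segments[index].straddlable = value

boundary = BoundaryGraphic([BoundarySegment([(0, 0), (5, 0)]),
                            BoundarySegment([(5, 0), (5, 5)])])
boundary.set_straddlable(0, True)
```

A display layer would then render each segment differently according to its attribute, as the embodiments later describe for fig. 3D.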
Step S206: a first interactive operation of a user on the boundary graph is received, and a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph are determined according to the first interactive operation.
The straddlable boundary portion indicates at which boundary portions the mowing robot may perform boundary-straddling mowing operations. Accordingly, the non-straddlable boundary portion indicates where the mowing robot cannot perform the boundary-straddling mowing operation.
In some cases, the first interactive operation may be that the user selects part or all of the boundary and edits the selected boundary portion to a straddlable boundary portion or a non-straddlable boundary portion.
In other cases, the first interactive operation may be a user confirmation operation of the boundary pattern that has been determined to be a straddlable boundary portion or a non-straddlable boundary portion.
Or in other cases, the first interactive operation may be other operations, without limitation.
With this method, when the working boundary of the mowing robot is determined, the boundary graph derived from the perception information is an interactable graph, so the user can operate on it. The boundary graph so determined carries information on the straddlable and non-straddlable boundary portions. When the mowing robot mows according to this boundary graph, it can perform straddling mowing on the straddlable boundary portions, and can therefore trim the lawn at the boundary of the working area, improving the mowing effect.
Example two
Fig. 3A is a flowchart showing steps of a working boundary determining method according to a second embodiment of the present invention. The method comprises the following steps:
Step S302: receive the perception information acquired by the mowing robot while it travels along the physical boundary of the grassland area.
In this embodiment, the mowing robot carries a sensor such as a camera or a laser radar. Besides the position information of the mowing robot, the acquired perception information therefore also includes identification information of the boundary objects along the physical boundary, obtained from the camera or laser radar data, including but not limited to terrain information, the positions of the boundary objects, and their specific object types (such as trees, roads, etc.).
Step S304: generate a boundary graph of the grassland area according to the perception information, the boundary graph being an interactable graph.
In some examples, as shown in fig. 3B, where the perception information includes position information and identification information of boundary objects, step S304 includes sub-steps S3041 through S3044.
Substep S3041: generate a boundary graph of the grassland area according to the perception information.
For example, based on the position information (e.g., GPS position information) of the mowing robot at different times, the positions are connected in time order to generate the digital boundary of the grassland area.
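The "connect the positions in time order" step might be sketched as follows; the function name and the flat (timestamp, x, y) log format are assumptions, and a real system would also filter positioning noise.

```python
def build_digital_boundary(position_log):
    """position_log: list of (timestamp, x, y) samples recorded while the
    robot drives along the physical boundary. Returns the boundary as a
    closed polyline of (x, y) points ordered by time."""
    ordered = sorted(position_log, key=lambda s: s[0])
    polyline = [(x, y) for _, x, y in ordered]
    # Close the loop: the robot ends where it started.
    if polyline and polyline[0] != polyline[-1]:
        polyline.append(polyline[0])
    return polyline

log = [(2.0, 5.0, 0.0), (0.0, 0.0, 0.0), (1.0, 2.5, 0.0), (3.0, 5.0, 5.0)]
boundary = build_digital_boundary(log)
```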
For another example, substep S3041 includes process A1 and process B1.
Process A1: generating a candidate straddlable boundary portion and a candidate non-straddlable boundary portion for the grass area based on the perception information of the robot.
The following describes, by way of example, how the candidate straddlable and non-straddlable boundary portions are determined, taking a mowing robot carrying a camera and one carrying a laser radar as examples.
When a camera is mounted, the position of the mowing robot can be recorded by a positioning device such as GPS while the robot travels, and at the same time the camera captures an image of the physical environment at the location indicated by that position information. As shown in fig. 3C, the mowing robot can capture images of a certain area around itself while driving. In fig. 3C, the middle white part is the grassland area G, the thick solid line is the physical boundary between grass and non-grass, the white area outside the thick solid line is the non-grassland area B, the thin line is the position on the lawn of the digital boundary determined from the GPS data, and the broken line outlines the area C covered by the physical environment image.
On this basis, depth estimation is performed on the physical environment image, and the physical environment information in the robot's current direction of travel is obtained from the depth estimation result. In some examples, the depth estimation may be performed by a pre-trained depth estimation model, which outputs the depth estimation result. For example, the model converts an image captured by the camera in the robot's direction of travel into depth information, yielding a depth estimate for that direction.
The depth estimation model may be implemented as a machine learning model, for example one with an encoder-decoder structure; in one possible approach it is an encoder-decoder based on a Transformer architecture. The encoder extracts visual features, and the decoder processes the extracted features into the required output. The depth estimation result includes at least one of: terrain information at the physical location, and obstacle information at the physical location. From the terrain and obstacle information in the depth estimation result and the robot's direction of travel, the obstacles and height changes in the current direction of travel can be determined as the physical environment information.
Further, the passable and non-passable portions of the physical boundary are determined from the physical environment information. For example, when the physical environment information includes terrain information, whether there is a height difference between the inside and outside of the physical boundary in the robot's current direction of travel is determined from the terrain information at the indicated physical location. For another example, when the physical environment information includes obstacle information, whether an obstacle exists within a certain range of the physical boundary in the robot's current direction of travel is determined from the obstacle information at the indicated physical location, but this is not limiting.
In particular, based on the inference result of a monocular depth estimation model or of a binocular depth estimation model, it can be determined whether the strip outside the boundary in that direction is passable over a distance sufficient to accommodate the mowing robot's straddle edging.
The passable and non-passable portions of the physical boundary are then determined from the physical environment information. For example, if a height difference is found and it exceeds a preset threshold, the corresponding physical boundary portion is determined to be non-passable. That is, an area with an obstacle that cannot be surmounted, or where the robot might collide, or where steeply rising or sinking terrain (e.g., a step or a pool beside the lawn) would prevent normal travel, is a non-passable area. A passable portion is, for example, a flat area where the mowing robot can mow normally (such as a road surface or reasonably level ground alongside). If an area is passable, the current boundary state is determined to be straddlable; otherwise it is non-straddlable.
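The height-difference and obstacle test described above can be outlined as below; the 0.05 m threshold and all names are illustrative assumptions, not values taken from the patent.

```python
NON_STRADDLABLE = "non-straddlable"
STRADDLABLE = "straddlable"

def classify_boundary_point(height_inside, height_outside, has_obstacle,
                            max_step=0.05):
    """Classify one boundary location from the depth-estimation result.
    height_* are estimated ground heights (m) just inside/outside the
    boundary; max_step is a preset threshold (assumed value)."""
    if has_obstacle:
        # An obstacle the robot cannot surmount blocks straddle mowing.
        return NON_STRADDLABLE
    if abs(height_inside - height_outside) > max_step:
        # e.g. a step down or a pool edge: the robot cannot ride the boundary.
        return NON_STRADDLABLE
    return STRADDLABLE

label_flat = classify_boundary_point(0.0, -0.02, False)   # flat verge
label_step = classify_boundary_point(0.0, -0.30, False)   # step down
```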
Interactable candidate straddlable and non-straddlable boundary portions are then generated for the grassland area from the passable and non-passable portions. Subsequently, on a smart device such as a mobile phone, the user can confirm the candidate straddlable boundary portions as the final straddlable boundary and the candidate non-straddlable boundary portions as the final non-straddlable boundary, or fine-tune the candidates on the smart device first and then confirm to obtain the final straddlable and non-straddlable boundaries.
In another possible manner, if a laser radar is mounted on the mowing robot, the determination of the candidate straddlable and non-straddlable boundary portions may be implemented as follows:
Record the position information of the mowing robot and collect physical environment point cloud data at the physical location indicated by the position information, for example via the laser radar.
Determine the passable and non-passable portions of the physical boundary from the recognition result of the physical environment point cloud data. Because the point cloud acquired by the laser radar contains the position and depth of every object in the physical environment, the height differences and/or obstacle information in the robot's current direction of travel can be analyzed from it, and the passable and non-passable portions of the physical boundary determined accordingly.
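A rough sketch of such a point-cloud check, assuming NumPy and a simple flatness criterion of our own choosing (the patent does not prescribe a specific algorithm):

```python
import numpy as np

def passable_from_cloud(points, strip_half_width=0.3, max_height_range=0.05):
    """points: (N, 3) array of lidar returns (x, y, z) in the robot frame,
    x forward. Checks whether the ground just ahead is flat enough to
    ride the boundary: all returns in a narrow strip ahead of the robot
    must lie within a small height range (thresholds are assumptions)."""
    strip = points[(points[:, 0] > 0.0) & (np.abs(points[:, 1]) < strip_half_width)]
    if strip.size == 0:
        return False  # no data: conservatively treat as not passable
    height_range = strip[:, 2].max() - strip[:, 2].min()
    return bool(height_range <= max_height_range)

cloud = np.array([[0.5, 0.0, 0.00], [1.0, 0.1, 0.02], [1.5, -0.1, 0.01]])
flat = passable_from_cloud(cloud)
```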
Interactable candidate straddlable and non-straddlable boundary portions are generated for the grassland area from the passable and non-passable portions. For example, the boundary portion corresponding to a passable portion is determined to be a candidate straddlable boundary portion, and the boundary portion corresponding to a non-passable portion a candidate non-straddlable boundary portion.
Process B1: generate the boundary graph of the grassland area from the candidate straddlable and non-straddlable boundary portions.
For example, the boundary graph is generated from the locations of the candidate straddlable and non-straddlable boundary portions.
Substep S3042: determine, according to the identification information of a boundary object, the corresponding three-dimensional object, its texture data, and its position relative to the digital boundary of the grassland area.
For example, the identification information includes the type of the recognized object and its position relative to the mowing robot. The corresponding three-dimensional object (such as a tree, wall, fence, step, or cobbled pavement) can be determined from the object type, and corresponding texture data obtained for it; the texture data for each type of three-dimensional object may be preset, without limitation.
Since the digital boundary is generated based on the position information of the mowing robot, the relative position information of the three-dimensional object relative to the digital boundary can be determined according to the position of the three-dimensional object relative to the mowing robot.
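Placing a recognized object relative to the digital boundary amounts to a point-to-polyline distance computation, which might look like this (function names are illustrative, not from the patent):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all (x, y) tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def object_offset_from_boundary(obj_xy, boundary_polyline):
    """Smallest distance from a recognized object to the digital boundary,
    used to place the 3D model at the right spot when rendering."""
    return min(point_segment_distance(obj_xy, a, b)
               for a, b in zip(boundary_polyline, boundary_polyline[1:]))

d = object_offset_from_boundary((1.0, 2.0), [(0.0, 0.0), (5.0, 0.0)])
```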
Substep S3043: draw the three-dimensional object, using its texture data, at the corresponding position on the digital boundary according to the relative position information.
Based on the relative position information, the three-dimensional object can be drawn accurately at the corresponding position; and because each type of three-dimensional object uses its own texture data, the rendering is closer to reality, so the user can see the digital boundary and its surroundings more intuitively and conveniently.
Substep S3044: generate a three-dimensional boundary graph of the grassland area from the digital boundary and the drawn three-dimensional objects.
Compared with a two-dimensional boundary graph, a three-dimensional boundary graph is more intuitive and closer to the real environment; through it, the user can more easily determine whether a three-dimensional object near the boundary would hinder the robot's normal movement, which helps the user quickly judge whether a given boundary portion is straddlable. Fig. 3D shows a schematic three-dimensional boundary graph, i.e., the lawn and its boundary area shown in three dimensions. Fig. 3D shows only one tree in three dimensions outside one boundary, but it should be clear to those skilled in the art that in practice all areas, including the interior of the lawn, may be shown in three dimensions. If the boundary graph includes candidate straddlable and non-straddlable boundary portions, they may be presented differently so the user can tell the boundary types apart at a glance. For example, in fig. 3D the thick solid line represents a straddlable boundary portion already selected by the user, while the dotted line represents a candidate straddlable portion that has not yet been selected but can be.
That said, in practical applications a two-dimensional digital boundary display is equally applicable to the scheme of the embodiments of the present application.
Step S306: a first interactive operation of a user on the boundary graph is received, and a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph are determined according to the first interactive operation.
The generated boundary graph is displayed on the intelligent terminal or another display-capable device so the user can view it conveniently, and the user can operate on the boundary graph as needed.
For example, if the boundary pattern includes a candidate straddlable boundary portion and a candidate non-straddlable boundary portion, the user may confirm the candidate straddlable boundary portion as a straddlable boundary portion and determine the candidate non-straddlable boundary portion as a non-straddlable boundary portion through a first interactive operation (e.g., a determining operation).
For another example, if the boundary graph includes a boundary portion not yet determined to be straddlable or non-straddlable, the user may assign it as straddlable or non-straddlable through a first interactive operation (e.g., a select-and-assign operation). For example, in fig. 3D the thick solid line represents a confirmed straddlable boundary portion, while the thin line in the digital boundary represents a confirmed non-straddlable boundary portion.
The first interactive operation is not limited to the above and may be any other suitable operation.
A straddlable boundary portion indicates a boundary portion at which the mowing robot can perform a boundary-straddling mowing operation; when it subsequently mows, the robot straddles that boundary portion while cutting. Correspondingly, at a non-straddlable boundary portion, the robot does not perform a straddling mowing operation.
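For illustration only — this sketch is not part of the patent, and all names (`BoundaryPortion`, `apply_first_interaction`) are hypothetical — a boundary graph might store a per-portion straddlable flag and apply the first interactive operation like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class BoundaryPortion:
    # Polyline vertices of this portion of the digital boundary.
    points: List[Tuple[float, float]] = field(default_factory=list)
    # None = undetermined, True = straddlable, False = non-straddlable.
    straddlable: Optional[bool] = None
    # Whether the flag is still only a machine-generated candidate.
    candidate: bool = True


def apply_first_interaction(portion: BoundaryPortion, op: str) -> BoundaryPortion:
    """Apply a user's first interactive operation to one boundary portion."""
    if op == "confirm":
        # A determining operation: confirm the candidate classification as final.
        portion.candidate = False
    elif op == "set_straddlable":
        portion.straddlable, portion.candidate = True, False
    elif op == "set_non_straddlable":
        portion.straddlable, portion.candidate = False, False
    else:
        raise ValueError(f"unknown operation: {op}")
    return portion
```

In these terms, a "determining operation" amounts to clearing the candidate flag, while a "selecting and assigning operation" sets the flag directly.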
The map established based on the boundary graph determined by the user can be sent to the mowing robot through a smart device such as a mobile phone, stored in the robot's local memory, or stored elsewhere for use when the robot works.
With this method, when the working boundary of the mowing robot is determined, the boundary graph generated from the perception information is interactive, so the user can operate on it. The resulting boundary graph carries information on straddlable and non-straddlable boundary portions, and when the mowing robot mows according to it, it can straddle-mow the straddlable boundary portions, so that the robot can trim the lawn at the boundary of the working area and improve the mowing effect.
Optionally, in case a depth estimation model is used, the method may further comprise step S308.
Step S308: receiving a physical environment image acquired by the mowing robot when the mowing robot performs boundary riding mowing operation on a physical boundary indicated by a straddlable boundary part in the determined boundary graph; and updating the depth estimation model using the physical environment image.
To improve the accuracy of the depth estimation model, a sensor mounted on the mowing robot can continuously acquire physical environment images while the robot performs boundary-straddling mowing. Analyzing these images enables self-optimization of the depth estimation model and iterative refinement of the straddlable boundary. The acquired physical environment images are used to train and test the depth estimation model in real time so as to optimize it.
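A minimal sketch of the collection side of this self-optimization loop — the `DepthModelUpdater` name and the `update_fn` callback (standing in for the actual fine-tune/test routine of the depth estimation model) are assumptions, not from the patent:

```python
from collections import deque
from typing import Callable, Deque, List


class DepthModelUpdater:
    """Buffer images captured during boundary-straddling mowing and
    periodically hand a batch to an update callback."""

    def __init__(self, update_fn: Callable[[List[bytes]], None], batch_size: int = 32):
        self.update_fn = update_fn          # hypothetical fine-tune/test routine
        self.batch_size = batch_size
        self.buffer: Deque[bytes] = deque()

    def on_image(self, image: bytes) -> bool:
        """Add one captured frame; return True if a model update was triggered."""
        self.buffer.append(image)
        if len(self.buffer) >= self.batch_size:
            # Drain one batch and run the (hypothetical) self-optimization step.
            batch = [self.buffer.popleft() for _ in range(self.batch_size)]
            self.update_fn(batch)
            return True
        return False
```

The batch-triggered design is only one option; an implementation could equally update on a timer or at the end of each mowing pass.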
Optionally, since the physical environment may change, the method may further include step S310 in order to adapt to environmental changes in time, ensure safe operation of the mowing robot, and make full use of the training results of the depth estimation model.
Step S310: and performing depth of field estimation again based on the physical environment image acquired by the mowing robot again when the mowing robot performs mowing operation on the physical boundary by using the updated depth of field estimation model, so as to re-determine a passable part and a passable part for the physical boundary according to the physical environment information obtained by the depth of field estimation.
For example, the depth and point cloud information in the depth estimate with the more recent timestamp, inferred by the trained depth estimation model, may be converted into new terrain information and/or obstacle information, which can assist in updating the passable and non-passable regions in the boundary graph. For example, if the depth estimation result obtained during actual operation of the mowing robot shows that an originally passable portion no longer satisfies the passing condition, the straddlable boundary portion corresponding to that area may be automatically set as non-straddlable and the update saved in memory.
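The automatic demotion described above can be sketched as a simple threshold check; `PASS_THRESHOLD_M` and the data layout below are illustrative assumptions, not values from the patent:

```python
PASS_THRESHOLD_M = 0.05  # illustrative inside/outside height-difference threshold (metres)


def reclassify(portions, height_diff_m):
    """Demote straddlable portions whose newly estimated inside/outside
    height difference exceeds the passing threshold.

    portions: list of dicts {"id": ..., "straddlable": bool}
    height_diff_m: dict mapping portion id -> latest estimated height difference
    Returns the ids of portions demoted to non-straddlable.
    """
    demoted = []
    for p in portions:
        diff = height_diff_m.get(p["id"], 0.0)
        if p["straddlable"] and diff > PASS_THRESHOLD_M:
            p["straddlable"] = False   # no longer satisfies the passing condition
            demoted.append(p["id"])
    return demoted
```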
In this way, the depth estimation model is optimized and iterated, the outputs produced during its iteration are fully utilized, and the boundary graph is updated quickly.
Example III
Fig. 4A is a flowchart showing steps of a working boundary determining method according to a third embodiment of the present invention. The method comprises the following steps:
Step S402: and receiving sensing information acquired by the mowing robot when the mowing robot runs on the physical boundary of the grassland area.
This step may be the same as or similar to the step S202 or the step S302 described above, and will not be described again.
Step S404: and generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph.
For example, in some possible ways, step S404 may include the sub-steps of:
substep S4041: and in the process of acquiring the perception information, acquiring the perception information of the passable part in the physical boundary of the grassland area through a second interactive operation instruction.
In some examples, sub-step S4041 may include processes A2-C2 described below.
Process A2: and receiving triggering operation of the user on the first operation option displayed in the display interface.
Fig. 4B shows a schematic view of a display interface when a user controls the mowing robot to move along a physical boundary through the intelligent terminal.
In one alternative, a first operational option may be displayed in the presentation interface. The first operational option is for indicating a start of collection of information of a straddlable physical boundary (i.e., a trafficable portion) of the lawn area.
For example, while the mowing robot travels along the physical boundary, the user may trigger the first operation option, indicating that the boundary information subsequently collected by the robot corresponds to a straddlable physical boundary.
Process B2: and driving the mowing robot to run in the grassland area according to the operation indicated by the first operation option, and recording the position information in the running process until receiving the triggering operation of the user on the second operation option displayed in the display interface.
When collection of the non-straddlable physical boundary is desired, the user may trigger a second operation option in the presentation interface for indicating ending collection of the information of the straddlable physical boundary.
Process C2: a straddlable boundary portion of the digital boundary is generated for the grass area based on the position information recorded between the operation of starting acquisition and the operation of ending acquisition.
The boundary portion corresponding to the position information recorded between the start of the acquisition operation (i.e., triggering the first operation option) and the end of the acquisition operation (i.e., triggering the second operation option) may be regarded as a straddlable boundary portion.
This procedure is exemplarily described with reference to Fig. 4C, in which a switch button is disposed at the upper left corner. When the switch button is in the off state, it serves as the first operation option; the user triggers it to indicate that collection of the straddlable physical boundary should start, whereupon the button switches to the on state, in which it serves as the second operation option. When the user triggers the second operation option, the end of collection of the straddlable physical boundary is indicated. In practical applications, however, the first and second operation options may instead be set independently, such as separate "start" and "end" buttons or "on" and "off" buttons, all of which are applicable to the scheme of this embodiment of the invention.
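The start/end collection toggle can be sketched as a small recorder; the class and method names are hypothetical stand-ins for the first and second operation options:

```python
class StraddlableBoundaryRecorder:
    """Record robot positions while 'collect straddlable boundary' is on;
    each start/end interval yields one straddlable boundary segment."""

    def __init__(self):
        self.collecting = False
        self.current = []
        self.segments = []   # one list of positions per start/end interval

    def trigger_first_option(self):
        # User taps the first operation option: start collecting.
        self.collecting = True
        self.current = []

    def trigger_second_option(self):
        # User taps the second operation option: end collecting.
        if self.collecting and self.current:
            self.segments.append(self.current)
        self.collecting = False
        self.current = []

    def on_position(self, xy):
        # Position fix reported while the robot drives along the boundary.
        if self.collecting:
            self.current.append(xy)
```

Positions reported outside a start/end interval are ignored, matching the idea that only boundary information collected between the two trigger operations corresponds to a straddlable physical boundary.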
Substep S4042: and generating boundary patterns of the grassland area according to the perception information of the passable part and the perception information of other parts.
The process of generating the boundary graph based on the perception information is similar to steps S202 and S302 above; the difference is that the perception information of the passable portion is used to generate the straddlable boundary portion, while the boundary portions generated from the perception information of the other parts can be classified in other ways. The generation process is therefore not repeated here.
Step S406: and receiving a first interaction operation of a user on the boundary graph, and determining a straddlable boundary part and a non-straddlable boundary part in the boundary graph according to the first interaction operation, wherein the straddlable boundary part is used for indicating the boundary part, and the mowing robot can perform boundary straddling mowing operation.
The implementation process of this step is the same as or similar to the aforementioned step S206 or step S306, and thus will not be repeated.
In this way, the user can control which straddlable boundary portions the mowing robot collects, as needed, during the collection process, improving collection accuracy.
Example IV
As shown in fig. 5A, a step flow chart of a working boundary determining method according to a fourth embodiment of the present invention is shown. The method comprises the following steps:
step S502: and receiving sensing information acquired by the mowing robot when the mowing robot runs on the physical boundary of the grassland area.
This step may be the same as or similar to the aforementioned step S202 or step S302, and will not be described again.
Step S504: and generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph.
Taking the example that the sensing information includes location information, step S504 may be implemented as: and generating a digital boundary for the grass area according to the position information of the mowing robot.
Step S506: a first interactive operation of the user on a part of boundaries in a boundary graph displayed in a display interface is received, the part of boundaries operated by the first interactive operation are determined to be straddlable boundary portions in the boundary graph, and other parts are determined to be non-straddlable boundary portions.
After the digital boundary is generated, it may be presented in a display interface, as shown in Fig. 5B. The user may perform a first interaction with part or all of the digital boundary in the interface, for example selecting a portion of the digital boundary and configuring it as a straddlable boundary portion (the bold solid line in Fig. 5B), or selecting a portion and configuring it as a non-straddlable boundary portion, so that the straddlable and non-straddlable boundary portions in the boundary graph are determined based on the first interaction.
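Mapping a user's tap in the display interface to the nearest portion of the digital boundary (so the selected portion can then be configured as straddlable or not) might be sketched as follows; this is standard point-to-segment geometry, not taken from the patent:

```python
import math


def nearest_segment(tap, segments):
    """Return the index of the boundary segment closest to the user's tap.

    tap: (x, y) in map coordinates
    segments: list of ((x1, y1), (x2, y2)) polyline edges of the digital boundary
    """
    def point_segment_dist(p, a, b):
        ax, ay = a
        bx, by = b
        px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)  # degenerate segment
        # Parameter of the projection of p onto the segment, clamped to [0, 1].
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    return min(range(len(segments)),
               key=lambda i: point_segment_dist(tap, *segments[i]))
```

The selected segment's straddlable flag would then be toggled or set according to the user's configuration choice.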
With this method, when the working boundary of the mowing robot is determined, the boundary graph generated from the perception information is interactive, so the user can operate on it. The resulting boundary graph carries information on straddlable and non-straddlable boundary portions, and when the mowing robot mows according to it, it can straddle-mow the straddlable boundary portions, so that the grass at the boundary of the working area can be trimmed and the mowing effect improved.
The method can be applied when the mowing robot builds a map or when it mows automatically according to a built map, without limitation. Candidate straddlable boundary portions can thus be judged automatically during mapping, and a boundary graph with which the user can interact is provided, so that the user can confirm straddlable boundary portions or configure new ones (or delete existing ones); straddle mowing can then be executed automatically at the straddlable boundary portions during mowing, improving mowing performance.
Example V
In this embodiment, there is also provided an intelligent device including a processor and a display;
the processor is used for receiving the sensing information acquired by the mowing robot when the mowing robot runs on the physical boundary of the grassland area; generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph;
The display is used for displaying the boundary graph and receiving a first interactive operation of a user on the boundary graph;
The processor is further configured to determine a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interaction, where the straddlable boundary portion is used to indicate that the mowing robot is capable of performing boundary straddling mowing operations at the boundary portion.
For example, the smart device may be a smart phone, tablet, notebook, or other smart terminal, without limitation. The smart device can implement the methods described in the method embodiments, form a boundary usable by the mowing robot for mowing, and send the formed boundary to the robot. Subsequently, the mowing robot can perform mowing operations according to the boundary, including straddle-mowing in a straddlable boundary area, so as to improve the mowing effect.
Furthermore, in an embodiment of the present invention, there is also provided a computer program product comprising computer instructions that instruct a computing device to perform a method as described above.
Or a computer storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above.
It should be noted that in the description of embodiments of the present invention, the terms "first," "second," and the like, are merely used for convenience in describing the various components or names and are not to be construed as indicating or implying a sequential relationship, relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
It should be noted that, although specific embodiments of the present invention have been described in detail with reference to the accompanying drawings, this description should not be construed as limiting the scope of the invention. Various modifications and variations that those skilled in the art can make without creative effort fall within the protection scope of the invention as defined by the claims.
Examples of embodiments of the present invention are intended to briefly illustrate technical features of embodiments of the present invention so that those skilled in the art may intuitively understand the technical features of the embodiments of the present invention, and are not meant to be undue limitations of the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (17)
1. A method of operating boundary determination, comprising:
Receiving sensing information acquired by the mowing robot when the mowing robot runs on the physical boundary of the grassland area;
Generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph;
And receiving a first interaction operation of a user on the boundary graph, and determining a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interaction operation, wherein the straddlable boundary portion indicates a boundary portion at which the mowing robot can perform a boundary-straddling mowing operation.
2. The method of claim 1, wherein:
The sensing information comprises position information of the mowing robot;
Or alternatively
The sensing information includes position information of the mowing robot and identification information of a boundary object.
3. The method of claim 2, wherein when the perceived information includes location information of the lawn mowing robot, the generating a boundary pattern of the lawn area from the perceived information includes:
And generating a digital boundary for the grass area according to the position information of the mowing robot.
4. The method of claim 3, wherein when the perception information further includes identification information of the boundary object, the generating the boundary pattern of the lawn area according to the perception information further includes:
according to the identification information of the boundary object, determining a three-dimensional object corresponding to the identification information, texture data of the three-dimensional object and relative position information of the three-dimensional object relative to a digital boundary of the grassland area;
drawing the three-dimensional object by using texture data of the three-dimensional object at a corresponding position of the digital boundary according to the relative position information;
And generating a three-dimensional boundary graph of the grassland area according to the digital boundary and the drawn three-dimensional object.
5. The method of any of claims 2-4, wherein the generating a boundary pattern of the grass area from the perception information comprises:
Generating a candidate straddlable boundary portion and a candidate non-straddlable boundary portion for the grass area according to the perception information of the robot;
and generating a boundary graph of the grassland area according to the candidate straddlable boundary part and the candidate non-straddlable boundary part.
6. The method of claim 5, wherein the generating, for the grass area, a candidate straddlable boundary portion and a candidate non-straddlable boundary portion comprises:
recording the position information of the mowing robot, and collecting a physical environment image at a physical position indicated by the position information;
Performing depth estimation on the physical environment image, and obtaining physical environment information of the mowing robot in the current driving direction according to the depth estimation result;
determining a passable portion and a non-passable portion in the physical boundary according to the physical environment information;
an interactable candidate straddlable boundary portion and a candidate non-straddlable boundary portion are generated for the grass area based on the passable portion and the non-passable portion.
7. The method of claim 6, wherein the depth estimation result comprises at least one of: terrain information at the physical location, obstacle information at the physical location.
8. The method of claim 7, wherein:
The obtaining physical environment information of the mowing robot in the current driving direction according to the depth estimation result comprises: determining, according to the terrain information at the physical location, whether a height difference exists between the terrain inside and outside the physical boundary in the current driving direction of the mowing robot;
The determining a passable portion and a non-passable portion in the physical boundary according to the physical environment information comprises: if the height difference exists and is larger than a preset threshold value, determining that the physical boundary portion where the height difference exceeds the preset threshold value is a non-passable portion.
9. The method of claim 6, wherein:
The performing depth estimation on the physical environment image comprises: performing depth estimation on the physical environment image through a depth estimation model;
The method further comprises: receiving a physical environment image acquired by the mowing robot while it performs a boundary-straddling mowing operation at the physical boundary indicated by a straddlable boundary portion in the determined boundary graph; and updating the depth estimation model using the physical environment image.
10. The method according to claim 9, wherein the method further comprises:
And performing depth estimation again, using the updated depth estimation model, on a physical environment image newly acquired while the mowing robot performs a mowing operation at the physical boundary, so as to re-determine a passable portion and a non-passable portion of the physical boundary according to the physical environment information obtained from the depth estimation.
11. The method of claim 5, wherein the generating, for the grass area, a candidate straddlable boundary portion and a candidate non-straddlable boundary portion comprises:
Recording the position information of the mowing robot, and collecting physical environment point cloud data at a physical position indicated by the position information;
determining a passable portion and a non-passable portion in the physical boundary according to the identification result of the physical environment point cloud data;
an interactable candidate straddlable boundary portion and a candidate non-straddlable boundary portion are generated for the grass area based on the passable portion and the non-passable portion.
12. The method of claim 1, wherein the generating a boundary pattern of the grass area from the perception information comprises:
In the process of collecting the perception information, indicating, through a second interactive operation, that the perception information of the passable portion in the physical boundary of the grass area is to be collected;
and generating boundary patterns of the grassland area according to the perception information of the passable part and the perception information of other parts.
13. The method of claim 12, wherein the indicating, through a second interactive operation, to collect the perception information of the passable portion in the physical boundary of the grass area comprises:
receiving triggering operation of a user on a first operation option displayed in a display interface, wherein the first operation option is used for indicating to start collecting information of the straddlable physical boundary of the grassland area;
Driving the mowing robot to run in the grassland area according to the operation indicated by the first operation option, and recording position information in the running process until receiving triggering operation of the user on a second operation option displayed in the display interface, wherein the second operation option is used for indicating that the collection of the information of the straddlable physical boundary is finished;
A straddlable boundary portion of the digital boundary is generated for the grass area based on the position information recorded between the operation of starting acquisition and the operation of ending acquisition.
14. The method of claim 3, wherein the receiving a first user interaction with the boundary graphic and determining the straddlable boundary portion and the non-straddlable boundary portion of the boundary graphic based on the first interaction comprises:
A first interactive operation of the user on a part of boundaries in a boundary graph displayed in a display interface is received, the part of boundaries operated by the first interactive operation are determined to be straddlable boundary portions in the boundary graph, and other parts are determined to be non-straddlable boundary portions.
15. An intelligent device is characterized by comprising a processor and a display;
Wherein,
The processor is used for receiving the sensing information acquired by the mowing robot when the mowing robot runs on the physical boundary of the grassland area; generating a boundary graph of the grassland area according to the perception information, wherein the boundary graph is an interactable graph;
The display is used for displaying the boundary graph and receiving a first interactive operation of a user on the boundary graph;
The processor is further configured to determine a straddlable boundary portion and a non-straddlable boundary portion in the boundary graph according to the first interaction, where the straddlable boundary portion is used to indicate that the mowing robot is capable of performing boundary straddling mowing operations at the boundary portion.
16. A computer storage medium having stored thereon a computer program, which when executed by a processor performs the method of any of claims 1-14.
17. A computer program product comprising computer instructions that instruct a computing device to perform the method of any one of claims 1-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211661061.3A CN118259658A (en) | 2022-12-22 | 2022-12-22 | Working boundary determination method, intelligent device, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118259658A true CN118259658A (en) | 2024-06-28 |
Family
ID=91607945
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118259658A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||