WO2022188333A1 - A Walking Method, Device, and Computer Storage Medium - Google Patents

A Walking Method, Device, and Computer Storage Medium

Info

Publication number
WO2022188333A1
WO2022188333A1 (application PCT/CN2021/107607)
Authority
WO
WIPO (PCT)
Prior art keywords
path
image
target
walking
walking direction
Prior art date
Application number
PCT/CN2021/107607
Other languages
English (en)
French (fr)
Inventor
范泽宣
何博
陈远
Original Assignee
美智纵横科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110256084.5A external-priority patent/CN113158779B/zh
Application filed by 美智纵横科技有限责任公司
Publication of WO2022188333A1 publication Critical patent/WO2022188333A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Definitions

  • the present application relates to home appliance control technology, and in particular to a walking method, device and computer storage medium.
  • Self-cleaning devices provide great convenience for modern life and reduce labor intensity.
  • Household sweeping robots are common self-cleaning devices. Such robots can travel on their own in a family room while inhaling surrounding dust or impurities to complete the floor cleaning.
  • autonomous mapping and cleaning planning have become indispensable functions of sweeping robots.
  • SLAM: simultaneous localization and mapping
  • AI: Artificial Intelligence
  • AR: Augmented Reality
  • GPS: Global Positioning System
  • SLAM based on laser sensors is the most stable positioning technology, and has been successfully commercialized in products such as sweeping robots, and the constructed maps are highly robust and accurate.
  • current visual SLAM uses only a small part of the information in the image; high-level semantic information, such as which objects the picture contains and which scene it currently shows, goes completely unused. Perceiving the content of the environment, so that the machine understands its surroundings both geometrically and semantically and performs corresponding evasive actions, is an important development direction of visual path planning.
  • the embodiments of the present application provide a walking method, a device, and a computer storage medium.
  • the embodiment of the present application provides a walking method, which is applied to cleaning equipment, and the method includes:
  • the first segmented image includes at least a target area of a walkable path;
  • a target point of the target area is determined, and a walking path is determined based on the target point.
  • segmenting the target image to obtain at least one segmented image includes:
  • the target image is segmented to obtain at least one segmented image
  • the image segmentation method is implemented based on a preset segmentation model.
  • performing image recognition on the at least one segmented image to determine the first segmented image includes:
  • Image recognition is performed on each segmented image in the at least one segmented image using a preset image recognition model, and a segmented image including a target area with a walkable path is determined as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label represents whether the corresponding training sample contains a target area with a walkable path.
  • the target point is the centroid of the target area
  • the determining the target point of the target area includes:
  • the centroid of the target region is determined based on the at least one connected region.
  • the determining of the walking path based on the target point includes:
  • the method further includes: acquiring a globally planned path; the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths; the sub-path includes at least: a second walking direction;
  • the determining of the walking path based on the target point includes:
  • determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • the target walking direction is determined according to the reference second walking direction and the first walking direction corresponding to the reference sub-path, including:
  • the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • the method also includes:
  • the weight set table at least includes: different weight sets corresponding to different distances;
  • An embodiment of the present application provides a walking device, which is applied to cleaning equipment, and the device includes: a first processing module, a second processing module, and a third processing module; wherein,
  • the first processing module is configured to obtain a target image; segment the target image to obtain at least one segmented image;
  • the second processing module is configured to perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image includes at least a target area of a walkable path;
  • the third processing module is configured to determine a target point of the target area, and determine a walking path based on the target point.
  • the first processing module is configured to use an image segmentation method to segment the target image to obtain at least one segmented image
  • the image segmentation method is implemented based on a preset segmentation model.
  • the second processing module is configured to perform image recognition on each segmented image in the at least one segmented image by using a preset image recognition model, and determine the segmented image including a target area with a walkable path as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label represents whether the corresponding training sample contains a target area with a walkable path.
  • the target point is the centroid of the target area
  • the third processing module is configured to convert the target area into a binary image
  • the centroid of the target region is determined based on the at least one connected region.
  • the third processing module is configured to determine the direction from the first position of the cleaning device to the target point as the first walking direction;
  • the third processing module is configured to obtain a globally planned path;
  • the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths;
  • the sub-path includes at least: the second walking direction;
  • the third processing module is configured to determine the direction of the cleaning device from the first position to the target point as the first walking direction
  • determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • the third processing module is configured to obtain a preset weight set; the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • Embodiments of the present application further provide a walking device, the device including: a processor and a memory configured to store a computer program that can be run on the processor; wherein the processor is configured to perform the steps of any of the walking methods described above when running the computer program.
  • Embodiments of the present application further provide a computer storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, implement the steps of any of the above walking methods.
  • the walking method, device, and computer storage medium provided by the embodiments of the present application include: acquiring a target image; segmenting the target image to obtain at least one segmented image; performing image recognition on the at least one segmented image, and determining a first segmented image; the first segmented image includes at least a target area of a walkable path; a target point of the target area is determined, and a walking path is determined based on the target point.
  • FIG. 1 is a schematic diagram of a path planning system
  • FIG. 2 is a schematic flowchart of a walking method according to an embodiment of the application.
  • FIG. 3 is a schematic flowchart of another walking method according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of an image segmentation provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a centroid-based walking provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a walking device according to an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of another walking device according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a route planning system framework; as shown in FIG. 1, the route planning system includes the following modules: a detection (Sensing) module, a localization (Localization) module, a map (Mapping) module, a planning (Planning) module, and a control (Control) module; wherein,
  • the Sensing module incorporates multiple sensors, including at least lidar, odometer, gyroscope, and accelerometer;
  • the Localization module is configured to determine the current location according to the data and maps collected by the sensor;
  • Mapping module configured to create a map based on the current location, radar data and the status of cleaning equipment (such as a sweeping robot);
  • Planning module configured to plan movement patterns and goals
  • the Control module is configured to control motion, including straight walking, rotating U-turn, edge and dynamic obstacle avoidance, etc.
  • the methods used by the existing Planning module include: a global planner and a local planner; the global planner often uses the A* (A-Star) algorithm, an effective direct search method for solving the shortest path in a static road network.
  • the direct search method is also a common heuristic for many other problems. After area segmentation is complete, the key to covering all areas one by one is how to travel from one area to another and find the next uncovered area; this is local planning, and current solutions such as depth-first traversal (DFS, Depth First Search) are as follows:
  • local path planning based on a gyroscope, which detects obstacles ahead by infrared and avoids them by colliding around obstacles (with a collision detection switch);
  • obstacles ahead are identified by radar and avoided by a dynamic window method based on laser beam sampling.
  • infrared obstacle avoidance is greatly affected by the shape, size, and color of objects; recognition is inaccurate, continuous collision detection is required, efficiency is low, and response is slow;
  • laser obstacle avoidance requires a complex control method and, due to the limitation of installation height, cannot identify short obstacles (such as shoelaces or wire harnesses), so its scope of use is limited.
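For reference, the A* global planner named above can be sketched as follows. This is a minimal illustration on a 4-connected occupancy grid, not the patent's implementation; the grid encoding (0 = free, 1 = obstacle), unit move cost, and Manhattan heuristic are assumptions.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via A*.

    grid: list of rows; 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples. Returns the path as a list of
    cells from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    counter = itertools.count()  # tie-breaker so the heap never compares cells
    open_heap = [(h(start), next(counter), 0, start, None)]
    came_from = {}               # cell -> parent, filled when a cell is settled
    g_score = {start: 0}

    while open_heap:
        _, _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:     # already settled via a cheaper entry
            continue
        came_from[cur] = parent
        if cur == goal:          # walk parents back to reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_score.get(nxt, float("inf")):
                    g_score[nxt] = g + 1
                    heapq.heappush(
                        open_heap, (g + 1 + h(nxt), next(counter), g + 1, nxt, cur)
                    )
    return None
```

A wall across the middle row forces the planner to detour around the obstacle rather than head straight for the goal.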
  • a target image is acquired; the target image is segmented to obtain at least one segmented image; image recognition is performed on the at least one segmented image to determine a first segmented image; the The first segmented image includes at least a target area of a walkable path; a target point of the target area is determined, and a walking path is determined based on the target point.
  • FIG. 2 is a schematic flowchart of a walking method provided by an embodiment of the application; as shown in FIG. 2 , applied to cleaning equipment, the method includes:
  • Step 201 acquiring a target image
  • Step 202 segment the target image to obtain at least one segmented image
  • Step 203 Perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image includes at least a target area of a walkable path;
  • Step 204 Determine a target point of the target area, and determine a walking path based on the target point.
  • the acquiring the target image includes:
  • the cleaning device obtains the image of the surrounding environment as the target image through the camera module it has or is connected to.
  • segmenting the target image to obtain at least one segmented image includes:
  • the target image is segmented to obtain at least one segmented image
  • the image segmentation method is implemented based on a preset segmentation model.
  • a neural network can be trained in advance to obtain a segmentation model, and the segmentation model is used to segment the target image.
  • model training can be performed in any manner to obtain a segmentation model; of course, other image segmentation methods can also be used to obtain at least one segmented image, which is not limited here.
  • the performing image recognition on the at least one segmented image to determine the first segmented image includes:
  • Image recognition is performed on each segmented image in the at least one segmented image using a preset image recognition model, and a segmented image including a target area with a walkable path is determined as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set;
  • the training sample set includes: at least one training sample and a label corresponding to each training sample; the label represents whether the corresponding training sample has a target area with a walkable path.
  • the training sample set can be pre-screened by developers.
  • the target point is the centroid of the target area
  • the determining the target point of the target area includes:
  • the centroid of the target region is determined based on the at least one connected region.
  • control can be performed only based on the local path, that is, the walking direction is determined only according to the determined centroid.
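The binarize / connected-region / centroid step described above can be sketched as follows. This is a minimal pure-Python illustration; 4-connectivity, breadth-first labeling, and taking the centroid of the largest region are assumptions made here, not details stated in the patent.

```python
from collections import deque

def largest_region_centroid(mask):
    """Label the 4-connected regions of a binary mask (1 = walkable) with
    BFS flood fill, then return the centroid (row, col) of the largest
    region, or None if the mask contains no walkable cell."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []  # cells of the largest region found so far

    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                region, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:  # flood fill one connected region
                    cr, cc = q.popleft()
                    region.append((cr, cc))
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                                   (cr, cc + 1), (cr, cc - 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] == 1 and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                if len(region) > len(best):
                    best = region

    if not best:
        return None
    # centroid = mean of the region's pixel coordinates
    return (sum(p[0] for p in best) / len(best),
            sum(p[1] for p in best) / len(best))
```

In practice the binary mask would come from the first segmented image (walkable pixels set to 1); the returned centroid is the target point used to steer.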
  • the determining a walking path based on the target point includes:
  • the first position refers to the current position of the cleaning device itself, which can be determined by its own positioning function.
  • the linear direction refers to the running direction of the cleaning device (specifically, it can be understood as the forward direction and the driving direction before adjustment).
  • in addition to the local path, control can also be combined with a preset globally planned path, so as to avoid erroneous calculation of the local path and improve accuracy.
  • the method further includes: acquiring a globally planned path; the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths; the sub-path includes at least: the second direction of travel;
  • the determining of the walking path based on the target point includes:
  • determining a local path of the cleaning device from the first position to the target point as a reference path querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • determining the target walking direction according to the reference second walking direction and the first walking direction corresponding to the reference sub-path includes:
  • the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • the weighting process refers to the sum of the first weight multiplied by the first angle and the second weight multiplied by the second angle.
  • the first angle refers to the angle difference between the first walking direction and the current walking direction
  • the second angle refers to the angular difference between the reference second travel direction and the current travel direction.
  • the walking direction at the current moment refers to the current walking direction of the cleaning device when the target image is acquired, that is, the walking direction before adjustment.
  • control parameters include, for example: the angle difference between the driving direction and the line connecting the robot's current pose and the destination pose, the difference between the robot's forward speed and the maximum speed, and the difference between the obstacle distance and the minimum braking distance for safe deceleration.
  • different control parameters correspond to different proportional-integral terms; therefore, after determining the target walking direction, other control parameters and their corresponding proportional-integral terms can also be combined in corresponding calculations (such as weighting) to obtain the final walking direction, walking speed, walking path, etc.
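The weighted combination of the local (first) and global (second) walking directions can be illustrated as follows. Degree units, the normalization of angle differences to (-180, 180], and the function names are assumptions of this sketch, not specifics from the patent.

```python
def angle_diff(target_deg, current_deg):
    """Signed smallest difference target - current, in (-180, 180] degrees."""
    d = (target_deg - current_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def target_walking_direction(current_deg, local_deg, global_deg,
                             w_local, w_global):
    """Blend the local (centroid) heading and the global sub-path heading:
    the adjustment is w_local * first_angle + w_global * second_angle,
    where each angle is the heading's offset from the current direction."""
    first_angle = angle_diff(local_deg, current_deg)    # local path offset
    second_angle = angle_diff(global_deg, current_deg)  # global path offset
    return (current_deg + w_local * first_angle
            + w_global * second_angle) % 360.0
```

With weights 0.7/0.3, a local heading 30 degrees right and a global heading 10 degrees left of the current direction yield an adjustment of 0.7*30 + 0.3*(-10) = 18 degrees.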
  • the method further includes:
  • the weight set table at least includes: different weight sets corresponding to different distances;
  • the globally planned path includes: at least one segment of reference sub-paths.
  • the globally planned path can be predetermined, for example using the A* algorithm, Dijkstra's algorithm, or other methods; the map can be obtained by the cleaning equipment walking through the area in advance, or determined by the user and sent to the cleaning equipment through another terminal.
  • the weight set table is set in consideration of the distance to obstacles and the need for precise control. Therefore, after the reference distance from the first position to an obstacle is determined, the weight set for that reference distance is determined; different weights are used in different situations to affect the final target walking direction and achieve more precise control.
  • the weight set table, the weight set therein and the corresponding reference distance are preset by the developer and stored in the cleaning device.
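A weight set table of the kind described above might look like the following sketch. The distance bands, the specific weight values, and the lookup convention (each key is an upper distance bound) are hypothetical illustrations, not values from the patent.

```python
import math

def select_weight_set(distance_m, table):
    """Return the (local_weight, global_weight) pair for the current
    reference distance to the obstacle. `table` maps an upper distance
    bound (meters) to a weight set; bands are checked in ascending order."""
    for upper_bound, weights in sorted(table.items()):
        if distance_m <= upper_bound:
            return weights
    raise ValueError("distance exceeds all bands in the table")

# a hypothetical developer-preset table: near an obstacle, trust the
# local (centroid) direction more; far away, trust the global plan more
WEIGHT_TABLE = {
    0.5: (0.8, 0.2),       # within 0.5 m of an obstacle
    2.0: (0.6, 0.4),       # 0.5 m to 2 m
    math.inf: (0.4, 0.6),  # beyond 2 m
}
```

The selected pair would then feed the direction-weighting step, e.g. `w_local, w_global = select_weight_set(1.2, WEIGHT_TABLE)`.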
  • the method can be applied to cleaning equipment; for example, a sweeping robot, an intelligent sweeping machine, a sweeping machine, and the like.
  • the cleaning device has a shooting module, such as a camera.
  • the camera may be a single camera or a dual camera; the single camera or dual camera is used to capture the target image, obtaining an image of the surrounding environment of the area to be cleaned, which is then used to determine obstacle avoidance during operation.
  • the area to be cleaned can also be marked.
  • the area to be cleaned can be marked with room names, such as master bedroom, second bedroom, room one, room two, and so on.
  • the cleaning device may have a processing module, a camera, a positioning module, etc., for performing the above steps.
  • the camera is configured to acquire environmental information
  • the processing module is configured to divide intervals and perform cleaning operations
  • the positioning module is configured to determine the location of the cleaning equipment.
  • the above functional division of the processing module, the camera, and the positioning module is only an example, rather than a limitation on the division of specific functional modules. In practical applications, different modules can be set as required to implement the above method.
  • FIG. 3 is a schematic flowchart of another walking method provided by an embodiment of the present application; as shown in FIG. 3 , the method can be applied to cleaning equipment, such as a cleaning robot; the method includes:
  • Step 301 obtaining surrounding environment information through a camera
  • the surrounding environment information at least includes: an image of the surrounding environment
  • Step 302 Use a deep-learning image segmentation method to determine a first segmented image according to the surrounding environment information; the first segmented image includes: a walkable first image area without obstacles;
  • a segmentation model can be pre-trained by using deep learning, and the image of the surrounding environment can be recognized by the segmentation model; the segmentation model can use point cloud segmentation, and after segmenting the image of the surrounding environment, different colors or different labels are used to indicate different blocks.
  • an image recognition model is used to identify each block, and a first image area that can walk without obstacles is determined.
  • the training method of the image recognition model has been described in the method shown in FIG. 2, and will not be repeated here.
  • FIG. 4 is a schematic diagram of an image segmentation provided by an embodiment of the present application; as shown in FIG. 4 , the image of the surrounding environment on the left is segmented and identified, and a first image area that can walk without obstacles is determined.
  • Step 303 Determine the centroid of the first image region (equivalent to the above-mentioned target point);
  • the step 303 includes:
  • FIG. 5 is a schematic diagram of centroid-based walking provided by an embodiment of the present application; as shown in FIG. 5, the first image area is processed to determine the centroid; then the device walks along the centroid direction (the direction from itself to the centroid), aligning the centroid direction with the center line of the image; during walking, the centroid is recalculated at regular intervals, that is, the centroid of the new walkable area is determined from each new image of the surrounding environment, and gear-train control is output to maintain the walking direction.
  • the center line of the image represents the direction of travel (also called the direction of a straight line, the direction of travel);
  • the image center line can be understood as a straight line from the current image center to the front of the image pixel.
  • the above-mentioned aligning the direction of the centroid with the center line of the image means that the center line of the image is adjusted based on the direction of the centroid, that is, the walking direction is adjusted to the direction of the centroid.
  • camera (Camera) pose No. 2 in the figure is only an example; the cleaning device may have multiple cameras.
  • Step 304 Based on the centroid of the first image area, adjust the direction of the straight line to align with the image center line of the currently observed image, and after adjustment, avoid obstacles and move forward.
  • the direction of the straight line that is, the walking direction
  • the center line of the currently observed image is the image center line aligned with the direction of the centroid.
  • the step 304 may further include: calculating the declination angle between the direction of the center of mass and the current walking direction, as one of the input conditions for the control of the gear train;
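The declination between the centroid direction and the image center line can be approximated from the centroid's horizontal pixel position. The pinhole-camera model, the field-of-view parameter, and the sign convention (positive = centroid to the right of center) below are assumptions for illustration, not details from the patent.

```python
import math

def centroid_declination(centroid_col, image_width, horizontal_fov_deg):
    """Approximate yaw offset (degrees) between the current heading (image
    center line) and the centroid direction, from the centroid's horizontal
    pixel position, using a simple pinhole model:
    offset = atan(x_norm * tan(fov / 2))."""
    half_w = image_width / 2.0
    x_norm = (centroid_col - half_w) / half_w  # -1 (left edge) .. +1 (right)
    return math.degrees(
        math.atan(x_norm * math.tan(math.radians(horizontal_fov_deg / 2.0)))
    )
```

A centroid on the image center line yields 0 degrees (no adjustment); a centroid at the right edge of a 90-degree-FOV image yields 45 degrees, the angle fed to the gear-train controller as the turn input.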
  • a cost value for the orientation can also be given through the globally planned path; after weighting, it is passed to the controller to drive the direction so as to avoid obstacles.
  • the global planning can be a given path (using A*, Dijkstra, etc.); the vector angle difference between the current local planning path and the global path is calculated to determine the final goal (such as going straight to a certain point, i.e., the centroid, at a certain angle), which guides the cleaning equipment forward.
  • it may be: the sum of the first weight multiplied by the first angle and the second weight multiplied by the second angle.
  • the first angle refers to the angle difference between the locally planned path (ie, the first travel direction) and the travel direction at the current moment;
  • the second angle refers to the angle difference between the globally planned path (that is, referring to the second travel direction) and the travel direction at the current moment.
  • the walking direction at the current moment refers to the current walking direction of the cleaning device when the target image is acquired, that is, the walking direction before adjustment.
  • the controller can store its input parameters (such as the distance to obstacles, the angle difference between the driving direction and the line connecting the robot's current pose and the end pose, the difference between the robot's forward speed and the maximum speed, and the difference between the obstacle distance and the minimum braking distance for safe deceleration) and the corresponding proportional-integral terms (the integral term applied to the weighted result), and weight these parameters accordingly.
  • FIG. 6 is a schematic structural diagram of a walking device provided by an embodiment of the application; the device is applied to cleaning equipment, and as shown in FIG. 6, the device includes: a first processing module, a second processing module, and a third processing module; wherein,
  • the first processing module is configured to obtain a target image; segment the target image to obtain at least one segmented image;
  • the second processing module is configured to perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image includes at least a target area of a walkable path;
  • the third processing module is configured to determine a target point of the target area, and determine a walking path based on the target point.
  • the first processing module is configured to use an image segmentation method to segment the target image to obtain at least one segmented image
  • the image segmentation method is implemented based on a preset segmentation model.
  • the second processing module is configured to perform image recognition on each segmented image in the at least one segmented image by using a preset image recognition model, and determine the segmented image including a target area with a walkable path as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label represents whether the corresponding training sample contains a target area with a walkable path.
  • the target point is the centroid of the target area
  • the third processing module is configured to convert the target area into a binary image
  • the centroid of the target region is determined based on the at least one connected region.
  • the third processing module is configured to determine the direction from the first position of the cleaning device to the target point as the first walking direction;
  • the third processing module is configured to obtain a globally planned path;
  • the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths;
  • the sub-path includes at least: the second direction of travel;
  • the third processing module is configured to determine the direction of the cleaning device from the first position to the target point as the first walking direction
  • determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • the third processing module is configured to obtain a preset weight set; the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • the walking device provided in the above embodiments implements the corresponding walking method;
  • the above division of program modules is only an example for illustration; in practical applications, the internal structure of the server can be divided into different program modules as required to complete all or part of the processing described above.
  • the apparatus provided in the above-mentioned embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment, which will not be repeated here.
  • FIG. 7 is a schematic structural diagram of another walking device provided by an embodiment of the present application; as shown in FIG. 7, the device 70 is applied to a server and includes: a processor 701 and a memory 702 configured to store a computer program that can run on the processor;
  • wherein, when the processor 701 is configured to run the computer program, it executes: acquiring a target image; segmenting the target image to obtain at least one segmented image; performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image at least includes a target area with a walkable path; determining a target point of the target area, and determining a walking path based on the target point.
  • the processor 701 is further configured to, when running the computer program, execute: use an image segmentation method to segment the target image to obtain at least one segmented image; the image segmentation method is based on a preset segmentation model implementation.
  • the processor 701 is further configured to, when running the computer program, execute: use a preset image recognition model to perform image recognition on each segmented image in the at least one segmented image, and determine that there is a A segmented image of the target area of the walking path, as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
  • the processor 701 is further configured to, when running the computer program, execute: convert the target area into a binary image;
  • the centroid of the target region is determined based on the at least one connected region.
  • the processor 701 is further configured to, when running the computer program, execute: determining the direction from the first position of the cleaning device to the target point as the first walking direction;
  • the processor 701 is further configured to, when running the computer program, execute: obtaining a globally planned path; the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths; the sub-path includes at least: the second walking direction;
  • determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • the processor 701 is further configured to, when running the computer program, execute: acquiring a preset weight set; the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • the processor 701 is further configured to, when running the computer program, execute: acquiring a preset weight set table; the weight set table at least includes: different weight sets corresponding to different distances;
  • the walking device provided in the above embodiments and the walking method embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiments, which will not be repeated here.
  • the apparatus 70 may further include: at least one network interface 703 .
  • the various components in device 70 are coupled together by bus system 704 .
  • the bus system 704 is used to implement the connection communication between these components.
  • the bus system 704 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as bus system 704 in FIG. 7 .
  • there may be at least one processor 701.
  • the network interface 703 is used for wired or wireless communication between the apparatus 70 and other devices.
  • the memory 702 in this embodiment of the present application is used to store various types of data to support the operation of the device 70 .
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 701 or implemented by the processor 701 .
  • the processor 701 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 701 or an instruction in the form of software.
  • the above-mentioned processor 701 may be a general-purpose processor, a digital signal processor (DSP, DiGital Signal Processor), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor 701 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702, and completes the steps of the foregoing method in combination with its hardware.
  • apparatus 70 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components, for performing the aforementioned method.
  • ASIC Application Specific Integrated Circuit
  • DSP Digital Signal Processor
  • PLD Programmable Logic Device
  • CPLD Complex Programmable Logic Device
  • FPGA Field-Programmable Gate Array
  • MCU Micro Controller Unit
  • Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, executes: acquiring a target image; segmenting the target image to obtain at least one segment image; perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image includes at least a target area of a walkable path; determine a target point of the target area, and determine walking based on the target point path.
  • the computer program when the computer program is run by the processor, execute: using an image segmentation method to segment the target image to obtain at least one segmented image; the image segmentation method is implemented based on a preset segmentation model.
  • when the computer program is run by the processor, it executes: using a preset image recognition model to perform image recognition on each segmented image in the at least one segmented image, and determining a segmented image that includes a target area with a walkable path, as the first segmented image;
  • the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
  • the centroid of the target region is determined based on the at least one connected region.
  • when the computer program is run by the processor, it executes: determining the direction from the first position of the cleaning device to the target point as the first walking direction;
  • the computer program when the computer program is run by the processor, execute: obtain a globally planned path; the globally planned path includes: at least one sub-path and a cost value corresponding to each of the sub-paths;
  • the sub-path includes at least: a second walking direction;
  • determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
  • the computer program when the computer program is run by the processor, execute: acquiring a preset weight set; the weight set at least includes: a first weight corresponding to a local path and a second weight corresponding to a globally planned path;
  • the reference second walking direction and the first walking direction are weighted according to the first weight and the second weight to obtain a target walking direction.
  • the computer program when the computer program is run by the processor, execute: acquiring a preset weight set table; the weight set table at least includes: different weight sets corresponding to different distances;
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative; the division of the units is only a logical function division, and in actual implementation there may be other division methods, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
  • the unit described above as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place or distributed to multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may separately serve as a unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented either in the form of hardware or in the form of hardware plus software functional units.
  • the aforementioned program can be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiment; and the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
  • ROM read-only memory
  • RAM random access memory
  • magnetic disk or optical disk and other media on which program code can be stored
  • if the above-mentioned integrated units of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk and other mediums that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

A walking method, apparatus, and computer storage medium, the method including: acquiring a target image (201); segmenting the target image to obtain at least one segmented image (202); performing image recognition on the at least one segmented image to determine a first segmented image (203); and determining a target point of the target area, and determining a walking path based on the target point (204).

Description

A walking method, apparatus and computer storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 202110256084.5 filed on 09 March 2021, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to household-appliance control technology, and in particular to a walking method, apparatus and computer storage medium.
Background
Autonomous cleaning devices bring great convenience to modern life and reduce labor intensity. The household sweeping robot is a common autonomous cleaning device; such a robot can travel through the rooms of a home on its own while sucking in surrounding dust and debris to clean the floor. As sweeping robots grow ever more intelligent, autonomous mapping and cleaning planning have become indispensable functions.
With the rapid development of artificial intelligence (AI) in recent years, simultaneous localization and mapping (SLAM) technology, which addresses the robot perception problem, is widely used in fields such as autonomous driving, robot navigation and augmented reality (AR). Because Global Positioning System (GPS) positioning accuracy is poor, mainstream SLAM currently divides into laser-sensor-based SLAM and vision-sensor-based SLAM. Laser-sensor-based SLAM is the most stable positioning technology and has been successfully commercialized in products such as sweeping robots; the maps it builds are highly robust and accurate.
Current visual SLAM, however, uses only a small fraction of the information in an image; high-level semantic information, such as which objects the image contains and which scene the robot is currently in, goes entirely unused. Perceiving the content of the environment, letting the machine understand its surroundings both geometrically and semantically, and executing corresponding avoidance actions is an important development direction for visual path planning.
Summary
To solve the existing technical problems, embodiments of this application provide a walking method, apparatus and computer storage medium.
To this end, the technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides a walking method, applied to a cleaning device, the method including:
acquiring a target image;
segmenting the target image to obtain at least one segmented image;
performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path;
determining a target point of the target area, and determining a walking path based on the target point.
Preferably, segmenting the target image to obtain at least one segmented image includes:
segmenting the target image using an image segmentation method to obtain at least one segmented image;
the image segmentation method being implemented based on a preset segmentation model.
Preferably, performing image recognition on the at least one segmented image to determine a first segmented image includes:
performing image recognition on each of the at least one segmented image using a preset image recognition model, and determining, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model being obtained by training a preset neural network with a training sample set; the training sample set including: at least one training sample and a label corresponding to each training sample; the label indicating whether the corresponding training sample contains a target area with a walkable path.
Preferably, the target point is the centroid of the target area;
determining the target point of the target area includes:
converting the target area into a binary image;
determining at least one connected region in the binary image using a connected-component labeling algorithm;
determining the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
Preferably, determining the walking path based on the target point includes:
determining the direction from a first position of the cleaning device to the target point as a first walking direction;
adjusting the straight-line direction of the cleaning device to the first walking direction;
traveling from the first position to the target point along the first walking direction.
Preferably, the method further includes: acquiring a globally planned path; the globally planned path including: at least one sub-path and a cost value corresponding to each sub-path; each sub-path including at least: a second walking direction;
determining the walking path based on the target point then includes:
determining the direction of the cleaning device from the first position to the target point as a first walking direction;
determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
determining a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjusting the straight-line direction of the cleaning device to the target walking direction;
traveling from the first position to the target point along the target walking direction.
Preferably, determining the target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction includes:
acquiring a preset weight set; the weight set including at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
weighting the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
Preferably, the method further includes:
acquiring a preset weight-set table; the weight-set table including at least: different weight sets corresponding to different distances;
determining a reference distance from the first position to an obstacle, and querying the preset weight-set table according to the reference distance to obtain the weight set corresponding to the reference distance.
An embodiment of this application provides a walking apparatus, applied to a cleaning device, the apparatus including: a first processing module, a second processing module and a third processing module; wherein,
the first processing module is configured to acquire a target image, and segment the target image to obtain at least one segmented image;
the second processing module is configured to perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path;
the third processing module is configured to determine a target point of the target area, and determine a walking path based on the target point.
Preferably, the first processing module is configured to segment the target image using an image segmentation method to obtain at least one segmented image;
the image segmentation method is implemented based on a preset segmentation model.
Preferably, the second processing module is configured to perform image recognition on each of the at least one segmented image using a preset image recognition model, and determine, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
Preferably, the target point is the centroid of the target area;
the third processing module is configured to convert the target area into a binary image;
determine at least one connected region in the binary image using a connected-component labeling algorithm;
and determine the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
Preferably, the third processing module is configured to determine the direction from a first position of the cleaning device to the target point as a first walking direction;
adjust the straight-line direction of the cleaning device to the first walking direction;
and travel from the first position to the target point along the first walking direction.
Preferably, the third processing module is configured to acquire a globally planned path; the globally planned path including: at least one sub-path and a cost value corresponding to each sub-path; each sub-path including at least: a second walking direction;
correspondingly, the third processing module is configured to determine the direction of the cleaning device from the first position to the target point as a first walking direction;
determine a local path of the cleaning device from the first position to the target point as a reference path; query the globally planned path according to the reference path, and determine a reference sub-path corresponding to the reference path;
determine a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjust the straight-line direction of the cleaning device to the target walking direction;
and travel from the first position to the target point along the target walking direction.
Preferably, the third processing module is configured to acquire a preset weight set; the weight set including at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
and weight the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
An embodiment of this application further provides a walking apparatus, the apparatus including: a processor and a memory configured to store a computer program runnable on the processor; wherein the processor is configured, when running the computer program, to perform the steps of any of the above walking methods.
An embodiment of this application further provides a computer storage medium having computer instructions stored thereon which, when executed by a processor, implement the steps of any of the above walking methods.
In the walking method, apparatus and computer storage medium provided by the embodiments of this application, the method includes: acquiring a target image; segmenting the target image to obtain at least one segmented image; performing image recognition on the at least one segmented image to determine a first segmented image, the first segmented image including at least a target area with a walkable path; and determining a target point of the target area, and determining a walking path based on the target point. With the technical solutions of the embodiments of this application, obstacle avoidance can be achieved quickly, obstacle-avoidance efficiency is improved, and the applicable obstacle-avoidance scenarios are broad.
Brief description of the drawings
FIG. 1 is a schematic diagram of a path planning system;
FIG. 2 is a schematic flowchart of a walking method according to an embodiment of this application;
FIG. 3 is a schematic flowchart of another walking method according to an embodiment of this application;
FIG. 4 is a schematic diagram of image segmentation provided by an embodiment of this application;
FIG. 5 is a schematic diagram of centroid-based walking provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a walking apparatus according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of another walking apparatus according to an embodiment of this application.
Detailed description
To help those skilled in the art better understand the solutions of the embodiments of this application, the technical solutions in the embodiments of this application are described clearly below with reference to the accompanying drawings. Plainly, the described embodiments are only some, not all, of the embodiments of this application.
The terms "first", "second" and "third" in the specification, claims and drawings of this application are used to distinguish similar objects and do not necessarily describe a particular order or sequence. Moreover, the terms "include" and "have", and any variants thereof, are intended to cover non-exclusive inclusion, for example, inclusion of a series of steps or units. A method, system, product or device is not necessarily limited to the steps or units explicitly listed, and may include other steps or units not explicitly listed or inherent to the process, method, product or device.
Before describing this application in further detail with reference to the embodiments, the related art is first explained.
FIG. 1 is a schematic diagram of a path planning system framework; as shown in FIG. 1, the path planning system includes the following modules: a sensing module, a localization module, a mapping module, a planning module and a control module; wherein,
the sensing module fuses multiple sensors, including at least a lidar, an odometer, a gyroscope and an accelerometer;
the localization module is configured to determine the current position from the sensor data and the map;
the mapping module is configured to create the map from the current position, the radar data and the state of the cleaning device (such as a sweeping robot);
the planning module is configured to plan the motion mode and target;
the control module is configured to control motion, including straight-line walking, rotation and U-turns, edge-following and dynamic obstacle avoidance.
The methods used by existing planning modules include global planning (global planner) and local planning (local planner). Global planning often uses the A* (A-Star) algorithm, the most effective direct search method for finding the shortest path in a static road network and a common heuristic for many other problems. Once region segmentation is complete, the question is how to cover all regions one by one; the key is how to go from one region to another and find the next uncovered region. This is local planning, and a current solution such as depth-first search (DFS) works as follows:
start from an unvisited vertex as the starting vertex, and walk along the current vertex's edges to unvisited vertices; when there are no unvisited vertices left, return to the previous vertex and continue probing other vertices until all vertices have been visited.
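The depth-first traversal described above can be sketched in a few lines of Python. This is a minimal illustration only; the iterative stack form (which makes backtracking implicit) and the adjacency-dictionary representation are assumptions, not the patent's implementation:

```python
def dfs_cover(start, neighbors):
    """Depth-first traversal of a region-adjacency graph: visit an unvisited
    vertex, walk on to unvisited neighbors, and backtrack (implicitly, via
    the stack) when none remain, until every vertex has been visited."""
    visited, order, stack = set(), [], [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # push unvisited neighbors; popping later realizes the backtracking
        stack.extend(n for n in neighbors.get(v, []) if n not in visited)
    return order
```

Each vertex here would stand for one cleaning region, and `order` is one possible coverage sequence.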
When there are obstacles in the region, especially dynamic obstacles, avoiding them while still completing local planning quickly and effectively becomes a difficult problem; sensors are needed to detect the obstacles, and avoidance actions must be executed based on the sensor data. Current solutions are as follows:
gyroscope-based local path planning, which detects obstacles ahead by infrared and avoids them by a go-around-on-collision method (equipped with a collision-detection switch);
local path planning based on laser direct structuring (LDS), which identifies obstacles ahead by radar and avoids them with a dynamic-window method over sampled laser beams.
The above methods have the following problems:
infrared obstacle avoidance: strongly affected by object shape, size and color, recognition is inaccurate, continuous collision detection is required, and it is inefficient and slow to react;
laser obstacle avoidance: the control method is complex and, because of mounting-height limits, low obstacles (such as shoelaces or wire bundles) cannot be recognized, so the range of use is limited.
On this basis, in the various embodiments of this application: a target image is acquired; the target image is segmented to obtain at least one segmented image; image recognition is performed on the at least one segmented image to determine a first segmented image; the first segmented image includes at least a target area with a walkable path; a target point of the target area is determined, and a walking path is determined based on the target point.
This application is described in further detail below with reference to the drawings and specific embodiments.
FIG. 2 is a schematic flowchart of a walking method provided by an embodiment of this application; as shown in FIG. 2, the method is applied to a cleaning device and includes:
step 201: acquiring a target image;
step 202: segmenting the target image to obtain at least one segmented image;
step 203: performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path;
step 204: determining a target point of the target area, and determining a walking path based on the target point.
In an embodiment, acquiring the target image includes:
the cleaning device captures an image of its surroundings, as the target image, through a shooting module it has or is connected to.
In an embodiment, segmenting the target image to obtain at least one segmented image includes:
segmenting the target image using an image segmentation method to obtain at least one segmented image;
the image segmentation method is implemented based on a preset segmentation model.
That is, a neural network can be trained in advance to obtain a segmentation model for segmenting the target image. Any model-training approach can be used to obtain the segmentation model; of course, other image segmentation methods can also be used, as long as at least one segmented image is obtained, which is not limited here.
In an embodiment, performing image recognition on the at least one segmented image to determine a first segmented image includes:
performing image recognition on each of the at least one segmented image using a preset image recognition model, and determining, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model is obtained by training a preset neural network with a training sample set;
the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
The training sample set may be pre-screened by developers.
In an embodiment, the target point is the centroid of the target area;
determining the target point of the target area includes:
converting the target area into a binary image;
determining at least one connected region in the binary image using a connected-component labeling algorithm;
determining the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
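The three steps above (binary image, connected-component labeling, centroid from geometric moments) can be sketched in self-contained Python. This is a minimal sketch under assumptions: the binary image is a nested 0/1 list, 4-connectivity is used, and the centroid of the largest region is taken; the helper name is hypothetical:

```python
def largest_region_centroid(binary):
    """Label 4-connected regions of a 0/1 grid by flood fill, then return
    the centroid (row, col) of the largest region using the zeroth and
    first geometric moments."""
    h, w = len(binary), len(binary[0])
    seen, regions = set(), []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and (i, j) not in seen:
                stack, pixels = [(i, j)], []
                seen.add((i, j))
                while stack:                      # flood-fill one region
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                        if 0 <= nr < h and 0 <= nc < w \
                                and binary[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                regions.append(pixels)
    pixels = max(regions, key=len)                # largest connected region
    m00 = len(pixels)                             # zeroth moment (area)
    m10 = sum(r for r, _ in pixels)               # first moment in rows
    m01 = sum(c for _, c in pixels)               # first moment in cols
    return m10 / m00, m01 / m00
```

In a real implementation the labeling and moments would typically come from an image-processing library rather than hand-rolled loops.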
In practice, control can be based on the local path alone, i.e., the walking direction is determined only from the computed centroid.
In an embodiment, determining the walking path based on the target point includes:
determining the direction from a first position of the cleaning device to the target point as a first walking direction;
adjusting the straight-line direction of the cleaning device to the first walking direction;
traveling from the first position to the target point along the first walking direction.
Here, the first position refers to the cleaning device's own current position, which it can determine through its own positioning function.
The straight-line direction refers to the cleaning device's walking direction (which can be understood specifically as the forward or travel direction before adjustment).
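Determining the first walking direction from the first position to the target point can be sketched as a heading angle. This is an assumed formulation (planar coordinates, heading as an `atan2` angle in radians); the function name and coordinate convention are hypothetical, not the patent's:

```python
import math

def first_walking_direction(first_position, target_point):
    """Heading angle (radians) from the device's first position to the
    target point, measured counterclockwise from the +x axis."""
    dx = target_point[0] - first_position[0]
    dy = target_point[1] - first_position[1]
    return math.atan2(dy, dx)
```

The device would then rotate until its straight-line direction matches this angle before driving toward the target point.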
In practice, besides the local path, control can also be combined with a preset globally planned path, to avoid erroneous local-path calculations and improve accuracy.
In an embodiment, the method further includes: acquiring a globally planned path; the globally planned path includes: at least one sub-path and a cost value corresponding to each sub-path; each sub-path includes at least: a second walking direction;
correspondingly, determining the walking path based on the target point includes:
determining the direction of the cleaning device from the first position to the target point as a first walking direction;
determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
determining a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjusting the straight-line direction of the cleaning device to the target walking direction;
traveling from the first position to the target point along the target walking direction.
Here, determining the target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction includes:
acquiring a preset weight set; the weight set includes at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
weighting the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
The weighting refers to the sum of the first weight multiplied by a first angle and the second weight multiplied by a second angle.
The first angle is the angle difference between the first walking direction and the walking direction at the current moment;
the second angle is the angle difference between the reference second walking direction and the walking direction at the current moment.
Here, the walking direction at the current moment is the direction in which the cleaning device is walking when the target image is acquired, i.e., the walking direction before adjustment.
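The weighted step just described can be written out as a short function. This is one plausible reading of the weighting, offered as a sketch: it returns the weighted steering offset relative to the current heading, with all angles in radians; the function and parameter names are assumptions:

```python
def fuse_directions(current, first_dir, second_dir, w_local, w_global):
    """Weighted steering offset = w_local * (first angle)
    + w_global * (second angle), where the first angle is the local-path
    (first walking direction) difference from the current heading and the
    second angle is the global-path (reference second walking direction)
    difference from the current heading."""
    first_angle = first_dir - current     # local-path angle difference
    second_angle = second_dir - current   # global-path angle difference
    return w_local * first_angle + w_global * second_angle
```

Adding the returned offset to the current heading would give the target walking direction before any further control-parameter weighting.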
Of course, other control parameters must also be consulted while walking (such as the angle difference between the travel direction and the line connecting the robot's current pose and end pose, the difference between the robot's forward speed and its maximum speed, the difference between the obstacle distance and the minimum safe-deceleration braking distance, and so on), with different proportional-integral terms corresponding to different control parameters. Therefore, after the target walking direction is determined, the other control parameters and their corresponding proportional-integral terms can be combined in a further calculation (such as weighting) to obtain the final walking direction, walking speed, walking path, and so on.
Specifically, the method further includes:
acquiring a preset weight-set table; the weight-set table includes at least: different weight sets corresponding to different distances;
determining a reference distance from the first position to an obstacle, and querying the preset weight-set table according to the reference distance to obtain the weight set corresponding to the reference distance.
Specifically, the globally planned path includes at least one reference sub-path. The globally planned path can be determined in advance, for example with the A-star or Dijkstra (Djst) algorithms; the map can be obtained by the cleaning device walking the region once in advance, or determined by the user through another terminal and sent to the cleaning device.
Here, the weight-set table is set up because the distance to the obstacle bears on achieving precise control; therefore, after the reference distance from the first position to the obstacle is determined, the weight set for that reference distance is determined, and different weights are used at different distances to influence the final target walking direction and achieve more precise control.
The weight-set table, the weight sets in it and the corresponding reference distances are preset by developers and stored in the cleaning device.
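The distance-indexed lookup into the weight-set table can be sketched as follows. The table layout (distance band upper bounds mapped to `(w_local, w_global)` pairs) is purely an assumption for illustration; the patent does not specify the table's structure:

```python
def lookup_weights(weight_table, reference_distance):
    """Return the (w_local, w_global) pair of the first distance band whose
    upper bound is >= the obstacle distance; beyond the last band, fall
    back to the farthest band's weights."""
    for max_dist, weights in sorted(weight_table.items()):
        if reference_distance <= max_dist:
            return weights
    return weight_table[max(weight_table)]  # farthest band as fallback
```

A table like `{0.5: (0.3, 0.7), 2.0: (0.6, 0.4)}` would then favor the global plan near obstacles and the local path farther away, or vice versa, depending on how the developer tunes it.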
Here, the method can be applied to cleaning devices, for example: sweeping robots, intelligent sweepers, floor sweepers and the like.
The cleaning device has a shooting module, such as a camera. The camera can be a single camera or a dual camera, used to shoot the target image so as to obtain an image of the surroundings of the area to be cleaned and hence determine obstacle avoidance during operation. The area to be cleaned can also be labeled; for example, it can be labeled with a room name, such as master bedroom, second bedroom, room one, room two, and so on.
It should be noted that the cleaning device may have a processing module, a camera, a positioning module and so on for performing the above steps. For example, the camera is configured to acquire environment information; the processing module is configured to divide regions and perform the cleaning operation; the positioning module is configured to determine the cleaning device's position. The above division of functions among the processing module, camera and positioning module is only an example, not a limitation on the division into specific functional modules; in practice, different modules can be configured as needed to implement the above method.
FIG. 3 is a schematic flowchart of another walking method provided by an embodiment of this application; as shown in FIG. 3, the method can be applied to a cleaning device such as a sweeping robot, and includes:
step 301: acquiring surrounding-environment information through the camera;
here, the surrounding-environment information includes at least: an image of the surroundings;
step 302: using a deep-learning image segmentation method, determining a first segmented image from the surrounding-environment information; the first segmented image includes: a first image region that is walkable and free of obstacles;
here, specifically, a segmentation model can be pre-trained with deep learning and used to recognize the image of the surroundings; the segmentation model can use point-cloud segmentation, with different colors or different markers denoting different blocks after the image of the surroundings has been segmented.
Then, an image recognition model is used to recognize each block and determine the first image region that is walkable and free of obstacles. The training method of the image recognition model has already been described for the method shown in FIG. 2 and is not repeated here.
FIG. 4 is a schematic diagram of image segmentation provided by an embodiment of this application; as shown in FIG. 4, the image of the surroundings on the left is segmented and recognized, and the first image region that is walkable and free of obstacles is determined.
step 303: determining the centroid of the first image region (equivalent to the above target point);
step 303 includes:
converting the first image region into a binary image;
determining all connected regions in the binary image through a connected-component labeling algorithm, and labeling them separately;
obtaining the centroid of the first image region from each connected region using a geometric-moment algorithm;
drawing the connected regions and the centroid in different colors, and outputting the processed image.
FIG. 5 is a schematic diagram of centroid-based walking provided by an embodiment of this application; as shown in FIG. 5, the first image region is processed to determine the centroid; the device then walks along the centroid direction (the direction from itself to the centroid), aligning the centroid direction with the image center line. While walking, the centroid is recalculated at intervals, i.e., a new centroid of the walkable region is determined for a new image of the surroundings, and wheel-train control is output to keep the walking direction.
Here, from the cleaning device's point of view, the image center line represents the walking direction (also called the straight-line direction or travel direction);
from the image's point of view, the image center line can be understood as a straight line starting from the current image center and pointing straight ahead of the image pixels.
Aligning the centroid direction with the image center line, as above, means adjusting the image center line to the centroid direction, i.e., adjusting the walking direction to the centroid direction.
It should be noted that the pose-2 camera (Camera) in the figure is one example; the cleaning device can have multiple cameras.
step 304: based on the centroid of the first image region, adjusting the straight-line direction to align with the image center line of the currently observed image; after adjustment, the device moves forward avoiding obstacles.
Specifically, with reference to FIG. 5, once the centroid of the first image region is determined, the straight-line direction (i.e., the walking direction) is adjusted to align with the currently observed image center line (the currently observed image center line being the image center line after alignment with the centroid direction), as the first walking direction. In other words, the centroid direction is taken as the walking direction.
Specifically, step 304 can further include: computing the deviation angle between the centroid direction and the current walking direction, as one of the wheel-train control input conditions;
a heading cost value can also be given by the globally planned path and, after weighting, fed to the controller for direction driving, so as to avoid obstacles.
Here, the global plan can be an already given path (using A-star, Dijkstra, etc.); the vector angle difference between the current locally planned path and the global path is computed to determine the final target (e.g. going straight at some angle to a certain point, namely the centroid), and the cleaning device is guided forward based on the final target. For example, it can be: the first weight multiplied by the first angle, plus the second weight multiplied by the second angle.
Here, the first angle is the angle difference between the locally planned path (i.e., the first walking direction) and the walking direction at the current moment;
the second angle is the angle difference between the globally planned path (i.e., the reference second walking direction) and the walking direction at the current moment.
Here, the walking direction at the current moment is the direction in which the cleaning device is walking when the target image is acquired, i.e., the walking direction before adjustment.
The specific process has already been described for the method shown in FIG. 2 and is not repeated here.
It should be noted that the controller can internally store controller input parameters (such as the distance to the obstacle, the angle difference between the travel direction and the line connecting the robot's current pose and end pose, the difference between the robot's forward speed and its maximum speed, and the difference between the obstacle distance and the minimum safe-deceleration braking distance), the corresponding proportional-integral terms (the weighting result, fed to the integral term), and so on, and weighting is performed over these parameters.
FIG. 6 is a schematic structural diagram of a walking apparatus provided by an embodiment of this application; the apparatus is applied to a cleaning device and, as shown in FIG. 6, includes: a first processing module, a second processing module and a third processing module; wherein,
the first processing module is configured to acquire a target image, and segment the target image to obtain at least one segmented image;
the second processing module is configured to perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path;
the third processing module is configured to determine a target point of the target area, and determine a walking path based on the target point.
Specifically, the first processing module is configured to segment the target image using an image segmentation method to obtain at least one segmented image;
the image segmentation method is implemented based on a preset segmentation model.
Specifically, the second processing module is configured to perform image recognition on each of the at least one segmented image using a preset image recognition model, and determine, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
Specifically, the target point is the centroid of the target area;
the third processing module is configured to convert the target area into a binary image;
determine at least one connected region in the binary image using a connected-component labeling algorithm;
and determine the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
Specifically, the third processing module is configured to determine the direction from a first position of the cleaning device to the target point as a first walking direction;
adjust the straight-line direction of the cleaning device to the first walking direction;
and travel from the first position to the target point along the first walking direction.
Specifically, the third processing module is configured to acquire a globally planned path; the globally planned path including: at least one sub-path and a cost value corresponding to each sub-path; each sub-path including at least: a second walking direction;
correspondingly, the third processing module is configured to determine the direction of the cleaning device from the first position to the target point as a first walking direction;
determine a local path of the cleaning device from the first position to the target point as a reference path; query the globally planned path according to the reference path, and determine a reference sub-path corresponding to the reference path;
determine a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjust the straight-line direction of the cleaning device to the target walking direction;
and travel from the first position to the target point along the target walking direction.
Specifically, the third processing module is configured to acquire a preset weight set; the weight set including at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
and weight the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
It should be noted that, when the walking apparatus provided by the above embodiment implements the corresponding walking method, the division into the above program modules is only an example; in practice, the above processing can be assigned to different program modules as needed, i.e., the internal structure of the server is divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided by the above embodiment belongs to the same concept as the corresponding method embodiment; its specific implementation process is detailed in the method embodiment and is not repeated here.
FIG. 7 is a schematic structural diagram of another walking apparatus provided by an embodiment of this application; as shown in FIG. 7, the apparatus 70 is applied to a server and includes: a processor 701 and a memory 702 configured to store a computer program runnable on the processor; wherein the processor 701 is configured, when running the computer program, to execute: acquiring a target image; segmenting the target image to obtain at least one segmented image; performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path; determining a target point of the target area, and determining a walking path based on the target point.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: segmenting the target image using an image segmentation method to obtain at least one segmented image; the image segmentation method is implemented based on a preset segmentation model.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: performing image recognition on each of the at least one segmented image using a preset image recognition model, and determining, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: converting the target area into a binary image;
determining at least one connected region in the binary image using a connected-component labeling algorithm;
determining the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: determining the direction from a first position of the cleaning device to the target point as a first walking direction;
adjusting the straight-line direction of the cleaning device to the first walking direction;
traveling from the first position to the target point along the first walking direction.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: acquiring a globally planned path; the globally planned path including: at least one sub-path and a cost value corresponding to each sub-path; each sub-path including at least: a second walking direction;
and determining the direction of the cleaning device from the first position to the target point as a first walking direction;
determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
determining a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjusting the straight-line direction of the cleaning device to the target walking direction;
traveling from the first position to the target point along the target walking direction.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: acquiring a preset weight set; the weight set including at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
weighting the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
In an embodiment, the processor 701 is further configured to, when running the computer program, execute: acquiring a preset weight-set table; the weight-set table including at least: different weight sets corresponding to different distances;
determining a reference distance from the first position to an obstacle, and querying the preset weight-set table according to the reference distance to obtain the weight set corresponding to the reference distance.
It should be noted that the walking apparatus provided by the above embodiment belongs to the same concept as the walking method embodiments; its specific implementation process is detailed in the method embodiments and is not repeated here.
In practice, the apparatus 70 may further include: at least one network interface 703. The components of the apparatus 70 are coupled together by a bus system 704. It will be appreciated that the bus system 704 implements connection and communication between these components. Besides a data bus, the bus system 704 also includes a power bus, a control bus and a status-signal bus; for clarity, however, all the buses are labeled bus system 704 in FIG. 7. There may be at least one processor 701. The network interface 703 is used for wired or wireless communication between the apparatus 70 and other devices.
The memory 702 in this embodiment of this application stores various types of data to support the operation of the apparatus 70.
The methods disclosed in the above embodiments of this application can be applied to, or implemented by, the processor 701. The processor 701 may be an integrated-circuit chip with signal-processing capability. In implementation, each step of the above method can be completed by an integrated hardware logic circuit in the processor 701 or by instructions in software form. The above processor 701 may be a general-purpose processor, a digital signal processor (DSP, DiGital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. The processor 701 can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of this application can be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium located in the memory 702; the processor 701 reads the information in the memory 702 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the apparatus 70 can be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components, for executing the foregoing method.
An embodiment of this application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes: acquiring a target image; segmenting the target image to obtain at least one segmented image; performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image including at least a target area with a walkable path; determining a target point of the target area, and determining a walking path based on the target point.
In an embodiment, when the computer program is run by the processor, it executes: segmenting the target image using an image segmentation method to obtain at least one segmented image; the image segmentation method is implemented based on a preset segmentation model.
In an embodiment, when the computer program is run by the processor, it executes: performing image recognition on each of the at least one segmented image using a preset image recognition model, and determining, as the first segmented image, a segmented image that includes a target area with a walkable path;
the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set includes: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
In an embodiment, when the computer program is run by the processor, it executes: converting the target area into a binary image;
determining at least one connected region in the binary image using a connected-component labeling algorithm;
determining the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
In an embodiment, when the computer program is run by the processor, it executes: determining the direction from a first position of the cleaning device to the target point as a first walking direction;
adjusting the straight-line direction of the cleaning device to the first walking direction;
traveling from the first position to the target point along the first walking direction.
In an embodiment, when the computer program is run by the processor, it executes: acquiring a globally planned path; the globally planned path including: at least one sub-path and a cost value corresponding to each sub-path; each sub-path including at least: a second walking direction;
and determining the direction of the cleaning device from the first position to the target point as a first walking direction;
determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
determining a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
adjusting the straight-line direction of the cleaning device to the target walking direction;
traveling from the first position to the target point along the target walking direction.
In an embodiment, when the computer program is run by the processor, it executes: acquiring a preset weight set; the weight set including at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
weighting the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
In an embodiment, when the computer program is run by the processor, it executes: acquiring a preset weight-set table; the weight-set table including at least: different weight sets corresponding to different distances;
determining a reference distance from the first position to an obstacle, and querying the preset weight-set table according to the reference distance to obtain the weight set corresponding to the reference distance.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and there can be other divisions in actual implementation, such as: multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the coupling, direct coupling or communication connections between the components shown or discussed can be through some interfaces, and the indirect coupling or communication connections between devices or units can be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they can be located in one place or distributed across multiple network units. Some or all of the units can be selected as actually needed to achieve the purpose of the solutions of this embodiment.
In addition, the functional units in the embodiments of this application can all be integrated in one processing unit, or each unit can serve separately as a unit, or two or more units can be integrated in one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
Alternatively, if the above integrated units of this application are implemented in the form of software functional modules and sold or used as independent products, they can also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device or the like) to execute all or part of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

  1. A walking method, applied to a cleaning device, the method comprising:
    acquiring a target image;
    segmenting the target image to obtain at least one segmented image;
    performing image recognition on the at least one segmented image to determine a first segmented image; the first segmented image comprising at least a target area with a walkable path;
    determining a target point of the target area, and determining a walking path based on the target point.
  2. The method according to claim 1, wherein segmenting the target image to obtain at least one segmented image comprises:
    segmenting the target image using an image segmentation method to obtain at least one segmented image;
    the image segmentation method being implemented based on a preset segmentation model.
  3. The method according to claim 1, wherein performing image recognition on the at least one segmented image to determine a first segmented image comprises:
    performing image recognition on each segmented image of the at least one segmented image using a preset image recognition model, and determining, as the first segmented image, a segmented image comprising a target area with a walkable path;
    the image recognition model being obtained by training a preset neural network with a training sample set; the training sample set comprising: at least one training sample and a label corresponding to each training sample; the label indicating whether the corresponding training sample contains a target area with a walkable path.
  4. The method according to claim 1, wherein the target point is the centroid of the target area;
    determining the target point of the target area comprises:
    converting the target area into a binary image;
    determining at least one connected region in the binary image using a connected-component labeling algorithm;
    determining the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
  5. The method according to any one of claims 1 to 3, wherein determining the walking path based on the target point comprises:
    determining the direction from a first position of the cleaning device to the target point as a first walking direction;
    adjusting the straight-line direction of the cleaning device to the first walking direction;
    traveling from the first position to the target point along the first walking direction.
  6. The method according to any one of claims 1 to 3, wherein the method further comprises: acquiring a globally planned path; the globally planned path comprising: at least one sub-path and a cost value corresponding to each sub-path; each sub-path comprising at least: a second walking direction;
    determining the walking path based on the target point comprises:
    determining the direction of the cleaning device from the first position to the target point as a first walking direction;
    determining a local path of the cleaning device from the first position to the target point as a reference path; querying the globally planned path according to the reference path, and determining a reference sub-path corresponding to the reference path;
    determining a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
    adjusting the straight-line direction of the cleaning device to the target walking direction;
    traveling from the first position to the target point along the target walking direction.
  7. The method according to claim 6, wherein determining the target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction comprises:
    acquiring a preset weight set; the weight set comprising at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
    weighting the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
  8. The method according to claim 7, wherein the method further comprises:
    acquiring a preset weight-set table; the weight-set table comprising at least: different weight sets corresponding to different distances;
    determining a reference distance from the first position to an obstacle, and querying the preset weight-set table according to the reference distance to obtain the weight set corresponding to the reference distance.
  9. A walking apparatus, applied to a cleaning device, the apparatus comprising: a first processing module, a second processing module and a third processing module; wherein,
    the first processing module is configured to acquire a target image, and segment the target image to obtain at least one segmented image;
    the second processing module is configured to perform image recognition on the at least one segmented image to determine a first segmented image; the first segmented image comprising at least a target area with a walkable path;
    the third processing module is configured to determine a target point of the target area, and determine a walking path based on the target point.
  10. The apparatus according to claim 9, wherein the first processing module is configured to segment the target image using an image segmentation method to obtain at least one segmented image;
    the image segmentation method is implemented based on a preset segmentation model.
  11. The apparatus according to claim 9, wherein the second processing module is configured to perform image recognition on each segmented image of the at least one segmented image using a preset image recognition model, and determine, as the first segmented image, a segmented image comprising a target area with a walkable path;
    the image recognition model is obtained by training a preset neural network with a training sample set; the training sample set comprises: at least one training sample and a label corresponding to each training sample; the label indicates whether the corresponding training sample contains a target area with a walkable path.
  12. The apparatus according to claim 9, wherein the target point is the centroid of the target area;
    the third processing module is configured to convert the target area into a binary image;
    determine at least one connected region in the binary image using a connected-component labeling algorithm;
    and determine the centroid of the target area from the at least one connected region using a geometric-moment algorithm.
  13. The apparatus according to any one of claims 9 to 12, wherein the third processing module is configured to determine the direction from a first position of the cleaning device to the target point as a first walking direction;
    adjust the straight-line direction of the cleaning device to the first walking direction;
    and travel from the first position to the target point along the first walking direction.
  14. The apparatus according to any one of claims 9 to 12, wherein the third processing module is configured to acquire a globally planned path; the globally planned path comprising: at least one sub-path and a cost value corresponding to each sub-path; each sub-path comprising at least: a second walking direction;
    correspondingly, the third processing module is configured to determine the direction of the cleaning device from the first position to the target point as a first walking direction;
    determine a local path of the cleaning device from the first position to the target point as a reference path; query the globally planned path according to the reference path, and determine a reference sub-path corresponding to the reference path;
    determine a target walking direction according to the reference second walking direction corresponding to the reference sub-path and the first walking direction;
    adjust the straight-line direction of the cleaning device to the target walking direction;
    and travel from the first position to the target point along the target walking direction.
  15. The apparatus according to claim 14, wherein the third processing module is configured to acquire a preset weight set; the weight set comprising at least: a first weight corresponding to the local path and a second weight corresponding to the globally planned path;
    and weight the reference second walking direction and the first walking direction according to the first weight and the second weight to obtain the target walking direction.
  16. A walking apparatus, the apparatus comprising: a processor and a memory configured to store a computer program runnable on the processor; wherein,
    the processor is configured, when running the computer program, to perform the steps of the method according to any one of claims 1 to 8.
  17. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/107607 2021-03-09 2021-07-21 Walking method, apparatus and computer storage medium WO2022188333A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110256084.5 2021-03-09
CN202110256084.5A CN113158779B (zh) 2021-03-09 Walking method, apparatus and computer storage medium

Publications (1)

Publication Number Publication Date
WO2022188333A1 true WO2022188333A1 (zh) 2022-09-15

Family

ID=76886688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107607 WO2022188333A1 (zh) 2021-03-09 2021-07-21 Walking method, apparatus and computer storage medium

Country Status (1)

Country Link
WO (1) WO2022188333A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114147725A (zh) * 2021-12-21 2022-03-08 乐聚(深圳)机器人技术有限公司 Robot zero-point adjustment method, apparatus, device and storage medium
CN117173415A (zh) * 2023-11-03 2023-12-05 南京特沃斯清洁设备有限公司 Visual analysis method and system for large floor scrubbers

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120106828A1 (en) * 2010-11-03 2012-05-03 Samsung Electronics Co., Ltd Mobile robot and simultaneous localization and map building method thereof
CN102789234A (zh) * 2012-08-14 2012-11-21 广东科学中心 Robot navigation method and system based on color-coded markers
CN109746909A (zh) * 2017-11-08 2019-05-14 深圳先进技术研究院 Robot motion control method and device
CN112183476A (zh) * 2020-10-28 2021-01-05 深圳市商汤科技有限公司 Obstacle detection method and apparatus, electronic device, and storage medium
CN112363494A (zh) * 2020-09-24 2021-02-12 深圳优地科技有限公司 Method and device for planning a forward path of a robot, and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114147725A (zh) * 2021-12-21 2022-03-08 乐聚(深圳)机器人技术有限公司 Zero-point adjustment method and apparatus for a robot, device, and storage medium
CN114147725B (zh) * 2021-12-21 2024-04-02 乐聚(深圳)机器人技术有限公司 Zero-point adjustment method and apparatus for a robot, device, and storage medium
CN117173415A (zh) * 2023-11-03 2023-12-05 南京特沃斯清洁设备有限公司 Visual analysis method and system for a large floor scrubber
CN117173415B (zh) * 2023-11-03 2024-01-26 南京特沃斯清洁设备有限公司 Visual analysis method and system for a large floor scrubber

Also Published As

Publication number Publication date
CN113158779A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
CN107145578B (zh) Map construction method, apparatus, device, and system
CN107144285B (zh) Pose information determination method and apparatus, and movable device
JP6882296B2 (ja) Autonomous visual navigation
JP7082545B2 (ja) Information processing method, information processing apparatus, and program
CN110388931 (zh) Method for converting a two-dimensional bounding box of an object into a three-dimensional position for an autonomous vehicle
US20190278273A1 Odometry system and method for tracking traffic lights
CN109215433 (zh) Vision-based driving scenario generator for autonomous driving simulation
US11231283B2 Localization with neural network based image registration of sensor data and map data
CN112740268B (zh) Object detection method and apparatus
US10210411B2 Method and apparatus for establishing feature prediction accuracy
WO2022188333A1 (zh) Walking method and apparatus, and computer storage medium
CN110390240B (zh) Lane post-processing in an autonomous driving vehicle
WO2017008454A1 (zh) Robot positioning method
EP4050449A1 Method and device for robot positioning, smart robot, and storage medium
US11656090B2 Method and system for generating navigation data for a geographical location
CN109521767 (zh) Autonomous navigation robot system
CN109491378 (zh) Road-segment-based route guidance system for autonomous driving vehicles
CN118020038 (zh) Two-wheeled self-balancing robot
CN110069058 (zh) Robot indoor navigation control method
CN112907625B (zh) Target following method and system applied to a quadruped bionic robot
CN114127738 (zh) Automatic mapping and localization
WO2023274270A1 (zh) Preoperative navigation method and system for a surgical robot, storage medium, and computer device
CN109901589B (zh) Mobile robot control method and apparatus
CN113158779B (zh) Walking method and apparatus, and computer storage medium
US11312380B2 Corner negotiation method for autonomous driving vehicles without map and localization
US11312380B2 (en) Corner negotiation method for autonomous driving vehicles without map and localization

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21929802

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 21929802

Country of ref document: EP

Kind code of ref document: A1