CN115562305A - Self-walking equipment movement control method and device and self-walking equipment

Info

Publication number
CN115562305A
Authority
CN
China
Prior art keywords
image
self
interest
attention
region
Legal status
Pending
Application number
CN202211358918.4A
Other languages
Chinese (zh)
Inventor
张海龙
周峰
牛犇
鲍亮
殷凯
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Application filed by Ecovacs Robotics Suzhou Co Ltd
Priority to CN202211358918.4A
Publication of CN115562305A


Classifications

    • G - PHYSICS
      • G05 - CONTROLLING; REGULATING
        • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
          • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
            • G05D 1/02 - Control of position or course in two dimensions
              • G05D 1/021 - specially adapted to land vehicles
                • G05D 1/0231 - using optical position detecting means
                  • G05D 1/0246 - using a video camera in combination with image processing means
                    • G05D 1/0251 - extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
                • G05D 1/0212 - with means for defining a desired trajectory
                  • G05D 1/0214 - in accordance with safety or protection criteria, e.g. avoiding hazardous areas
                  • G05D 1/0221 - involving a learning process

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)

Abstract

The application provides a self-walking device movement control method and apparatus, an electronic device, a computer storage medium, and a self-walking device. The method acquires a region image of a region to be analyzed and a depth image of the region to be analyzed, and detects, based on the region image, whether an object of interest is present. If so, it obtains an object-of-interest image and a depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed; it then judges whether the object of interest is a first object of interest, and if so, determines a mutual avoidance strategy between the self-walking device and the first object of interest. The method can accurately judge whether the object of interest is the first object of interest, so that objects of similar types can be accurately distinguished. When the object of interest is determined to be the first object of interest, a mutual avoidance strategy between the first object of interest and the self-walking device is determined, which keeps the self-walking device from detouring unnecessarily while also safeguarding the first object of interest.

Description

Self-walking equipment movement control method and device and self-walking equipment
Technical Field
The application relates to the field of cleaning equipment, and in particular to a self-walking device movement control method and apparatus, an electronic device, a computer storage medium, and a self-walking device.
Background
With the continuing improvement in quality of life, cleaning robots are increasingly popular. At the same time, their application scenarios keep growing: a cleaning robot may be used in an ordinary home, in an office, or in a mall or supermarket to clean or sweep the environment.
Whichever scene a cleaning robot is used in, it needs to identify obstacles or people in the cleaning environment during its work. Some obstacles, such as electric wires, socks, or small toys, the cleaning robot needs to bypass, so that it is not damaged by sucking them in. Other obstacles, such as walls, table legs, or sofa legs, the cleaning robot needs to touch lightly, so that it can clean the environment or floor around them at close range. For a person, it is generally desirable that the cleaning robot go around, so that it does not knock the person over. The obstacles listed above are easy for a cleaning robot to distinguish; after identifying them, it can apply the corresponding obstacle avoidance strategy, such as bypassing, lightly touching, or passing over the obstacle. However, when a cleaning scene contains obstacles that resemble human figures, the cleaning robot has difficulty distinguishing people from such obstacles and may adopt a wrong obstacle avoidance strategy. How to enable the cleaning robot to accurately identify the type of object in such a scene, and thereby avoid obstacles accurately, has therefore become a technical problem that needs to be solved.
Disclosure of Invention
The application provides a self-walking device movement control method, aiming to solve the technical problem of how to enable a cleaning robot to accurately identify the type of an object and thereby avoid obstacles accurately. The application also provides a self-walking device.
The application provides a self-walking equipment movement control method, which comprises the following steps:
acquiring a region image of a region to be analyzed and a depth image of the region to be analyzed;
performing object detection on the area image, and judging whether the detected object contains an attention object;
if the detected object comprises an attention object, obtaining an attention object image and a depth image of the attention object based on the attention object, the region image and the depth image of the region to be analyzed;
judging whether the attention object is a first attention object or not based on the attention object image and the depth image of the attention object;
and if the object of interest is a first object of interest, determining a mutual avoidance strategy between the self-walking equipment and the first object of interest, wherein the avoidance strategy is used for controlling the self-walking equipment to move.
Optionally, the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
controlling the self-walking device to issue a voice reminder asking the first object of interest to give way;
and if the first object of interest has not given way within a preset time, re-planning the travel path of the self-walking device to avoid the first object of interest.
Optionally, the method further includes: and if the object of interest is a second object of interest, determining a movement control strategy for the self-walking device to avoid the second object of interest.
Optionally, the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
and judging whether the first object of interest is on the path traveled by the self-walking device, and if so, determining a mutual avoidance strategy between the self-walking device and the first object of interest.
Optionally, the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
and judging whether the distance between the first object of interest and the self-walking device is smaller than a preset first distance threshold, and if so, determining a mutual avoidance strategy between the self-walking device and the first object of interest.
Optionally, the performing object detection on the region image, and determining whether the detected object includes an attention object, includes:
obtaining an object detection result of the region image by using the region image as input data of an image object detection model, wherein the image object detection model is a model for obtaining the object detection result of the image according to the image;
and judging whether the detected object contains the attention object or not based on the object detection result of the area image.
Optionally, the obtaining the image of the attention object and the depth image of the attention object based on the attention object, the region image, and the depth image of the region to be analyzed includes:
according to the attention object, the region image is cut to obtain an attention object image;
and screening the depth image of the attention object from the depth image of the region to be analyzed according to the attention object.
Optionally, the depth image of the region to be analyzed is obtained by using a depth sensor mounted on the self-walking device.
Optionally, the determining, based on the attention object image and the depth image of the attention object, whether the attention object is a first attention object includes:
taking the attention object image and the depth image of the attention object as input data of a target convolution neural network model to obtain attribute feature information of the attention object, wherein the target convolution neural network model is a model used for obtaining object attribute feature information of an image according to the image;
and judging whether the attention object is a first attention object or not according to the attribute feature information of the attention object.
Optionally, the determining a movement control strategy for the self-walking device to avoid the second object of interest includes:
replanning the traveling path of the self-traveling equipment, and determining a replanning path; the re-planned path is used for controlling the self-walking equipment to move.
Optionally, the avoidance strategy includes at least one of the following:
issuing a voice reminder to the first object of interest to give way;
controlling the self-walking device to decelerate or stop moving;
changing the travel path of the self-walking device.
The application provides a self-walking device movement control apparatus, comprising:
an original image acquisition unit, configured to acquire a region image of a region to be analyzed and a depth image of the region to be analyzed;
a first judgment unit, configured to perform object detection on the region image, and judge whether the detected object includes an attention object;
a focused image obtaining unit, configured to obtain a focused object image and a depth image of the focused object based on the focused object, the region image, and the depth image of the region to be analyzed if the detected object includes the focused object;
a second determination unit configured to determine whether the object of interest is a first object of interest based on the object of interest image and a depth image of the object of interest;
a mutual avoidance strategy determining unit, configured to determine a mutual avoidance strategy between the self-walking device and the first object of interest if the object of interest is the first object of interest, where the avoidance strategy is used to control the self-walking device to move.
The application provides an electronic device, including:
a processor;
a memory for storing a computer program that is executed by the processor to perform the self-propelled device movement control method.
A computer storage medium stores a computer program that is executed by a processor to perform a self-propelled device movement control method.
The application provides a self-walking device, comprising: an image acquisition device, a depth sensor, and a processor;
the image acquisition equipment is used for acquiring a region image of a region to be analyzed;
the depth sensor is used for acquiring a depth image of a region to be analyzed;
the processor is used for receiving the area image of the area to be analyzed transmitted by the image acquisition equipment and the depth image of the area to be analyzed transmitted by the depth sensor, detecting an object of the area image and judging whether the detected object contains an attention object; if the detected object comprises an attention object, obtaining an attention object image and a depth image of the attention object based on the attention object, the area image and the depth image of the area to be analyzed; judging whether the attention object is a first attention object or not based on the attention object image and the depth image of the attention object; and if the object of interest is a first object of interest, determining a mutual avoidance strategy between the self-walking equipment and the first object of interest, wherein the avoidance strategy is used for controlling the self-walking equipment to move.
Compared with the prior art, the embodiment of the application has the following advantages:
the application provides a self-walking equipment movement control method, which comprises the steps of firstly obtaining a region image of a region to be analyzed and a depth image of the region to be analyzed, detecting whether the region to be analyzed contains a focus object or not based on the region image of the region to be analyzed, and if the region to be analyzed contains the focus object, obtaining a focus object image and the depth image of the focus object based on the focus object, the region image and the depth image of the region to be analyzed; judging whether the attention object is a first attention object or not based on the attention object image and the depth image of the attention object; and if the attention object is the first attention object, determining a mutual avoidance strategy between the self-walking equipment and the first attention object, wherein the avoidance strategy is used for controlling the self-walking equipment to move. In fact, after the region image detects that the region to be analyzed contains the attention object, the method obtains the attention object image and the depth image of the attention object based on the attention object, the region image and the depth image of the region to be analyzed, and can accurately judge whether the attention object is the first attention object based on the depth image of the attention object, so that objects of similar types in the attention object can be accurately distinguished, and when the attention object is determined to be the first attention object, a mutual avoidance strategy between the first attention object and the self-walking device is determined. And then avoid the extravagant energy of self-propelled equipment detour, guaranteed that self-propelled equipment can not collide with first concern object each other, and then guaranteed the safety of first concern object.
In a further technical scheme, the first object of interest and the self-walking device can avoid each other: when the first object of interest gives way to the self-walking device, the device avoids a detour; when the first object of interest does not give way, the self-walking device chooses to avoid the first object of interest, so that they do not collide and the safety of the first object of interest is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a method for controlling movement of a self-propelled device according to a first embodiment of the present application;
fig. 2 is a schematic diagram illustrating a first embodiment of the present application, which is used for detecting whether a region to be analyzed includes an object of interest based on a region image;
fig. 3 is a schematic diagram illustrating a first embodiment of determining whether an object of interest is a first object of interest based on an image of the object of interest and a depth image of the object of interest;
FIG. 4 is a schematic view of a self-propelled device movement control apparatus provided in a second embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device provided in a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways different from those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The following description is given of a method and an apparatus for controlling movement of a self-propelled device, an electronic device, a computer storage medium, and a self-propelled device by using specific embodiments. It should be noted that the embodiments described below are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First embodiment
A first embodiment of the present application provides a method for controlling movement of a self-propelled device, which is described below with reference to fig. 1. Please refer to fig. 1, which is a flowchart illustrating a method for controlling movement of a self-propelled device according to a first embodiment of the present disclosure.
The self-walking equipment movement control method comprises the following steps.
Step S101: and acquiring a region image of the region to be analyzed and a depth image of the region to be analyzed.
In the present embodiment, the self-walking device movement control method is applied to avoiding various obstacles while the self-walking device walks or moves. The self-walking device may be a self-walking robot in the field of cleaning equipment, and a cleaning robot is taken as the example in this embodiment. It is understood that the self-walking device may also be a device capable of autonomous movement applied in other fields, which this application likewise intends to protect.
In this embodiment, the self-walking device is provided with an image acquisition device and a depth sensor, the image acquisition device can acquire the area image of the area to be analyzed in real time, and the depth sensor can acquire the depth image of the area to be analyzed in real time. As an example, the region image of the region to be analyzed may be an RGB image, and the depth sensor may include an RGBD camera. In practice, the area image of the area to be analyzed and the depth image of the area to be analyzed may also be acquired by using an RGBD camera, that is, the RGBD camera is used as an example of the image acquisition device and the depth sensor at the same time.
The region to be analyzed may be an area around the working or walking environment of the self-walking device, for example a circular area centered on the self-walking device with a preset radius. The region image and the depth image of the region to be analyzed are obtained so that different types of objects in the region can be identified from them, allowing the self-walking device to avoid obstacles while moving and ensuring the safety of the device itself as well as of other objects or pedestrians.
In general, the region image of the region to be analyzed acquired in this step may be a two-dimensional image, and may be an RGB image, for example. Specifically, when the image acquisition apparatus mounted on the self-walking apparatus is a camera for taking a two-dimensional image, the two-dimensional image may be acquired with the camera.
Various objects in the to-be-analyzed region can be preliminarily identified through the region image, and in order to further accurately identify similar objects or pedestrians in the to-be-analyzed region, the depth information of the similar objects or pedestrians is acquired through the depth image of the to-be-analyzed region so as to distinguish the similar objects or pedestrians.
Specifically, the depth image of the region to be analyzed may be obtained by a depth sensor installed on the self-walking device. In this embodiment the depth sensor is a depth camera mounted on the device; the depth camera photographs, in real time, possible objects in the region to be analyzed within the working or walking environment, thereby obtaining a depth image of those objects and, in turn, their depth information.
More specifically, the depth camera may be installed at the front of the cleaning robot, and may be a TOF, structured-light, or RGBD depth camera. In other words, any camera capable of acquiring the depth information of objects around the cleaning robot's working environment relative to the robot, such as the distance between an object and the robot or the object's concrete stereo-space information, can capture the depth image. That is, the distance between the cleaning robot and an obstacle can be obtained from the depth image, as can the obstacle's three-dimensional structure information (such as height, width, length, or thickness).
As an example, in the present embodiment, mainly an RGBD image is explained as the depth image.
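To make the data relationships above concrete, the following is a minimal sketch, not part of the patent, of the pixel-aligned region image and depth image, together with a helper showing how a distance such as the one used later for obstacle avoidance could be read out of a depth region. The frame shapes and the helper function are illustrative assumptions.

```python
import numpy as np

# Pixel-aligned pair: depth_m[v, u] is the distance in meters of the
# scene point imaged at region_rgb[v, u] (assumed resolution 640x480).
H, W = 480, 640
region_rgb = np.zeros((H, W, 3), dtype=np.uint8)   # region image of the region to be analyzed
depth_m = np.zeros((H, W), dtype=np.float32)       # depth image of the region to be analyzed

def roi_distance(depth, box):
    """Estimate the distance to an object as the median of the valid depth
    pixels inside its bounding box (x1, y1, x2, y2); depth sensors
    typically report 0 for missing returns."""
    x1, y1, x2, y2 = box
    roi = depth[y1:y2, x1:x2]
    valid = roi[roi > 0]
    return float(np.median(valid)) if valid.size else float("inf")
```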
Step S102: and performing object detection on the area image, and judging whether the detected object contains the attention object.
In fact, the working environment of the cleaning robot may contain many kinds of obstacles; for example, in a mall there may be many pedestrians or objects. This embodiment mainly distinguishes pedestrians from objects that resemble pedestrians, and then determines the obstacle avoidance strategy for the self-walking device based on the recognition result. Whether a pedestrian or another object, anything detected in the region image may be regarded as an object of interest in the region to be analyzed.
In the present embodiment, a pedestrian and an object similar to the pedestrian in the area to be analyzed, such as a human-shaped signboard or a large human-shaped doll, are taken as the objects of interest. Of course, it is understood that other objects with similar appearances may also be taken as the objects of interest to accurately identify the objects with similar appearances.
In this embodiment, performing object detection on the region image and judging whether the detected objects include an object of interest may proceed as follows: first, the region image is used as input data of an image object detection model to obtain an object detection result of the region image, the image object detection model being a model for obtaining the object detection result of an image from the image; then, whether the detected objects include an object of interest is judged based on the object detection result of the region image.
Specifically, the image object detection model is a trained deep learning network model, which can identify each object in the image according to the image, and may be, for example, a YOLO (you only look once) network. By inputting an image to the image object detection model, all objects in the image can be obtained.
As one way to obtain the image object detection model, an initial deep learning network model may be trained, and specifically, the process of training the initial deep learning network model is as follows.
The method comprises the steps of firstly obtaining a first image sample used for training an initial deep learning network model, and then training the initial deep learning network model by adopting the first image sample until the obtained trained deep learning network model can accurately identify an object in an image.
Specifically, the first image samples may be images of a mall scene. When training the initial deep learning network model, the first image samples are used as input data and the object information corresponding to each sample as the expected output, and the parameters of the model are adjusted accordingly. By training with a large number of first image samples and continuously adjusting the parameters, the image object detection model is finally obtained.
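As an illustration of the training procedure just described (samples in, annotations as the expected output, parameters adjusted iteratively), here is a generic supervised-training sketch in PyTorch. The loss, dataset, and hyperparameters are placeholders rather than the patent's actual setup; a real detection network would use a detection-specific loss.

```python
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3):
    """Generic supervised training: feed samples forward, compare
    predictions with labels, and adjust parameters until accurate."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()   # stand-in for a real detection loss
    model.train()
    for _ in range(epochs):
        for images, labels in loader:   # "first image samples" + annotations
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()             # continuously adjust model parameters
            optimizer.step()
```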
After the image object detection model is obtained, it can be used to recognize the objects in the region image and produce an object detection result. For example, if the region image captured by the cleaning robot contains the human-shaped signboard of person A, pedestrian B, and a box, the object detection result will include all three.
Then, whether the detected objects include an object of interest is judged based on the object detection result of the region image. For example, if objects with a humanoid structure are set as objects of interest, then, since the human-shaped signboard of person A and pedestrian B both have a humanoid structure, the detected objects are judged to include objects of interest, namely: the human-shaped signboard of person A and pedestrian B may be the objects of interest.
In the present embodiment, since the human-shaped signboard of person A and pedestrian B both contain a humanoid structure, the object detection result alone cannot accurately tell whether these objects are real persons or "fake" persons, and further distinction is required.
Since this embodiment mainly aims to recognize objects with similar shapes and structures, objects with similar shapes can be taken as the objects of interest so as to reduce the recognition workload and improve efficiency; other objects are not processed in the fine-recognition step that follows. In a mall scene, objects containing a humanoid structure can directly be taken as objects of interest.
For the process of performing object detection on the region image and judging whether the detected objects include an object of interest, refer to fig. 2, a schematic diagram of detecting whether the region to be analyzed contains an object of interest based on the region image in the first embodiment of the present application.
For example, two region images, image 1 (an image of pedestrian B) and image 2 (an image of the human-shaped signboard of person A), are input into the YOLO network for humanoid-structure detection. When both image 1 and image 2 are detected to contain a humanoid structure, the humanoid part of each image (i.e., the object of interest) is recognized and marked. The humanoid parts are marked in order to facilitate obtaining the object-of-interest image based on the marked object of interest in the subsequent steps.
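A hedged sketch of this detection step follows, using the Ultralytics YOLO package as one plausible implementation of "a YOLO network"; the weights file, the class set, and the function name are assumptions for illustration, since the patent only specifies that a YOLO-style detector is used.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # placeholder weights; in practice a model
                                # trained on mall-scene samples would be used
HUMANOID_CLASSES = {"person"}   # classes treated as "humanoid structure"

def detect_objects_of_interest(region_rgb):
    """Return bounding boxes of detected human-shaped structures
    (the marked objects of interest) in the region image."""
    result = model(region_rgb)[0]
    boxes = []
    for box in result.boxes:
        if result.names[int(box.cls)] in HUMANOID_CLASSES:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            boxes.append((x1, y1, x2, y2))
    return boxes
```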
Step S103: and if the detected object comprises the attention object, obtaining an attention object image and a depth image of the attention object based on the attention object, the region image and the depth image of the region to be analyzed.
After the object of interest is detected based on the region image, the object-of-interest image and the depth image of the object of interest are obtained based on the object of interest, the region image, and the depth image of the region to be analyzed. They are obtained so that different objects among the objects of interest can be further and accurately identified, for example distinguishing real persons from "fake" persons. It should be noted that, in this embodiment, the object-of-interest image and the depth image of the object of interest have a pixel-wise correspondence: for each pixel in the object-of-interest image, its depth value can be found in the depth image of the object of interest. Of course, the region image of the region to be analyzed and the depth image of the region to be analyzed also have this correspondence.
Specifically, obtaining the object-of-interest image and the depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed may proceed as follows: first, crop the region image according to the object of interest to obtain the object-of-interest image; at the same time, screen the depth image of the object of interest out of the depth image of the region to be analyzed according to the object of interest.
For the object-of-interest images obtained after cropping image 1 and image 2, please refer to fig. 3, a schematic diagram of judging whether the object of interest is the first object of interest based on the object-of-interest image and the depth image of the object of interest in the first embodiment of the present application; fig. 3 also includes the depth images of the objects of interest.
Meanwhile, the depth image of the object of interest is screened out of the depth image of the region to be analyzed according to the object of interest. Once the object of interest is determined, a depth image containing only that object, for example only pedestrian B, or only the human-shaped signboard of person A, can be screened from the depth image of the region to be analyzed. In practice, this screening can also be implemented by cropping the depth image of the region to be analyzed.
In fact, as mentioned above, the region image of the region to be analyzed corresponds to the depth image of the region to be analyzed, and the object-of-interest image likewise corresponds to the depth image of the object of interest. Naturally, then, the depth image corresponding to the object-of-interest image can be screened out of the depth image of the region to be analyzed based on the obtained object-of-interest image.
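Because of the pixel-wise correspondence described above, the cropping and screening step can be as simple as indexing both images with the same bounding box. The sketch below, an assumption about one plausible realization, reuses the aligned arrays and the detector boxes from the earlier sketches.

```python
def crop_object_of_interest(region_rgb, depth_m, box):
    """Crop the object-of-interest image out of the region image and screen
    the matching pixels out of the region depth image; the same box indexes
    both because the two images are pixel-aligned."""
    x1, y1, x2, y2 = box
    obj_rgb = region_rgb[y1:y2, x1:x2]    # object-of-interest image
    obj_depth = depth_m[y1:y2, x1:x2]     # depth image of the object of interest
    return obj_rgb, obj_depth
```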
Step S104: based on the attention object image and the depth image of the attention object, it is determined whether the attention object is a first attention object.
After obtaining the attention object image and the depth image of the attention object, it may be determined whether the attention object is the first attention object based on the attention object image and the depth image of the attention object.
Specifically, as an embodiment of determining whether or not the object of interest is the first object of interest based on the object of interest image and the depth image of the object of interest: firstly, taking an image of an attention object and a depth image of the attention object as input data of a target convolution neural network model to obtain attribute characteristic information of the attention object, wherein the target convolution neural network model is a model for obtaining object attribute characteristic information of the image according to the image; and then, judging whether the attention object is a first attention object according to the attribute feature information of the attention object.
The target convolutional neural network model is a trained feature extraction model that can extract the attribute features of objects in an image, for example the VGG16 network model, a relatively deep convolutional neural network. The target convolutional neural network model is obtained by training an initial convolutional neural network model; the training process follows the same principle as that of the image object detection model described above.
As one way to obtain the target convolutional neural network model, an initial convolutional neural network model may be trained, and specifically, the process of training the initial convolutional neural network model is as follows.
Firstly, a second image sample used for training the initial convolutional neural network model is obtained, then the initial convolutional neural network model is trained by adopting the second image sample until the obtained trained convolutional neural network model can accurately extract the attribute characteristics of the object in the image.
Specifically, the second image samples may be mall-scene images together with their corresponding depth images, with the attribute features of the objects in each sample known in advance. When training the initial convolutional neural network model, the second image samples are used as input data and the attribute features of the corresponding objects as the expected output, and the parameters of the model are adjusted accordingly. By training with a large number of second image samples and continuously adjusting the parameters, the target convolutional neural network model is finally obtained.
In this embodiment, the attribute feature information of the object of interest is obtained from both the object-of-interest image and its depth image because the attribute features of different kinds of objects of interest differ significantly in the depth image. Whether the object of interest is the first object of interest can then be judged from this attribute feature information.
For example, a pedestrian has a distinct human-body contour and 3D structure, whereas a human-shaped signboard is in most cases a planar structure without them; some signboards even have a fairly regular circular or square shape whose edges are not cut to follow the contour of a human body. Such human-body contour information and 3D structure information are examples of attribute feature information, and both can be obtained from the depth image of the object of interest.
After obtaining the attribute feature information of the attention object, it is determined whether the attention object is the first attention object according to the attribute feature information of the attention object. In the present embodiment, as an example of the first object of interest, a pedestrian is taken as the first object of interest.
Specifically, to understand how to judge whether the object of interest is the first object of interest based on the object-of-interest image and the depth image of the object of interest, refer to fig. 3. A cropped image (either the image of pedestrian B together with its depth image, or the image of the human-shaped signboard of person A together with its depth image) is input into the VGG16 network model to obtain its attribute feature information, and classification is performed on that basis, i.e., a real person is distinguished from a "fake" person. Among the cropped images, the depth image of a pedestrian differs greatly from that of a human-shaped signboard, so the two can be accurately identified and distinguished based on the depth image.
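One plausible realization of this classification step, sketched below, adapts a torchvision VGG16 to accept a 4-channel RGB-D input (the object-of-interest image stacked with its depth image) and a two-class head (real person vs. "fake" person). The 4-channel stacking and the class layout are our assumptions, not details confirmed by the patent.

```python
import torch
from torch import nn
from torchvision.models import vgg16

# VGG16 backbone adapted for RGB-D input and binary classification.
model = vgg16(weights=None)
model.features[0] = nn.Conv2d(4, 64, kernel_size=3, padding=1)   # 3 RGB + 1 depth channel
model.classifier[6] = nn.Linear(4096, 2)                         # pedestrian vs. signboard

def is_first_object_of_interest(obj_rgb, obj_depth) -> bool:
    """Stack the object-of-interest image with its depth image and classify."""
    rgb = torch.from_numpy(obj_rgb).permute(2, 0, 1).float() / 255.0
    depth = torch.from_numpy(obj_depth).unsqueeze(0).float()
    x = torch.cat([rgb, depth], dim=0).unsqueeze(0)               # shape (1, 4, H, W)
    x = nn.functional.interpolate(x, size=(224, 224), mode="bilinear")  # VGG16 input size
    with torch.no_grad():
        logits = model(x)
    return bool(logits.argmax(dim=1).item() == 0)                 # class 0 = real person (assumed)
```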
Step S105: and if the attention object is the first attention object, determining a mutual avoidance strategy between the self-walking equipment and the first attention object, wherein the avoidance strategy is used for controlling the self-walking equipment to move.
In this embodiment, when the object of interest is determined to be the first object of interest, a mutual avoidance strategy between the self-propelled device and the first object of interest is determined.
In this embodiment, the first object of interest may be a pedestrian, though it could in fact be any other intelligent agent capable of actively avoiding the self-walking device. The description here mainly takes a pedestrian as the first object of interest. It should be understood that "pedestrian" in this embodiment does not refer only to a person who happens to be walking, but to any person capable of walking autonomously and therefore of giving way to the self-walking device.
Since the first object of interest can actively avoid the self-walking device, after the object of interest is determined to be the first object of interest, the mutual avoidance strategy between them may be as follows: first, the self-walking device is controlled to issue a voice reminder asking the first object of interest to give way; then, if the first object of interest has not given way within a preset time, the travel path of the self-walking device is re-planned to avoid the first object of interest, and the re-planned path is used to control the movement of the device.
Reminding the first object of interest by voice is therefore preferred. This strategy keeps the self-walking device from making extra detours during cleaning, avoids wasting battery on repeatedly cleaning some areas while leaving the area occupied by the first object of interest uncleaned, and, most importantly, protects both pedestrians and the device. For example, in a mall there may be pedestrians walking about or children running around; to keep the self-walking device from colliding with them, it can be controlled, once within a certain distance, to actively prompt them by voice to give way or to mind their safety.
Of course, if the first object of interest does not actively give way within the preset time, for example a pedestrian sitting in one spot, or a child who stays fascinated by the self-walking device for a while, the travel path of the device may be changed to maintain cleaning efficiency, for example by re-planning a path around the first object of interest.
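The mutual avoidance strategy just described can be sketched as a small control routine: voice reminder first, then path re-planning only if the pedestrian has not given way within the preset time. The robot interface (speak, on_travel_path, replan_path) is a hypothetical API used purely for illustration, as is the timeout value.

```python
import time

AVOID_TIMEOUT_S = 5.0   # "preset time" (assumed value)

def mutually_avoid_first_object(robot, obj_id):
    """Remind the first object of interest by voice; if it has not given
    way when the preset time expires, re-plan a path around it."""
    robot.speak("Please mind the cleaning robot.")        # voice reminder to give way
    deadline = time.monotonic() + AVOID_TIMEOUT_S
    while time.monotonic() < deadline:
        if not robot.on_travel_path(obj_id):              # pedestrian moved aside
            return
        time.sleep(0.1)
    robot.replan_path(avoid=obj_id)                       # pedestrian did not avoid in time
```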
In this embodiment, if the object of interest is judged to be a second object of interest, a movement control strategy for the self-walking device to avoid the second object of interest is determined. The second object of interest refers to an object that will not actively avoid the self-walking device, for example the above-mentioned human-shaped signboard of person A.
One way to determine the movement control strategy for avoiding the second object of interest is to re-plan the travel path of the self-walking device and determine the re-planned path, which is then used to control the movement of the device. For example, when the second object of interest is a human-shaped signboard, a new path can be planned so that the self-walking device goes around the signboard instead of knocking it over. If the object of interest is a second object of interest, the travel path may be re-planned when the distance between the self-walking device and the second object of interest is smaller than a preset second distance threshold; the second distance threshold may be a predetermined reasonable value, such as half a meter.
Of course, when other objects are identified in the region to be analyzed, the device may instead choose to touch them lightly while cleaning; for example, when a table is in front of the self-walking device, the device can be controlled to lightly touch the table legs so as to clean the surrounding floor thoroughly.
Of course, it may be understood that, when determining the mutual avoidance policy between the self-walking device and the first object of interest, it may also be determined in advance whether the first object of interest is on a path traveled by the self-walking device, and if so, the mutual avoidance policy between the self-walking device and the first object of interest is determined.
In addition, when the mutual avoidance strategy between the self-walking equipment and the first concerned object is determined, whether the distance between the first concerned object and the self-walking equipment is smaller than a preset first distance threshold value or not can be judged, and if yes, the mutual avoidance strategy between the self-walking equipment and the first concerned object is determined. In order to ensure that the self-propelled device does not collide with the pedestrian, the first distance threshold may be a predetermined reasonable value, such as one meter.
Generally speaking, when a pedestrian is confirmed to be on the travel path of the self-walking device, the pedestrian can be reminded by voice, while still relatively far away, to give way or to mind their safety; when an object is confirmed to be on the travel path, the travel path can be re-planned once the device is relatively close.
For example, when pedestrian B is in the travel direction of the self-walking device and at a certain distance directly ahead, a voice reminder to give way can be issued to pedestrian B so that the device does not bump into the pedestrian. The reminder is issued while the pedestrian is still some distance away in order to attract the pedestrian's attention; if the distance between the pedestrian and the device were greater than the preset first distance threshold, the pedestrian might be too far away, and a voice reminder might merely disturb them.
For another example, when child C is in the travel direction of the self-walking device and at a certain distance directly ahead, child C can be reminded by voice to pay attention, so that the device does not bump into the child.
Meanwhile, when the distance between the first object of interest and the self-walking device is smaller than the preset first distance threshold, the self-walking device can be controlled to decelerate or to stop moving.
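Combining the threshold rules above into one decision routine might look like the following sketch. The threshold values come from the examples in the text (one meter, half a meter), the routine reuses the avoidance function from the previous sketch, and the robot/object API remains the same hypothetical one.

```python
FIRST_DISTANCE_THRESHOLD_M = 1.0    # example value for a first object of interest (pedestrian)
SECOND_DISTANCE_THRESHOLD_M = 0.5   # example value for a second object of interest (e.g., signboard)

def movement_decision(robot, obj, distance_m):
    """Slow down and mutually avoid a nearby first object of interest;
    silently re-plan around a nearby second object of interest."""
    if obj.is_first and distance_m < FIRST_DISTANCE_THRESHOLD_M:
        robot.slow_down()                          # or robot.stop()
        mutually_avoid_first_object(robot, obj.id)
    elif not obj.is_first and distance_m < SECOND_DISTANCE_THRESHOLD_M:
        robot.replan_path(avoid=obj.id)            # bypass, e.g., a human-shaped signboard
```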
The application provides a self-walking device movement control method. The method first acquires a region image of a region to be analyzed and a depth image of the region to be analyzed, and detects whether the region to be analyzed contains an object of interest based on the region image. If it does, an object-of-interest image and a depth image of the object of interest are obtained based on the object of interest, the region image, and the depth image of the region to be analyzed; whether the object of interest is a first object of interest is then judged based on the object-of-interest image and the depth image of the object of interest; and if it is, a mutual avoidance strategy between the self-walking device and the first object of interest is determined, the avoidance strategy being used to control the movement of the device. In practice, after the object of interest is detected in the region image, the method obtains the object-of-interest image and its depth image, and the depth image allows an accurate judgment of whether the object of interest is the first object of interest, so that objects of relatively similar types can be accurately distinguished. When the object of interest is determined to be the first object of interest, a mutual avoidance strategy between the first object of interest and the self-walking device is determined. This avoids wasting energy on detours, ensures that the self-walking device and the first object of interest do not collide, and thereby safeguards the first object of interest.
Second embodiment
Corresponding to the method for controlling movement of the self-walking device provided in the first embodiment of the present application, a second embodiment of the present application also provides a device for controlling movement of the self-walking device. Since the apparatus embodiment is substantially similar to the first embodiment, the description is relatively simple, and reference may be made to the partial description of the first embodiment for relevant points. The device embodiments described below are merely illustrative.
Please refer to fig. 4, which is a schematic diagram of a movement control apparatus for a self-propelled device according to a second embodiment of the present application.
The self-propelled device movement control apparatus 400 includes:
an original image obtaining unit 401, configured to obtain a region image of a region to be analyzed and a depth image of the region to be analyzed;
a first determination unit 402, configured to perform object detection on the region image and determine whether the detected objects include an object of interest;
a focused image obtaining unit 403, configured to obtain a focused object image and a depth image of the focused object based on the focused object, the region image, and the depth image of the region to be analyzed if the detected object includes the focused object;
a second determination unit 404, configured to determine whether the object of interest is a first object of interest based on the object of interest image and the depth image of the object of interest;
a mutual avoidance strategy determining unit 405, configured to determine a mutual avoidance strategy between the self-traveling device and the first object of interest if the object of interest is the first object of interest, where the avoidance strategy is used to control the self-traveling device to move.
Optionally, the mutual avoidance strategy determining unit is specifically configured to:
controlling the self-walking device to issue a voice reminder asking the first object of interest to give way;
and if the first object of interest has not given way within a preset time, re-planning the travel path of the self-walking device to avoid the first object of interest.
Optionally, the apparatus further includes: a movement control strategy determining unit, configured to determine a movement control strategy for the self-walking device to avoid the second object of interest if the object of interest is a second object of interest.
Optionally, the mutual avoidance strategy determining unit is specifically configured to:
and judging whether the first object of interest is on a path traveled by the self-walking equipment, and if so, determining a mutual avoidance strategy between the self-walking equipment and the first object of interest.
Optionally, the mutual avoidance strategy determining unit is specifically configured to:
and judging whether the distance between the first object of interest and the self-walking device is smaller than a preset first distance threshold, and if so, determining a mutual avoidance strategy between the self-walking device and the first object of interest.
Optionally, the first determining unit is specifically configured to:
obtaining an object detection result of the region image by using the region image as input data of an image object detection model, wherein the image object detection model is a model for obtaining the object detection result of the image according to the image;
and judging whether the detected object contains the attention object or not based on the object detection result of the area image.
Optionally, the focused image obtaining unit is specifically configured to:
according to the attention object, the region image is cut to obtain an attention object image;
and screening the depth image of the attention object from the depth image of the region to be analyzed according to the attention object.
Optionally, the depth image of the region to be analyzed is acquired by using a depth sensor mounted on the self-walking device.
Optionally, the second determining unit is specifically configured to:
taking the attention object image and the depth image of the attention object as input data of a target convolution neural network model to obtain attribute feature information of the attention object, wherein the target convolution neural network model is a model used for obtaining object attribute feature information of an image according to the image;
and judging whether the attention object is a first attention object or not according to the attribute feature information of the attention object.
Optionally, the mobile control policy determining unit is specifically configured to:
replanning the traveling path of the self-traveling equipment, and determining a replanning path; the re-planned path is used for controlling the self-walking equipment to move.
Optionally, the avoidance strategy includes at least one of the following:
issuing a voice reminder to the first object of interest to give way;
controlling the self-walking device to decelerate or stop moving;
changing the travel path of the self-walking device.
Third embodiment
Corresponding to the method of the first embodiment of the present application, a third embodiment of the present application further provides an electronic device.
As shown in fig. 5, fig. 5 is a schematic view of an electronic device provided in a third embodiment of the present application.
In this embodiment, an alternative hardware structure of the electronic device 500 may be as shown in fig. 5, including: at least one processor 501, at least one memory 502, and at least one communication bus 505; the memory 502 contains a program 503 and data 504.
The bus 505 may be a communication device that transfers data between components within the electronic device 500, such as an internal bus (e.g., a bus between the CPU (Central Processing Unit) and the memory) or an external bus (e.g., a Universal Serial Bus port or a PCI Express port).
In addition, the electronic device further includes: at least one network interface 506 and at least one peripheral interface 507. The network interface 506 provides wired or wireless communication with an external network 508 (e.g., the Internet, an intranet, a local area network, a mobile communication network, etc.); in some embodiments, the network interface 506 may include any number of network interface controllers (NICs), radio frequency (RF) modules, repeaters, transceivers, modems, routers, gateways, wired network adapters, wireless network adapters, Bluetooth adapters, infrared adapters, near field communication (NFC) adapters, cellular network chips, or any combination thereof.
The peripheral interface 507 is used to connect peripherals, such as peripheral 1 (509 in fig. 5), peripheral 2 (510 in fig. 5), and peripheral 3 (511 in fig. 5). Peripherals may include, but are not limited to, cursor control devices (e.g., a mouse, touchpad, or touch screen), keyboards, displays (e.g., cathode-ray-tube, liquid-crystal, or light-emitting-diode displays), video input devices (e.g., a camera or an input interface communicatively coupled to a video archive), and so on.
The processor 501 may be a CPU, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 502 may comprise high-speed random access memory (RAM), and may further comprise non-volatile memory, such as at least one disk memory.
The processor 501 calls the program and data stored in the memory 502 to execute the method provided in the first embodiment of the present application.
Fourth embodiment
In correspondence with the method of the first embodiment of the present application, a fourth embodiment of the present application also provides a computer storage medium storing a computer program to be executed by a processor to perform the method provided by the first embodiment of the present application.
Fifth embodiment
In correspondence with the first embodiment, a fifth embodiment of the present application provides a self-walking device; for details not described here, reference may be made to the relevant description of the first embodiment.
The self-walking device includes: an image acquisition device, a depth sensor, and a processor;
the image acquisition equipment is used for acquiring a region image of a region to be analyzed;
the depth sensor is used for acquiring a depth image of a region to be analyzed;
the processor is configured to receive the region image of the region to be analyzed transmitted by the image acquisition device and the depth image of the region to be analyzed transmitted by the depth sensor, perform object detection on the region image, and judge whether the detected objects include an object of interest; if the detected objects include an object of interest, obtain an image of the object of interest and a depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed; judge whether the object of interest is a first object of interest based on the image of the object of interest and the depth image of the object of interest; and if the object of interest is a first object of interest, determine a mutual avoidance strategy between the self-walking device and the first object of interest, wherein the avoidance strategy is used for controlling the self-walking device to move.
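Reading the fifth embodiment as a per-frame control loop, one possible skeleton is sketched below; the Detection record and the four injected callables stand in for the detector, the target convolutional neural network, and the two avoidance behaviours, all of which are assumptions of this sketch rather than elements of the disclosure.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    bbox: tuple                    # (x, y, w, h), as in the cropping sketch above
    is_object_of_interest: bool

def control_step(rgb_frame, depth_frame,
                 detect: Callable, classify_first: Callable,
                 avoid_mutually: Callable, replan_around: Callable) -> None:
    """One iteration of the movement-control pipeline; the four callables
    are hypothetical stand-ins for the detector, the attribute CNN, and
    the two avoidance behaviours."""
    for det in detect(rgb_frame):                # object detection
        if not det.is_object_of_interest:
            continue
        x, y, w, h = det.bbox                    # crop + screen
        obj_rgb = rgb_frame[y:y + h, x:x + w]
        obj_depth = depth_frame[y:y + h, x:x + w]
        if classify_first(obj_rgb, obj_depth):   # e.g. a pedestrian
            avoid_mutually(det)                  # voice, slow down, replan
        else:
            replan_around(det)                   # e.g. a static standee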
Application scenario 1
When the cleaning robot cleans the floor in a shopping mall, the RGBD camera mounted at its front captures the surrounding cleaning environment in real time while the robot cleans, yielding a two-dimensional image and a depth image of the cleaning environment. Based on these two images, the processor inside the cleaning robot determines that a pedestrian 0.75 meters ahead in the cleaning direction is moving toward the robot. The processor therefore controls the cleaning robot to decelerate while advancing and controls the voice interaction device on the robot to issue a voice reminder, so that the pedestrian notices the cleaning robot and actively avoids it.
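Scenario 1 reduces to a distance-triggered rule; the sketch below restates it with hypothetical control callables (slow_down, speak) and uses the 0.75-meter reading from the scenario as the trigger distance.

PEDESTRIAN_ALERT_DISTANCE_M = 0.75  # the distance reported in this scenario

def react_to_pedestrian(distance_m, moving_toward_robot, slow_down, speak):
    """Scenario 1 as a rule: when a pedestrian inside the alert distance is
    moving toward the robot, decelerate and issue a voice reminder.
    slow_down() and speak(text) are hypothetical robot controls."""
    if moving_toward_robot and distance_m <= PEDESTRIAN_ALERT_DISTANCE_M:
        slow_down()                               # decelerate, keep cleaning
        speak("Please mind the cleaning robot.")  # voice reminder

# Example (robot is a hypothetical controller object):
# react_to_pedestrian(0.75, True, robot.slow_down, robot.speak)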
Application scenario 2
When the cleaning robot cleans the floor, the processor inside the robot controls the voice interaction device on the robot to remind a pedestrian by voice to give way. If, after a period of time, the pedestrian is detected not to have actively avoided the robot, the processor replans the robot's traveling path to avoid the pedestrian and controls the robot to continue cleaning the floor along the replanned path.
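The fallback in scenario 2 is a simple timeout: remind first, replan only if the pedestrian has not given way. A hedged sketch, in which the 5-second grace period and the robot callables are assumptions (the text says only "a period of time"):

import time

VOICE_WAIT_SECONDS = 5.0  # assumed grace period; not specified in the text

def remind_then_replan(speak, still_blocked, replan):
    """Scenario 2 as a two-stage policy: voice reminder first, then path
    replanning if the pedestrian has not given way within the grace period.
    speak, still_blocked, and replan are hypothetical robot callables."""
    speak("Please give way to the cleaning robot.")
    deadline = time.monotonic() + VOICE_WAIT_SECONDS
    while time.monotonic() < deadline:
        if not still_blocked():
            return               # the pedestrian avoided the robot actively
        time.sleep(0.1)          # poll the perception result
    replan()                     # fall back to going around the pedestrian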
Application scenario 3
When the cleaning robot cleans the floor in a shopping mall, the RGBD camera mounted at its front captures the surrounding cleaning environment in real time while the robot cleans, yielding a two-dimensional image and a depth image of the cleaning environment. Based on these two images, the processor inside the cleaning robot determines that a human-shaped standee is located 0.3 meters directly ahead in the cleaning direction. The processor replans the robot's traveling path to avoid the standee and controls the robot to continue cleaning the floor along the replanned path.
Although the present application has been described above with reference to preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application shall be as defined by the appended claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
1. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of RAM, read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
2. As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (15)

1. A method of controlling movement of a self-propelled device, comprising:
acquiring a region image of a region to be analyzed and a depth image of the region to be analyzed;
performing object detection on the region image, and judging whether the detected objects include an object of interest;
if the detected objects include an object of interest, obtaining an image of the object of interest and a depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed;
judging whether the object of interest is a first object of interest based on the image of the object of interest and the depth image of the object of interest;
and if the object of interest is a first object of interest, determining a mutual avoidance strategy between the self-walking device and the first object of interest, wherein the avoidance strategy is used for controlling the self-walking device to move.
2. The self-propelled device movement control method according to claim 1, wherein the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
controlling the self-walking device to issue a voice reminder prompting the first object of interest to give way;
and if the first object of interest does not give way within a preset time, replanning the traveling path of the self-walking device to avoid the first object of interest.
3. The self-propelled device movement control method according to claim 1, characterized by further comprising: if the object of interest is a second object of interest, determining a movement control strategy for the self-walking device to avoid the second object of interest.
4. The self-propelled device movement control method according to claim 1, wherein the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
judging whether the first object of interest is on the traveling path of the self-walking device, and if so, determining a mutual avoidance strategy between the self-walking device and the first object of interest.
5. The self-propelled device movement control method according to claim 1, wherein the determining a mutual avoidance strategy between the self-propelled device and the first object of interest includes:
judging whether the distance between the first object of interest and the self-walking device is smaller than a preset first distance threshold, and if so, determining a mutual avoidance strategy between the self-walking device and the first object of interest.
6. The self-propelled device movement control method according to claim 1, wherein the performing object detection on the region image and determining whether the detected object includes an object of interest includes:
obtaining an object detection result of the region image by using the region image as input data of an image object detection model, wherein the image object detection model is a model for obtaining the object detection result of an image from that image;
and judging whether the detected objects include an object of interest based on the object detection result of the region image.
7. The self-propelled device movement control method according to claim 1, wherein the obtaining of the image of the object of interest and the depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed comprises:
cropping the region image according to the object of interest to obtain an image of the object of interest;
and screening the depth image of the object of interest from the depth image of the region to be analyzed according to the object of interest.
8. The self-propelled device movement control method according to claim 7, wherein the depth image of the region to be analyzed is acquired using a depth sensor mounted on the self-walking device.
9. The self-propelled device movement control method according to claim 1, wherein the determining whether the object of interest is a first object of interest based on the object of interest image and a depth image of the object of interest includes:
taking the image of the object of interest and the depth image of the object of interest as input data of a target convolutional neural network model to obtain attribute feature information of the object of interest, wherein the target convolutional neural network model is a model for obtaining object attribute feature information of an image from that image;
and judging whether the object of interest is a first object of interest according to the attribute feature information of the object of interest.
10. The self-propelled device movement control method according to claim 3, wherein the determining a movement control strategy of the self-propelled device to avoid the second object of interest includes:
replanning the traveling path of the self-walking device and determining a replanned path, wherein the replanned path is used for controlling the movement of the self-walking device.
11. The self-propelled device movement control method according to claim 1, wherein the avoidance strategy includes at least one of:
issuing a voice reminder to the first object of interest, prompting it to give way;
controlling the self-walking device to decelerate or stop moving;
changing the travel path of the self-walking device.
12. A self-propelled device movement control apparatus, comprising:
an original image acquisition unit, configured to acquire a region image of a region to be analyzed and a depth image of the region to be analyzed;
a first judging unit, configured to perform object detection on the region image and judge whether the detected objects include an object of interest;
an object-of-interest image obtaining unit, configured to obtain an image of the object of interest and a depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed if the detected objects include an object of interest;
a second determining unit, configured to determine whether the object of interest is a first object of interest based on the image of the object of interest and the depth image of the object of interest;
a mutual avoidance strategy determining unit, configured to determine a mutual avoidance strategy between the self-walking device and the first object of interest if the object of interest is the first object of interest, where the avoidance strategy is used to control the self-walking device to move.
13. An electronic device, comprising:
a processor;
a memory for storing a computer program for execution by the processor to perform the method of any one of claims 1 to 11.
14. A computer storage medium, characterized in that it stores a computer program that is executed by a processor to perform the method of any one of claims 1-11.
15. A self-propelled device, comprising: an image acquisition device, a depth sensor, and a processor;
the image acquisition device is configured to acquire a region image of a region to be analyzed;
the depth sensor is configured to acquire a depth image of the region to be analyzed;
the processor is configured to receive the region image of the region to be analyzed transmitted by the image acquisition device and the depth image of the region to be analyzed transmitted by the depth sensor, perform object detection on the region image, and judge whether the detected objects include an object of interest; if the detected objects include an object of interest, obtain an image of the object of interest and a depth image of the object of interest based on the object of interest, the region image, and the depth image of the region to be analyzed; judge whether the object of interest is a first object of interest based on the image of the object of interest and the depth image of the object of interest; and if the object of interest is a first object of interest, determine a mutual avoidance strategy between the self-walking device and the first object of interest, wherein the avoidance strategy is used for controlling the self-walking device to move.
CN202211358918.4A 2022-11-01 2022-11-01 Self-walking equipment movement control method and device and self-walking equipment Pending CN115562305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211358918.4A CN115562305A (en) 2022-11-01 2022-11-01 Self-walking equipment movement control method and device and self-walking equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211358918.4A CN115562305A (en) 2022-11-01 2022-11-01 Self-walking equipment movement control method and device and self-walking equipment

Publications (1)

Publication Number Publication Date
CN115562305A true CN115562305A (en) 2023-01-03

Family

ID=84768856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211358918.4A Pending CN115562305A (en) 2022-11-01 2022-11-01 Self-walking equipment movement control method and device and self-walking equipment

Country Status (1)

Country Link
CN (1) CN115562305A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination