CN117519124A - Obstacle avoidance method for self-mobile device, self-mobile device and storage medium


Info

Publication number
CN117519124A
Authority
CN
China
Prior art keywords: point cloud, obstacle avoidance, ground plane, coordinate system, self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311326561.6A
Other languages
Chinese (zh)
Inventor
王登峰 (Wang Dengfeng)
王斌 (Wang Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhumang Technology Co., Ltd.
Original Assignee
Shenzhen Zhumang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhumang Technology Co., Ltd.
Priority to CN202311326561.6A
Publication of CN117519124A
Legal status: Pending

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiment of the invention provides an obstacle avoidance method for a self-mobile device, the self-mobile device and a storage medium. The self-mobile device comprises a depth sensor, and the method comprises: acquiring, through the depth sensor, a first point cloud located in a sensor coordinate system; filtering the first point cloud according to a preset ground plane of the sensor coordinate system to obtain a second point cloud; determining a target ground plane based on the preset ground plane and the second point cloud, and segmenting the first point cloud with the target ground plane as a segmentation plane to obtain an initial obstacle avoidance point cloud; and performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud in a robot coordinate system, and controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud. The embodiments of the application use the preset ground plane as prior information for determining the target ground plane, which improves the extraction speed, stability and precision of the target ground plane, yields a more accurate obstacle avoidance point cloud, and thereby enables more reliable obstacle avoidance.

Description

Obstacle avoidance method for self-mobile device, self-mobile device and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an obstacle avoidance method for a self-mobile device, a self-mobile device, and a computer-readable storage medium.
Background
A self-moving device is a device that can move automatically without manual operation. Self-moving devices have broad application prospects in fields such as unmanned driving, industrial automation, and services. Because a self-moving device generally works in complex operating scenes, accurate obstacle avoidance is key to achieving its autonomous navigation and safe movement.
In the obstacle avoidance process of the self-mobile device, the real ground plane on which the device stands usually needs to be extracted first, and the obstacles the device may encounter are then determined based on the extracted real ground plane so as to avoid them. However, the ground plane obtained directly from the camera usually deviates from the real ground plane, so obstacles cannot be determined accurately enough for obstacle avoidance. In addition, the related art extracts the real ground plane by methods such as coordinate transformation; this process is complex and difficult to run in real time, so obstacle avoidance cannot be performed in a timely manner.
Disclosure of Invention
The application provides an obstacle avoidance method for a self-mobile device, a self-mobile device, and a computer-readable storage medium, which aim to use a preset ground plane as prior information for determining a target ground plane, thereby improving the extraction speed, stability and precision of the target ground plane, obtaining a more accurate obstacle avoidance point cloud, and achieving more reliable obstacle avoidance.
To achieve the above object, the present application provides an obstacle avoidance method of a self-mobile device, the self-mobile device including a depth sensor, the method comprising:
acquiring a first point cloud positioned in a sensor coordinate system through the depth sensor;
filtering the first point cloud according to a preset ground plane of the sensor coordinate system to obtain a second point cloud; wherein the preset ground plane is obtained based on installation position information of the depth sensor in a robot coordinate system;
determining a target ground plane based on the preset ground plane and the second point cloud, and dividing the first point cloud by taking the target ground plane as a dividing plane to obtain an initial obstacle avoidance point cloud; wherein the initial obstacle avoidance point cloud corresponds to the sensor coordinate system;
and carrying out coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud.
In addition, to achieve the above object, the present application further provides a self-mobile device, including a memory and a processor; the memory is used for storing a computer program; the processor is configured to execute the computer program and implement the steps of the obstacle avoidance method of the self-mobile device according to any one of the embodiments of the present application when the computer program is executed.
In addition, to achieve the above object, the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor causes the processor to implement the steps of the obstacle avoidance method of the self-mobile device provided in any one of the embodiments of the present application.
According to the obstacle avoidance method of the self-mobile device, the self-mobile device, and the computer-readable storage medium, a first point cloud located in the sensor coordinate system can be acquired through the depth sensor, and the first point cloud is filtered according to a preset ground plane of the sensor coordinate system to obtain a second point cloud. The preset ground plane is obtained based on the installation position information of the depth sensor in the robot coordinate system. Further, a target ground plane can be determined based on the preset ground plane and the second point cloud, and the first point cloud segmented with the target ground plane as the segmentation plane to obtain an initial obstacle avoidance point cloud in the sensor coordinate system. The initial obstacle avoidance point cloud can then be coordinate-transformed to obtain a target obstacle avoidance point cloud of the robot coordinate system, based on which the self-mobile device is controlled to avoid obstacles. The method uses the preset ground plane as prior information for determining the target ground plane, improving the extraction speed, stability and precision of the target ground plane; a more accurate obstacle avoidance point cloud can then be obtained based on the target ground plane, so that more reliable obstacle avoidance can be achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an obstacle avoidance method of a self-mobile device according to an embodiment of the present application;
FIG. 2 is a schematic view of a scenario of an obstacle avoidance method of the self-mobile device shown in FIG. 1;
fig. 3 is a schematic flow chart of determining a preset ground plane according to an embodiment of the present application;
fig. 4 is a schematic view of a robot coordinate system according to an embodiment of the present application;
fig. 5 is a schematic flow chart of determining a target ground plane according to an embodiment of the present application;
fig. 6 is a schematic flow chart of another obstacle avoidance method for a self-mobile device according to an embodiment of the present application;
FIG. 7 is a schematic view of a scenario of an obstacle avoidance method of the self-moving device shown in FIG. 6;
fig. 8 is a schematic flow chart of controlling a self-mobile device to avoid an obstacle according to an embodiment of the present application;
fig. 9 is a schematic block diagram of a self-mobile device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations. In addition, although the division of the functional modules is performed in the apparatus schematic, in some cases, the division of the modules may be different from that in the apparatus schematic.
The term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1 and fig. 2, fig. 1 is a flow chart of an obstacle avoidance method of a self-mobile device according to an embodiment of the present application; fig. 2 is a schematic view of a scenario of the obstacle avoidance method of the self-mobile device shown in fig. 1. The self-moving device includes a depth sensor, a sensor device used to measure the distance (depth) of points on an object or in a scene; the measurements are typically represented as three-dimensional coordinates. The type of the depth sensor is not limited and may include, for example, a depth camera, a laser radar, and the like.
Using a time-of-flight technique, the depth camera can calculate the distance from each point on the object surface to the camera from the emission and reception times of a light beam, and then convert the distance information into three-dimensional point coordinates, thereby generating a point cloud. Thus, the present application may acquire a point cloud of a scene or object based on a depth camera.
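For illustration only, the following is a minimal NumPy sketch of turning a depth image into such a point cloud by back-projection through pinhole intrinsics. This is an assumption about the conversion, not the patent's prescribed procedure, and the intrinsic values (fx, fy, cx, cy) are placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud in the
    sensor coordinate system (X right, Y down, Z forward)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth return

# Hypothetical intrinsics for a 640 x 480 depth camera
depth = np.full((480, 640), 2.0)  # synthetic 2 m depth frame
first_point_cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0,
                                         cx=319.5, cy=239.5)
```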
The self-moving device may be a mobile robot such as a sweeping robot, a meal delivery robot, a snowplow robot, a greeting robot, or a vehicle such as an automobile with an automatic driving function.
As shown in fig. 1, the obstacle avoidance method of the self-mobile device includes steps S11 to S14.
Step S11: a first point cloud is acquired by a depth sensor in a sensor coordinate system.
The first point cloud refers to a point cloud acquired by a depth sensor with a sensor coordinate system as a reference, and represents a three-dimensional spatial representation of all objects or scenes detected in the field of view of the depth sensor. Specifically, the first point cloud is a set of points, and each point includes position information of an object in a three-dimensional space and possibly other information (such as color or reflection intensity, etc.).
It should be noted that, in the application scenario of the self-mobile device, the sensor coordinate system and the robot coordinate system are both coordinate systems for locating and describing object positions. The sensor coordinate system is the coordinate system of the depth sensor and is used to represent the positions of objects in the images it captures. Its origin is usually the optical center of the depth camera, its Z-axis is the camera's optical axis (pointing forward out of the camera), and its X and Y axes correspond to the coordinates on the image plane. The camera coordinate system allows pixel coordinates in an image to be mapped to three-dimensional object coordinates relative to the camera.
The robot coordinate system is the local coordinate system of the self-mobile device, used to describe object positions on the device and in its surrounding environment. It usually takes a reference point on the self-mobile device as the origin and a reference direction of the device as an axis (e.g., the heading of the device as the positive x-axis and the left side of the device as the positive y-axis). The robot coordinate system is typically used for controlling motion, navigation, and obstacle avoidance of the self-mobile device.
In practical applications, the raw data output by the depth sensor are defined in the sensor coordinate system. Targets or objects detected in the sensor coordinate system therefore generally need to be transformed into the robot coordinate system, where they are used to control the self-mobile device for autonomous navigation and obstacle avoidance, and for various tasks such as handling, patrol, and exploration.
In embodiments of the present application, a first point cloud located in a sensor coordinate system may be acquired by a depth sensor to acquire a three-dimensional spatial representation of all objects or scenes in the sensor coordinate system.
Step S12: filtering the first point cloud according to a preset ground plane of the sensor coordinate system to obtain a second point cloud.
The second point cloud is a point cloud which is obtained by filtering the first point cloud and is coincident with or close to a preset ground plane.
Further, the preset ground plane is a virtual ground plane under the sensor coordinate system, which can be obtained based on the installation position information of the depth sensor in the robot coordinate system, and the specific acquisition step is described later in the application.
It can be appreciated that an error generally exists between the preset ground plane, which is derived from the depth sensor's mounting information, and the real ground plane on which the self-mobile device stands; moreover, because the ground may be uneven or the device's motion may be unstable, the depth sensor shakes relative to the ground. Determining obstacles directly from the preset ground plane may therefore cause subsequent obstacle avoidance to fail. Accordingly, after the preset ground plane is obtained, it is used as prior information to determine a target ground plane, i.e., the real ground plane, which can then be used for obstacle avoidance of the self-mobile device.
Optionally, filtering the first point cloud according to the preset ground plane of the sensor coordinate system to obtain the second point cloud includes: filtering the first point cloud to determine the effective point cloud in the first point cloud; and determining the effective points whose distance from the preset ground plane is smaller than a first preset threshold as the second point cloud.
In particular, since the first point cloud typically contains a large number of points, some of them may be invalid or irrelevant. Therefore, the first point cloud may be filtered by a method such as moving average, median filtering, or Gaussian filtering to remove the invalid or irrelevant points, thereby obtaining the effective point cloud. The effective point cloud comprises three-dimensional environmental data in the sensor coordinate system.
It will be appreciated that when the self-mobile device is equipped with a downward-facing depth camera, as shown in fig. 2, the effective point cloud includes points on the preset ground plane. In that case, the distance between each effective point and the preset ground plane can be determined, and the effective points whose distance from the preset ground plane is smaller than the first preset threshold are determined as the second point cloud. The second point cloud thus consists of points coinciding with or close to the preset ground plane, and may therefore be used for subsequent real-ground-plane extraction, obstacle avoidance, or other tasks.
The magnitude of the first preset threshold mainly depends on the mounting accuracy of the depth sensor on the self-mobile device, the stability of the self-mobile device during movement, and the ranging accuracy of the depth sensor; it may be, for example, 5 cm, 8 cm, or 10 cm, which is not limited in this application.
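As an illustration of this step, the sketch below keeps finite, non-outlier points as the effective point cloud and then selects those whose distance to the preset ground plane A*x + B*y + C*z + D = 0 is below the first preset threshold. The 3-sigma outlier rule and the 5 cm default are assumptions, since the patent leaves the exact filter and threshold open.

```python
import numpy as np

def plane_distance(points, plane):
    """Unsigned point-to-plane distance for a plane (A, B, C, D)."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    return np.abs(points @ n + d) / np.linalg.norm(n)

def extract_second_point_cloud(first_pc, preset_plane, dist_thresh=0.05):
    """Effective points of the first point cloud lying near the preset plane."""
    pc = first_pc[np.isfinite(first_pc).all(axis=1)]       # drop invalid returns
    mean, std = pc.mean(axis=0), pc.std(axis=0) + 1e-9
    effective = pc[(np.abs(pc - mean) < 3 * std).all(axis=1)]  # 3-sigma filter
    return effective[plane_distance(effective, preset_plane) < dist_thresh]
```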
In addition, the case in which an upward-facing depth camera is installed on the self-mobile device, that is, in which the effective point cloud contains no points on the preset ground plane, is described later in this application.
In the embodiment of the application, the first point cloud can be filtered according to the preset ground plane of the sensor coordinate system to obtain the second point cloud, and the second point cloud is the point cloud which coincides with or is close to the preset ground plane, so that the method can be used for subsequent extraction of the real ground plane, obstacle avoidance or other tasks.
Step S13: determining a target ground plane based on a preset ground plane and a second point cloud, and dividing the first point cloud by taking the target ground plane as a dividing plane to obtain an initial obstacle avoidance point cloud; wherein the initial obstacle avoidance point cloud corresponds to a sensor coordinate system.
The target ground plane is a real ground plane; the initial obstacle avoidance point cloud is the obstacle avoidance point cloud in the sensor coordinate system.
Specifically, the target ground plane can be determined from the normal vector of the preset ground plane in the sensor coordinate system, an arbitrary coordinate point on it, and the second point cloud. For example, an initial plane model may be constructed from these parameters and then adjusted with a fitting algorithm (such as least squares) to best fit the second point cloud while remaining consistent with the preset ground plane, thereby obtaining the target ground plane. The detailed description is given below and is not repeated here.
After determining the target ground plane, the target ground plane may be determined as a segmentation plane for segmenting a point cloud in the first point cloud to obtain an initial obstacle avoidance point cloud.
It should be noted that the division plane is a plane defined in a three-dimensional space, and is used to divide point cloud data or other three-dimensional data into three parts: one part is located above the plane, one part is located in the plane, and the other part is located below the plane. Segmentation planes are often used in the fields of computer vision, robotic perception, obstacle avoidance, etc. to separate a specific object or feature, e.g. ground from an obstacle. In point cloud processing, the segmentation plane may help detect and identify the ground, enabling the self-mobile device to know where to walk safely.
In particular, the parameters of the target ground plane (its normal vector and a reference point) may be used to define the equation of the segmentation plane, which divides the points of the first point cloud into three classes: above, within, and below the segmentation plane. It will be appreciated that the points above the segmentation plane and the points below it may both be regarded as obstacle points. Therefore, after the first point cloud is segmented by the segmentation plane, the initial obstacle avoidance point cloud of the sensor coordinate system can be obtained.
Optionally, segmenting the first point cloud with the target ground plane as the segmentation plane to obtain an initial obstacle avoidance point cloud includes: taking the target ground plane as the segmentation plane and determining the signed distance of each point of the first point cloud from the segmentation plane; and determining the points whose signed distance from the segmentation plane is smaller than a second preset threshold (points sufficiently far below the plane) and the points whose signed distance from the segmentation plane is larger than a third preset threshold (points sufficiently far above the plane) as the initial obstacle avoidance point cloud.
It can be appreciated that the points whose distance from the segmentation plane is smaller than the second preset threshold correspond to obstacles located below the ground plane, such as a pit or a downward step, while the points whose distance from the segmentation plane is larger than the third preset threshold correspond to obstacles above the ground, such as a table, a chair, or another object.
It should be noted that the magnitudes of the second and third preset thresholds generally depend on the obstacle-crossing capability of the self-mobile device and on the specific application scenario. If the device's obstacle-crossing capability is good, the two thresholds can be set relatively large; if it is poor, relatively small. Likewise, for scenarios with higher requirements on the smoothness of operation of the self-mobile device, the thresholds can be set relatively small. The second and third preset thresholds are not limited in this application; each may take values such as 1 cm, 3 cm, or 5 cm.
Specifically, the target ground plane may be used as the segmentation plane, and the signed distance between each point of the first point cloud and the segmentation plane determined in turn; points whose signed distance is smaller than the second preset threshold are classified as lying below the segmentation plane, and points whose signed distance is larger than the third preset threshold as lying above it. These points together form the initial obstacle avoidance point cloud of the sensor coordinate system.
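A minimal sketch of this segmentation, under the assumption that the plane normal points from the ground toward the sensor side so that positive signed distance means above ground; the 3 cm thresholds are placeholders:

```python
import numpy as np

def segment_obstacles(first_pc, plane, below_thresh=0.03, above_thresh=0.03):
    """Split the first point cloud by the segmentation plane (A, B, C, D):
    points far enough below it (pits, downward steps) or above it (tables,
    chairs, ...) form the initial obstacle avoidance point cloud."""
    a, b, c, d = plane
    n = np.array([a, b, c])
    signed = (first_pc @ n + d) / np.linalg.norm(n)
    obstacle_mask = (signed < -below_thresh) | (signed > above_thresh)
    return first_pc[obstacle_mask]
```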
In the embodiment of the application, the target ground plane can be determined through the preset ground plane and the second point cloud and used as the segmentation plane for segmenting the first point cloud so as to obtain the initial obstacle avoidance point cloud.
Step S14: performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud.
It will be appreciated that the initial obstacle avoidance point cloud corresponds to the sensor coordinate system. It can therefore be transformed into the robot coordinate system to obtain a target obstacle avoidance point cloud, based on which the self-mobile device is controlled to avoid obstacles.
It should be noted that the method for controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud is not limited; for example, a path for the self-mobile device can be planned from the target obstacle avoidance point cloud by an obstacle avoidance algorithm, so that the device avoids obstacles along the path while advancing in the target direction, as described later in this application.
In addition, the initial obstacle avoidance point cloud may be transformed into a map coordinate system for navigation or other coordinate systems according to a specific application scenario, so as to be used for realizing obstacle avoidance of the self-mobile device, which is not limited in the application.
Optionally, performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of a robot coordinate system, including: and carrying out coordinate transformation on the initial obstacle avoidance point cloud based on the transformation matrix to obtain the target obstacle avoidance point cloud.
The transformation matrix is a transformation matrix from a sensor coordinate system to a robot coordinate system, and can be obtained specifically according to the installation position information of the depth sensor in the robot coordinate system. Thus, the initial obstacle avoidance point cloud can be transformed to the target obstacle avoidance point cloud according to the transformation matrix.
Specifically, the application takes the robot coordinate system as x_r-o_r-y_r and the sensor coordinate system as x_c-o_c-y_c as an example, where the installation position information of the depth sensor in the robot coordinate system is P(x_0, y_0, z_0, roll_0, pitch_0, yaw_0). After the installation position information P of the depth sensor is obtained, the transformation from the sensor coordinate system to the robot coordinate system can be derived from it, giving the transformation matrix T_cr from the sensor coordinate system to the robot coordinate system.
In the embodiment of the application, the initial obstacle avoidance point cloud can be converted from a sensor coordinate system to a target obstacle avoidance point cloud in a robot coordinate system, and obstacle avoidance control of the self-moving device is performed by utilizing the target obstacle avoidance point cloud, so that the self-moving device can safely move and avoid collision.
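For illustration, applying the sensor-to-robot transformation matrix (here a 4x4 homogeneous matrix written T_cr, our notation) to the initial obstacle avoidance point cloud can be sketched as follows:

```python
import numpy as np

def transform_point_cloud(points, T):
    """Apply a 4x4 homogeneous transform T (sensor frame -> robot frame)
    to an N x 3 point cloud."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Usage: target_pc = transform_point_cloud(initial_obstacle_pc, T_cr)
```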
According to the obstacle avoidance method of the self-mobile device, a first point cloud located in the sensor coordinate system can be acquired through the depth sensor, and the first point cloud is filtered according to a preset ground plane of the sensor coordinate system to obtain a second point cloud, the preset ground plane being obtained based on the installation position information of the depth sensor in the robot coordinate system. Further, a target ground plane can be determined based on the preset ground plane and the second point cloud, and the first point cloud segmented with the target ground plane as the segmentation plane to obtain the initial obstacle avoidance point cloud in the sensor coordinate system. The initial obstacle avoidance point cloud can then be coordinate-transformed to obtain a target obstacle avoidance point cloud in the robot coordinate system, based on which the self-mobile device is controlled to avoid obstacles. The method uses the preset ground plane as prior information for determining the target ground plane, which improves the extraction speed, stability and precision of the target ground plane and yields a more accurate obstacle avoidance point cloud, so that more reliable obstacle avoidance can be achieved.
With continued reference to fig. 3 and fig. 4, fig. 3 is a schematic flow chart of determining a preset ground plane according to an embodiment of the present application; fig. 4 is a schematic view of a robot coordinate system according to an embodiment of the present application. As shown in fig. 3, determining the preset ground plane may be achieved through steps S21 to S23.
Step S21: and acquiring the installation position information of the depth sensor in the robot coordinate system.
Step S22: and determining the transformation relation between the sensor coordinate system and the robot coordinate system according to the installation position information.
Step S23: a preset ground plane is determined based on the transformation relationship.
The installation position information comprises the mounting position coordinates of the depth camera in the robot coordinate system and the orientation (e.g., attitude or angle information) of the depth camera.
Further, the present application expresses the orientation of the sensor in terms of Euler angles. Euler angles generally comprise three components: roll, pitch, and yaw. The roll angle is the rotation about the X-axis of the object's current coordinate system and represents its side-to-side tilt; the pitch angle is the rotation about the Y-axis and represents its forward-backward tilt; the yaw angle is the rotation about the Z-axis and represents its rotation about the axis perpendicular to itself.
Specifically, the application again takes the robot coordinate system as x_r-o_r-y_r and the sensor coordinate system as x_c-o_c-y_c as an example, where the installation position information of the depth sensor in the robot coordinate system is P(x_0, y_0, z_0, roll_0, pitch_0, yaw_0). After the installation position information P of the depth sensor is obtained, the transformation from the robot coordinate system to the sensor coordinate system can be computed, giving the transformation matrix T_rc from the robot coordinate system to the sensor coordinate system. That is, the transformation relation between the sensor coordinate system and the robot coordinate system is determined.
Further, two points can be selected in the robot coordinate system, for example a point p_0 on the ground plane and a point p_1 directly above it, and transformed into the sensor coordinate system through the matrix T_rc, giving their images p_0^c and p_1^c.
Thereby, the normal vector n_c of the preset ground plane in the sensor coordinate system is obtained as n_c = p_1^c - p_0^c = (A, B, C).
This yields the equation of the preset ground plane in the sensor coordinate system:
A*x + B*y + C*z + D = 0,
where (A, B, C) are the components of n_c and D = -(A*x_0^c + B*y_0^c + C*z_0^c), with (x_0^c, y_0^c, z_0^c) the coordinates of the transformed ground point p_0^c, so that the plane passes through p_0^c.
It will be appreciated that the above equation is a mathematical description of the preset ground plane, representing the position and orientation of the preset ground plane in the sensor coordinate system. That is, the preset ground plane is determined based on the transformation relation between the sensor coordinate system and the robot coordinate system.
In the embodiment of the application, the installation position information of the depth sensor in the robot coordinate system can be obtained, the transformation relation between the sensor coordinate system and the robot coordinate system is further determined, and the preset ground plane is determined based on the transformation relation, so that the preset ground plane can be used as prior information of subsequent real ground plane extraction and obstacle avoidance tasks.
With continued reference to fig. 5, fig. 5 is a schematic flow chart of determining a target ground plane according to an embodiment of the present application. As shown in fig. 5, determining the target ground plane may be accomplished through steps S131 through S134.
Step S131: and acquiring a normal vector and any coordinate point of the preset ground plane in a sensor coordinate system.
Step S132: and taking the normal vector and any coordinate point as initial parameters of the initial model, and taking the second point cloud as input parameters of the initial model.
Step S133: and performing iterative training on the initial model based on the initial parameters and the input parameters to obtain the target ground plane model.
Step S134: a target ground plane is determined based on the target ground plane model.
The initial model is a model for training to obtain a target ground plane.
Specifically, a normal vector, any coordinate point and a second point cloud of the preset ground plane can be obtained, and then the normal vector and any coordinate point are used as initial parameters of an initial model. And taking the second point cloud as an input parameter of the initial model, so that the initial model can fit the second point cloud to determine the target ground plane. The training process of the initial model is an iterative process in which the parameters of the initial model are gradually adjusted by a preset algorithm to achieve a best fit to the second point cloud.
It should be noted that the preset algorithm is not limited in this application; examples include probabilistic sample consensus, random sample consensus, and the least squares method. Here the Random Sample Consensus (RANSAC) algorithm is taken as an example.
RANSAC is an iterative algorithm that estimates a parametric model (e.g., a plane model) from a dataset while robustly identifying outliers. It is typically used to process data containing noise and outliers, and can produce a good fit even when outliers are present.
After the iterative training of the initial model based on the RANSAC algorithm, the parameters of the target ground plane model (including the normal vector, a reference point, and the like) can be determined. This yields a target ground plane model describing the position and orientation of the target ground plane in the sensor coordinate system; that is, the target ground plane can be determined based on the target ground plane model.
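A NumPy-only sketch of a RANSAC fit constrained by the prior: candidate planes whose normal strays more than a set angle from the preset normal are rejected before inliers are counted. The iteration count, 2 cm inlier threshold, and 10 degree gate are placeholder assumptions, not values fixed by the patent.

```python
import numpy as np

def ransac_ground_plane(second_pc, prior_normal, iters=200,
                        inlier_thresh=0.02, max_angle_deg=10.0):
    """Fit a plane (A, B, C, D) to the second point cloud with RANSAC,
    keeping only candidates consistent with the preset (prior) normal."""
    rng = np.random.default_rng(0)
    prior = prior_normal / np.linalg.norm(prior_normal)
    best_plane, best_count = None, 0
    for _ in range(iters):
        sample = second_pc[rng.choice(len(second_pc), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        cos_angle = np.clip(abs(n @ prior), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > max_angle_deg:
            continue                      # inconsistent with the prior plane
        d = -n @ sample[0]
        count = np.sum(np.abs(second_pc @ n + d) < inlier_thresh)
        if count > best_count:
            best_count, best_plane = count, (n[0], n[1], n[2], d)
    return best_plane                     # None if no candidate passed the gate
```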
In the embodiment of the application, the target ground plane model can be obtained by training with the preset ground plane and the second point cloud, and this model accurately describes the position and orientation of the target ground plane. Since the target ground plane is the actual ground plane, it may be used as the segmentation plane to aid the self-mobile device in perception and navigation and to perform obstacle avoidance operations.
With continued reference to fig. 6 and fig. 7, fig. 6 is a flowchart of another obstacle avoidance method for a self-mobile device according to an embodiment of the present disclosure; fig. 7 is a schematic view of a scenario of an obstacle avoidance method of the self-mobile device shown in fig. 6. As shown in fig. 6, obstacle avoidance of the self-mobile device can be achieved through steps S31 to S34.
Step S31: a first point cloud is acquired by a depth sensor in a sensor coordinate system.
Step S32: filtering the first point cloud according to the preset ground plane of the sensor coordinate system.
Step S33: if the second point cloud is not obtained, the first point cloud is segmented by taking the preset ground plane as a segmentation plane, and the initial obstacle avoidance point cloud is obtained.
Step S34: performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and performing obstacle avoidance based on the target obstacle avoidance point cloud.
For a detailed description of step S31 and step S32, refer to the above embodiments; to avoid repetition, it is omitted here.
it will be appreciated that, as shown in fig. 7, when the depth camera for looking up is installed on the self-mobile device, that is, the effective point cloud obtained after the filtering does not include the second point cloud, it is indicated that the target ground plane cannot be obtained. At this time, the preset ground plane can be directly used as a segmentation plane to segment the first point cloud, so as to obtain the initial obstacle avoidance point cloud.
In particular, the parameters of the preset ground plane (its normal vector and a reference point) may be used to define the equation of the segmentation plane, which divides the points of the first point cloud into three classes: above, within, and below the segmentation plane. The points above and below the segmentation plane may be regarded as obstacle points, so after the first point cloud is segmented by this plane, the initial obstacle avoidance point cloud of the sensor coordinate system is obtained.
Further, since the initial obstacle avoidance point cloud corresponds to the sensor coordinate system, it can be transformed into the robot coordinate system to obtain a target obstacle avoidance point cloud, based on which the self-mobile device is controlled to avoid obstacles.
In this embodiment of the present application, if the second point cloud is not obtained after the filtering operation, the preset ground plane may be directly determined as the segmentation plane to segment the first point cloud, so as to obtain an initial obstacle avoidance point cloud, so that the initial obstacle avoidance point cloud may be transformed to a target obstacle avoidance point cloud, so as to control the self-mobile device to avoid an obstacle based on the target obstacle avoidance point cloud.
Referring to fig. 8, fig. 8 is a schematic flow chart of controlling a self-mobile device to avoid an obstacle according to an embodiment of the present application. As shown in fig. 8, the obstacle avoidance based on the target obstacle avoidance point cloud control from the mobile device may be implemented based on steps S141 to S143.
Step S141: performing obstacle detection according to the target obstacle avoidance point cloud to obtain obstacle information.
Step S142: determining a target path through a path planning algorithm based on the obstacle information and map information of the self-mobile device.
Step S143: controlling the self-moving device to move according to the target path so as to avoid the obstacle.
Since the target obstacle avoidance point cloud is an obstacle avoidance point cloud in the robot coordinate system, it describes obstacles in the environment surrounding the self-mobile device. Therefore, the target obstacle avoidance point cloud can be analyzed through computer vision or deep learning techniques to identify and locate information such as the position, shape, and size of the obstacle.
Further, the obstacle information and the map information of the self-mobile device (including known terrain, buildings, and other features) may be analyzed by a path planning algorithm to determine the target path that the self-mobile device should follow, that is, a path along which it can move while avoiding the obstacle. The movement of the self-moving device can then be controlled according to the target path, for example by adjusting its direction and attitude, to ensure that it does not collide with the obstacle.
The type of the path planning algorithm is not limited, and includes, for example, an artificial potential field method, a genetic algorithm, and simulated annealing.
In the embodiment of the application, the target path of the self-mobile device can be determined by detecting the obstacle and planning the path, so that the self-mobile device can avoid the obstacle based on the target path.
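As a toy example of the final control step (not the patent's planner; the corridor dimensions are placeholders), one can check whether any point of the target obstacle avoidance point cloud lies in the corridor directly ahead of the device in the robot frame, and stop or replan if so:

```python
import numpy as np

def must_stop(target_obstacle_pc, lookahead=0.8, half_width=0.35):
    """True if any obstacle point lies in the corridor ahead of the device
    (robot frame: x forward, y left)."""
    x, y = target_obstacle_pc[:, 0], target_obstacle_pc[:, 1]
    blocked = (x > 0.0) & (x < lookahead) & (np.abs(y) < half_width)
    return bool(blocked.any())

# if must_stop(target_pc): command a stop, then replan the target path
```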
The methods of the present application may be used in a wide variety of general-purpose or special-purpose computing system environments or configurations to control obstacle avoidance of a self-mobile device. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Of course, the above-described method and apparatus may be implemented in the form of a computer program that is executable on a self-mobile device as shown in fig. 9.
Referring to fig. 9, fig. 9 is a schematic diagram of a self-mobile device according to an embodiment of the present application.
As shown in fig. 9, the self-mobile device 400 includes a processor 401, a memory 402, and a network interface connected through a system bus, wherein the memory 402 may include a volatile storage medium, a nonvolatile storage medium, and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor 401 to perform any of the obstacle avoidance methods of the self-mobile device described in this application.
The processor 401 serves to provide computing and control capabilities, supporting the operation of the entire self-mobile device 400.
The internal memory provides an environment for the execution of a computer program in a non-volatile storage medium that, when executed by the processor 401, causes the processor 401 to perform any of the obstacle avoidance methods of the self-mobile device.
The network interface is used for network communication such as transmitting assigned tasks and the like. It will be appreciated by those skilled in the art that the structure of the self-moving device 400 is merely a block diagram of some of the structures related to the present application and does not constitute a limitation of the self-moving device 400 to which the present application is applied, and that a specific self-moving device 400 may include more or less components than those shown in the drawings, or may combine some components, or have different arrangements of components.
It should be appreciated that the processor 401 may be a central processing unit (Central Processing Unit, CPU), and the processor 401 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In some embodiments, the processor 401 is configured to run a computer program stored in the memory 402 to implement the following steps: acquiring a first point cloud located in a sensor coordinate system through the depth sensor; filtering the first point cloud according to a preset ground plane of the sensor coordinate system to obtain a second point cloud, the preset ground plane being obtained based on installation position information of the depth sensor in a robot coordinate system; determining a target ground plane based on the preset ground plane and the second point cloud, and segmenting the first point cloud with the target ground plane as a segmentation plane to obtain an initial obstacle avoidance point cloud, the initial obstacle avoidance point cloud corresponding to the sensor coordinate system; and performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud.
In some embodiments, the processor 401 is further configured to obtain information of a mounting position of the depth sensor in the robot coordinate system; determining a transformation relation between the sensor coordinate system and the robot coordinate system according to the installation position information; and determining the preset ground plane based on the transformation relation.
In some embodiments, the processor 401 is further configured to filter the first point clouds and determine an effective point cloud in the first point clouds; and determining an effective point cloud with a distance from the preset ground plane smaller than a first preset threshold value as the second point cloud.
In some embodiments, the processor 401 is further configured to obtain a normal vector and an arbitrary coordinate point of the preset ground plane in the sensor coordinate system; taking the normal vector and the arbitrary coordinate point as initial parameters of an initial model, and taking the second point cloud as input parameters of the initial model; performing iterative training on the initial model based on the initial parameters and the input parameters to obtain a target ground plane model; the target ground plane is determined based on the target ground plane model.
In some implementations, the processor 401 is further configured to treat the target ground plane as the segmentation plane and determine a distance of the first point cloud from the segmentation plane; and determining the first point cloud with the distance from the dividing plane smaller than a second preset threshold value and the first point cloud with the distance from the dividing plane larger than a third preset threshold value as the initial obstacle avoidance point cloud.
In some embodiments, the processor 401 is further configured to perform coordinate transformation on the initial obstacle avoidance point cloud based on the transformation matrix, to obtain the target obstacle avoidance point cloud.
In some embodiments, the processor 401 is further configured to segment the first point cloud with the preset ground plane as the segmentation plane if the second point cloud is not obtained, so as to obtain the initial obstacle avoidance point cloud; and carrying out coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and carrying out obstacle avoidance based on the target obstacle avoidance point cloud.
In some embodiments, the processor 401 is further configured to perform obstacle detection according to the target obstacle avoidance point cloud to obtain obstacle information; determine a target path through a path planning algorithm based on the obstacle information and map information of the self-mobile device; and control the self-moving device to move according to the target path so as to avoid the obstacle.
The embodiment of the application also provides a computer readable storage medium, and a computer program is stored on the computer readable storage medium, wherein the computer program comprises program instructions, and the program instructions realize any obstacle avoidance method of the self-mobile device provided by the embodiment of the application when being executed.
The computer readable storage medium may be an internal storage unit of the self-mobile device according to the foregoing embodiment, for example, a hard disk or a memory of the self-mobile device. The computer readable storage medium may also be an external storage device of the self-mobile device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the self-mobile device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of obstacle avoidance for a self-moving device, the self-moving device comprising a depth sensor, the method comprising:
acquiring a first point cloud positioned in a sensor coordinate system through the depth sensor;
filtering the first point cloud according to a preset ground plane of the sensor coordinate system to obtain a second point cloud; wherein the preset ground plane is obtained based on installation position information of the depth sensor in a robot coordinate system;
determining a target ground plane based on the preset ground plane and the second point cloud, and dividing the first point cloud by taking the target ground plane as a dividing plane to obtain an initial obstacle avoidance point cloud; wherein the initial obstacle avoidance point cloud corresponds to the sensor coordinate system;
and carrying out coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and controlling the self-mobile device to avoid obstacles based on the target obstacle avoidance point cloud.
2. The method according to claim 1, wherein the method further comprises:
acquiring the installation position information of the depth sensor in the robot coordinate system;
determining a transformation relation between the sensor coordinate system and the robot coordinate system according to the installation position information;
and determining the preset ground plane based on the transformation relation.
3. The method of claim 1, wherein the filtering the first point cloud according to the preset ground plane of the sensor coordinate system to obtain a second point cloud comprises:
filtering the first point cloud to determine effective point clouds in the first point cloud;
and determining an effective point cloud with a distance from the preset ground plane smaller than a first preset threshold value as the second point cloud.
4. The method of claim 1, wherein the determining a target ground plane based on the preset ground plane and the second point cloud comprises:
acquiring a normal vector and any coordinate point of the preset ground plane in the sensor coordinate system;
taking the normal vector and the arbitrary coordinate point as initial parameters of an initial model, and taking the second point cloud as input parameters of the initial model;
performing iterative training on the initial model based on the initial parameters and the input parameters to obtain a target ground plane model;
the target ground plane is determined based on the target ground plane model.
5. The method of claim 1, wherein the segmenting the first point cloud with the target ground plane as a segmentation plane results in an initial obstacle avoidance point cloud, comprising:
taking the target ground plane as the dividing plane, and determining the distance between the first point cloud and the dividing plane;
and determining the first point cloud with the distance from the dividing plane smaller than a second preset threshold value and the first point cloud with the distance from the dividing plane larger than a third preset threshold value as the initial obstacle avoidance point cloud.
6. The method of claim 2, wherein the transformation relationship comprises a transformation matrix, and the performing coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system comprises:
and carrying out coordinate transformation on the initial obstacle avoidance point cloud based on the transformation matrix to obtain the target obstacle avoidance point cloud.
7. The method of claim 1, wherein after filtering the first point cloud according to the preset ground plane of the sensor coordinate system, the method further comprises:
if the second point cloud is not obtained, the preset ground plane is used as the segmentation plane to segment the first point cloud, and the initial obstacle avoidance point cloud is obtained;
and carrying out coordinate transformation on the initial obstacle avoidance point cloud to obtain a target obstacle avoidance point cloud of the robot coordinate system, and carrying out obstacle avoidance based on the target obstacle avoidance point cloud.
8. The method of claim 1, wherein the controlling the self-mobile device to avoid the obstacle based on the target obstacle avoidance point cloud comprises:
performing obstacle detection according to the target obstacle avoidance point cloud to obtain obstacle information;
determining a target path through a path planning algorithm based on the obstacle information and map information of the self-mobile device;
and controlling the self-moving device to move according to the target path so as to avoid the obstacle.
9. A self-moving device, comprising: a memory and a processor; wherein the memory is connected to the processor for storing a program, and the processor is configured to implement the steps of the obstacle avoidance method of the self-mobile device according to any one of claims 1 to 7 by running the program stored in the memory.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the steps of the obstacle avoidance method of a self-mobile device as claimed in any one of claims 1 to 7.
CN202311326561.6A 2023-10-12 2023-10-12 Obstacle avoidance method for self-mobile device, self-mobile device and storage medium Pending CN117519124A (en)

Priority Applications (1)

Application Number: CN202311326561.6A
Title: Obstacle avoidance method for self-mobile device, self-mobile device and storage medium


Publications (1)

Publication Number: CN117519124A; Publication Date: 2024-02-06

Family

ID=89761494


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination