CN114353780A - Attitude optimization method and equipment

Attitude optimization method and equipment

Info

Publication number
CN114353780A
Authority
CN
China
Prior art keywords
point cloud
cloud data
points
environment
environment detectors
Prior art date
Legal status
Granted
Application number
CN202111666849.9A
Other languages
Chinese (zh)
Other versions
CN114353780B (en)
Inventor
黄玉玺
Current Assignee
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Autonavi Software Co Ltd
Priority to CN202111666849.9A
Publication of CN114353780A
Application granted
Publication of CN114353780B
Legal status: Active
Anticipated expiration

Classifications

  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present application provide an attitude optimization method and device. In these embodiments, the fact that different environment detectors can scan the same object at different moments is exploited: feature extraction is performed on each frame of point cloud data collected by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, taking as the objective the minimization of the distance between the coordinates of the homonymous points in the same coordinate system, the attitude information recorded by the plurality of environment detectors while collecting the point cloud data is corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while collecting them. The attitude of the environment detectors is thereby optimized and their attitude accuracy improved.

Description

Attitude optimization method and equipment
Technical Field
The application relates to the technical field of computers, and in particular to an attitude optimization method and device.
Background
With the continuous development and popularization of intelligent terminals, map application software is widely installed and used. Electronic map data are commonly produced from data collected along roads by data acquisition equipment. When constructing a map from such data, the corresponding geographic elements and their geometric data need to be identified from the data, such as roads and their geometry (including the shape, direction and position of the road). To acquire these geographic elements and their geometric data accurately, the pose information of the data acquisition equipment needs to be obtained.
The precision of an electronic map depends to a great extent on the trajectory precision of the data acquisition equipment, which comprises position precision and attitude precision. The attitude precision of the data acquisition equipment therefore has a crucial influence on the precision of the electronic map. In summary, how to improve the attitude precision of the data acquisition equipment has become a subject of continuing study by those skilled in the art.
Disclosure of Invention
Aspects of the present application provide an attitude optimization method and device, so as to optimize the attitude of an environment detector and help improve its attitude accuracy.
An embodiment of the present application provides an attitude optimization method, comprising the following steps:
acquiring each frame of point cloud data collected by a plurality of environment detectors;
performing feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors;
matching the feature points corresponding to the plurality of environment detectors to determine the homonymous points among them, where homonymous points are data points, in the frames of point cloud data collected by the plurality of environment detectors, that correspond to the same physical point in the real world;
acquiring the pose information recorded by the plurality of environment detectors while collecting the homonymous points;
and optimizing the attitudes recorded by the plurality of environment detectors while collecting the point cloud data, according to the point cloud coordinates of the homonymous points corresponding to the plurality of environment detectors and the pose information recorded while collecting the homonymous points.
An embodiment of the present application further provides a computer device, comprising a memory and a processor, wherein the memory is used to store a computer program;
the processor, coupled to the memory, is used to execute the computer program so as to perform the steps of the above attitude optimization method.
An embodiment of the present application further provides a data acquisition device, comprising a machine body on which a memory, a processor and a plurality of environment detectors are disposed;
the plurality of environment detectors are used to collect point cloud data;
the memory is used to store a computer program;
the processor, coupled to the memory, is used to execute the computer program so as to perform the steps of the above attitude optimization method.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above attitude optimization method.
An embodiment of the present application further provides a computer program product comprising a computer program which, when executed by a processor, implements the above attitude optimization method.
In the embodiments of the present application, the fact that different environment detectors can scan the same object at different moments is exploited: feature extraction is performed on each frame of point cloud data collected by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, taking as the objective the minimization of the distance between the coordinates of the homonymous points in the same coordinate system, the attitude information recorded by the plurality of environment detectors while collecting the point cloud data is corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while collecting them, thereby optimizing the attitude of the environment detectors and improving their attitude accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic diagram of an operating environment of a data acquisition device according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of a data acquisition device according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an arrangement of environment detectors according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating adjacent-frame scanning by dual environment detectors according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of the characteristics of point cloud data scanned by a dual single-line laser radar according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the effect of transforming the coordinates of the point cloud data scanned in FIG. 4 using the poses before and after optimization;
fig. 6 is a schematic flowchart of an attitude optimization method provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the field of electronic maps, data acquisition equipment is often used to collect environmental data along roads, and electronic map data are created based on those data. To acquire the corresponding geographic elements and their geometric data accurately from the data, the pose information of the data acquisition equipment needs to be obtained.
The precision of an electronic map depends to a great extent on the trajectory precision of the data acquisition equipment, which comprises position precision and attitude precision. The attitude precision of the data acquisition equipment therefore has a crucial influence on the precision of the electronic map. In the prior art, a lidar odometry and mapping (LOAM) method can be used to correct the attitude of a lidar, but LOAM requires overlap between the preceding and following frames of data collected by the lidar, and is not suitable for correcting the attitude of a lidar whose consecutive frames do not overlap.
In summary, it is desirable to provide an attitude optimization method that places no constraint on the preceding and following frames collected by the laser radar. To solve this technical problem, some embodiments of the present application exploit the fact that different environment detectors can scan the same object at different moments: features are extracted from each frame of point cloud data collected by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, taking as the objective the minimization of the distance between the coordinates of the homonymous points in the same coordinate system, the attitudes recorded by the plurality of environment detectors while collecting the point cloud data are corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while collecting them, thereby optimizing the attitude of the environment detectors and improving their attitude accuracy.
On the other hand, because the attitude of the environment detectors is optimized using the fact that different environment detectors can scan the same object at different moments, no overlap is required between the preceding and following frames of point cloud data collected by the same environment detector; this yields an attitude optimization method that places no restriction on a laser radar's consecutive frames.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that like reference numerals refer to like objects in the following figures and embodiments; once an object has been defined in one figure or embodiment, it need not be discussed again in subsequent figures and embodiments.
Fig. 1a illustrates an example operating environment of a data acquisition device according to an embodiment of the present application, and fig. 1b is a schematic structural diagram of the device. As shown in figs. 1a and 1b, the data acquisition device S1 comprises a machine body 10 and a plurality of environment detectors 11 disposed on it, where "plurality" means two or more. The environment detectors 11 may be disposed on the machine body 10 at an angle to one another, as shown in fig. 2. Figs. 1a and 2 illustrate only the case of 2 environment detectors 11, but the number is not limited thereto.
The machine body 10 is the executing mechanism of the data acquisition device, mainly its body, which can perform specified operations in a given environment. The machine body 10 determines the appearance of the data acquisition device to a certain extent; this embodiment does not limit that appearance. For example, the data acquisition device may be a data collection vehicle, a data collection drone, a humanoid or non-humanoid robot, or the like.
In the present embodiment, the environment detector 11 mainly refers to an electronic device capable of collecting environment information, such as a vision sensor or a radar. The vision sensor may be a camera or the like; the radar may be a microwave radar, a millimeter-wave radar, a laser radar or the like, and the laser radar may in turn be a single-line or multi-line laser radar.
In this embodiment, as shown in fig. 1b, some basic components of the data acquisition device, such as the driving component 12, are further disposed on the machine body 10. Alternatively, the drive assembly 12 may include drive wheels, drive motors, universal wheels, and the like.
In this embodiment, as shown in fig. 1a, the plurality of environment detectors 11 can collect environment information to obtain point cloud data; each environment detector 11 collects its own point cloud data. The environment detector 11 moves together with the data acquisition equipment and collects environment information while the equipment moves. One frame of point cloud data is the data set obtained by one environment detection performed by the environment detector 11, and comprises the spatial coordinates of the detection points in the coordinate system of the environment detector 11. For a vision sensor, the pixels of the collected environment image form the point cloud data: each pixel corresponds to a detection point and can serve as a data point in the point cloud data. For a radar, one frame of point cloud data is the data set obtained by one full scanning revolution of the radar. The point cloud data are described below taking a laser radar as an example.
A laser radar can scan the surroundings of the data acquisition equipment's current position to obtain point cloud data. In this embodiment, the laser radar comprises a laser transmitter, an optical receiver and an information processing system. The laser transmitter emits a laser detection signal; when the detection signal encounters an obstacle, it is reflected as an optical echo signal, which the optical receiver receives. The information processing system can then obtain information about the target from the reflected optical echo signal, such as the distance from the laser radar to the target and the target's azimuth, height and shape.
In this embodiment, what the laser detection signal actually encounters is a point on an obstacle. For convenience of description and distinction, the obstacle point actually encountered by the laser detection signal during propagation is defined as a detection point. The detection point may be a point on the target object, or may belong to another object, such as dust in the air. Whenever a laser detection signal meets a detection point, a corresponding optical echo signal is returned, and the radar can obtain the distance between the laser radar and the detection point from the difference between the laser detection signal and the optical echo signal. Different kinds of laser detection signal lead to different ways of obtaining this distance.
For example, if the laser detection signal emitted by the laser radar is a laser pulse signal, the distance between the radar and the detection point may be calculated from the time difference between emitting the laser pulse signal and receiving the optical echo signal, i.e., by the time-of-flight method. Since the propagation speed of the laser pulse signal and the optical echo signal in the atmosphere is known, the distance follows from that speed and the measured time difference. As another example, if the laser detection signal emitted by the laser radar is a continuous optical signal, the distance may be calculated from the frequency difference between the emitted continuous signal and the received optical echo signal. Optionally, the continuous wave is a frequency-modulated continuous wave (FMCW); the frequency modulation may be triangular, sawtooth, coded or noise frequency modulation, but is not limited thereto.
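The time-of-flight calculation described above can be sketched as follows; the function and constant names are ours, not the patent's, and atmospheric propagation is approximated by the vacuum speed of light:

```python
# Illustrative sketch of time-of-flight ranging. The pulse travels to the
# detection point and back, so the one-way distance is half the round trip.
SPEED_OF_LIGHT = 299_792_458.0  # m/s; atmospheric speed approximated by c

def tof_range(t_emit_s: float, t_echo_s: float, c: float = SPEED_OF_LIGHT) -> float:
    """Distance from the laser radar to the detection point, in metres."""
    return c * (t_echo_s - t_emit_s) / 2.0
```

For instance, a round trip of 1 microsecond corresponds to roughly 150 m.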
Further, the spatial coordinate information of the detection points can be obtained from the laser detection signals emitted by the laser radar and the optical echo signals it receives, and the spatial coordinates of a number of detection points form the point cloud data; i.e., the point cloud data are a set of spatial coordinate points. Optionally, the data point corresponding to a detection point may be calculated from the distance between the laser radar and the detection point together with the pose of the laser radar, where the pose of the laser radar is its position and attitude; the attitude of the laser radar may refer to the direction of the laser beam emitted by the laser transmitter. From the direction of the emitted laser beam, the direction of the detection point relative to the laser radar can be obtained; then, from this direction, the distance between the laser radar and the detection point, and the position of the laser radar, the spatial coordinates of the detection point in the laser radar coordinate system can be calculated and used as the data point corresponding to the detection point in the point cloud data.
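The last step above — combining the beam direction with the measured distance to obtain the detection point's coordinates in the laser radar coordinate system — amounts to a spherical-to-Cartesian conversion. A minimal sketch, assuming an azimuth/elevation parameterization that the patent does not itself fix:

```python
import math

def beam_to_lidar_xyz(range_m: float, azimuth_rad: float, elevation_rad: float):
    # The beam's direction (azimuth and elevation of the emitted laser) plus
    # the measured range yield the detection point's coordinates in the
    # laser radar coordinate system.
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)
```

By construction the returned point lies at exactly `range_m` from the sensor origin, whatever the beam direction.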
In some embodiments, the data acquisition device only performs data collection and provides the collected point cloud data to other computer equipment for processing. In other embodiments, the data acquisition device itself has data processing capability. For example, it may be an autonomous mobile device such as an autonomous vehicle, a robot or a drone, which collects environment information and constructs an environment map while moving, and plans routes based on the constructed map. Of course, the data acquisition device may also be another mobile device, such as a vehicle driven by a human driver.
The following takes a data acquisition device with a data processing function as an example to illustrate the attitude optimization method provided by the embodiments of the present application.
As shown in fig. 1b, the data acquisition device S1 may further include: a memory 13 and a processor 14 disposed on the machine body 10. It should be noted that the memory 13 and the processor 14 may be disposed inside the machine body 10, or may be disposed on the surface of the machine body 10.
In the present embodiment, the memory 13 may store a computer program. The computer program may be executed by the processor 14 to cause the processor 14 to perform a corresponding function or to control the data acquisition device to perform a corresponding action or task, etc.
The processor 14 may be regarded as a control system of the data acquisition device and may be configured to execute a computer program stored in the memory 13 to control the data acquisition device to implement corresponding functions, perform corresponding actions or tasks, and so on.
In this embodiment, the processor 14 may acquire each frame of point cloud data collected by the plurality of environment detectors 11, where each environment detector 11 contributes its own frames of point cloud data. Considering that one frame of point cloud data is large and contains noise points, in order to reduce the amount of calculation and the noise, the processor 14 may perform feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors 11. The feature points obtained from the point cloud data collected by a given environment detector 11 are the feature points corresponding to that detector.
Optionally, the processor 14 may calculate the curvature of the data points from the point cloud coordinates of the data points in each frame of point cloud data. For any data point A, the processor 14 may calculate, from the point cloud coordinates of the data points in the frame to which A belongs, the distances between A and the other data points B of the same frame, and select from them the data points C whose distance to A meets a set requirement. For instance, the data points whose distance to A is smaller than or equal to a set distance threshold may be selected as the data points C, or a set number of data points may be selected in order of increasing distance to A. The curvature of data point A may then be calculated from the coordinates of A and of the selected data points C.
Based on the curvature of the data points in each frame of point cloud data, the processor 14 may obtain the data points whose curvature meets a set requirement from each frame and use them as the feature points corresponding to the environment detector 11 that collected the frame. Optionally, the processor 14 may obtain, from each frame of point cloud data, edge points whose curvature is greater than or equal to a set first curvature threshold, and/or plane points whose curvature is smaller than a set second curvature threshold, as the feature points corresponding to the environment detector 11 that collected the frame, where the first curvature threshold is greater than the second curvature threshold.
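A minimal sketch of the curvature-based selection just described. The neighbourhood size and the two thresholds are illustrative choices, not values from the patent, and the smoothness formula follows the common LOAM-style proxy rather than a formula the patent specifies:

```python
import math

def curvature(points, i, k=5):
    # LOAM-style smoothness proxy (an assumption; the patent does not fix a
    # formula): norm of the summed differences between point i and its k
    # neighbours on each side, normalised by neighbourhood size and range.
    xi, yi, zi = points[i]
    sx = sy = sz = 0.0
    for j in range(i - k, i + k + 1):
        if j == i:
            continue
        xj, yj, zj = points[j]
        sx += xi - xj
        sy += yi - yj
        sz += zi - zj
    num = math.sqrt(sx * sx + sy * sy + sz * sz)
    return num / (2 * k * math.sqrt(xi * xi + yi * yi + zi * zi))

def extract_features(points, k=5, edge_thresh=0.2, plane_thresh=0.05):
    # Edge points: curvature >= first threshold; plane points: curvature below
    # the (smaller) second threshold. Threshold values are illustrative.
    edges, planes = [], []
    for i in range(k, len(points) - k):
        c = curvature(points, i, k)
        if c >= edge_thresh:
            edges.append(i)
        elif c < plane_thresh:
            planes.append(i)
    return edges, planes
```

On a straight scan-line segment the symmetric differences cancel and the curvature is near zero (plane points), while a corner where two wall segments meet leaves a large unbalanced sum (edge point).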
Further, the plurality of environment detectors 11 are disposed on the machine body 10 at an angle to one another and can observe the same object at different moments. In fig. 3, the dotted lines represent the scans of the first environment detector 11a at times T1 and T2, and the solid lines represent the scans of the second environment detector 11b at times T1 and T2. As can be seen from fig. 3, the first environment detector 11a and the second environment detector 11b observe the object P1 at times T1 and T2, respectively, and observe the object P2 at times T2 and T1, respectively. Based on this, in this embodiment the processor 14 may match the feature points corresponding to the plurality of environment detectors 11 and determine the homonymous points among them. Homonymous points are data points, in the frames of point cloud data collected by the plurality of environment detectors 11, that correspond to the same physical point in the real world. In the characteristic diagram of fig. 4, showing point cloud data collected in a tunnel scene by 2 environment detectors, data points A and B are a pair of homonymous points. The point cloud data shown in fig. 4 may be obtained by scanning with dual single-line laser radars, but are not limited thereto.
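One plausible way to realise the matching step — nearest-neighbour search over feature points already expressed in a common coordinate system, gated by a distance threshold — is sketched below. Both the strategy and the threshold are illustrative assumptions; the patent does not commit to a particular matching algorithm:

```python
import math

def match_homonymous(points_a, points_b, max_dist=0.1):
    # For each feature point of detector A, take the nearest feature point of
    # detector B as its candidate homonymous point, provided the pair lies
    # within max_dist of each other in the common coordinate system.
    pairs = []
    for ia, pa in enumerate(points_a):
        best_ib, best_d = None, max_dist
        for ib, pb in enumerate(points_b):
            d = math.dist(pa, pb)
            if d <= best_d:
                best_ib, best_d = ib, d
        if best_ib is not None:
            pairs.append((ia, best_ib))
    return pairs
```

The brute-force double loop is O(n·m); a real implementation would use a k-d tree, but the gating logic is the same.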
Since a pair of homonymous points corresponds to the same physical point in the real world, when matching the feature points corresponding to the plurality of environment detectors 11, the processor 14 may transform the point cloud data collected by the different environment detectors 11 into the same set coordinate system, such as the world coordinate system. For any data point A in the point cloud data collected by any environment detector 11, its point cloud coordinates can be transformed into the set coordinate system using the following formula (1):
P_w = R_l^w · P_l + t_l^w (1)
Therefore, under ideal conditions, a pair of homonymous points A and B satisfies:
P_w = P_w0 (2)
In formula (1), P_l denotes the point cloud coordinates of data point A, i.e., the coordinates of A in the coordinate system of the environment detector that collected the frame; R_l^w and t_l^w denote the attitude matrix and the translation matrix between the environment detector coordinate system and the set coordinate system (e.g., the world coordinate system). In formulas (1) and (2), P_w denotes the coordinates of data point A in the set coordinate system, and P_w0 denotes the coordinates of B, the homonymous point of A, in the set coordinate system.
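The transformation of formula (1) and the ideal-case identity of formula (2) can be exercised numerically. In this sketch the attitude matrix is simplified to a yaw-only rotation, and the two detector poses are made-up values chosen so that both detectors see the same physical point:

```python
import math

def yaw_matrix(theta: float):
    # Rotation about the z axis -- a simplified stand-in for the full
    # attitude matrix R in formula (1).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def to_world(p_l, R, t):
    # Formula (1): P_w = R * P_l + t
    return tuple(sum(R[r][k] * p_l[k] for k in range(3)) + t[r] for r in range(3))

# Two detectors observe the same physical point W = (5, 5, 0):
# detector A is translated by (1, 0, 0) with no rotation; detector B is
# rotated 90 degrees about z with no translation.
p_a = (4.0, 5.0, 0.0)   # W expressed in detector A's coordinate system
p_b = (5.0, -5.0, 0.0)  # W expressed in detector B's coordinate system
w_a = to_world(p_a, yaw_matrix(0.0), (1.0, 0.0, 0.0))
w_b = to_world(p_b, yaw_matrix(math.pi / 2), (0.0, 0.0, 0.0))
# With exact poses the homonymous pair coincides, as in formula (2).
```

Any error in the recorded attitudes would show up as a gap between `w_a` and `w_b`; that gap is exactly what the optimization later minimizes.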
In some embodiments, as shown in fig. 1b, an integrated navigation module 15 is disposed on the machine body 10 of the data acquisition device S1. The integrated navigation module 15 can acquire the pose (i.e., position and attitude) of the environment detector 11, and may include an inertial measurement unit (IMU), a positioning unit, a wheel speed meter and the like, but is not limited thereto. The pose of the environment detector 11 acquired by the integrated navigation module 15 takes the form of an attitude matrix and a translation matrix between the IMU coordinate system and the world coordinate system. Therefore, to reduce the amount of calculation, the point cloud data collected by the environment detector 11 may be transformed into the IMU coordinate system, where the transformation formula may be expressed as:
P_i = R_l^i · P_l + t_l^i (3)
In formula (3), P_l denotes the point cloud coordinates of data point A, i.e., the coordinates of A in the coordinate system of the environment detector that collected the frame; P_i denotes the coordinates of data point A in the IMU coordinate system; R_l^i and t_l^i denote the attitude matrix and translation matrix between the environment detector coordinate system and the IMU coordinate system. Since the relative position between the rotation plane of the environment detector 11 and the integrated navigation module 15 is fixed, R_l^i and t_l^i can be obtained by calibration in advance.
With the point cloud data collected by the environment detectors 11 transformed into the IMU coordinate system by formula (3), when matching the feature points corresponding to the plurality of environment detectors 11, the coordinates of the data points collected by the different environment detectors 11 in the IMU coordinate system may be transformed into the same set coordinate system, such as the world coordinate system. For any data point A, its coordinates can be transformed into the set coordinate system using the following formula (4):
P_w = R_i^w · P_i + t_i^w (4)
In formula (4), R_i^w and t_i^w denote the attitude matrix and translation matrix between the IMU coordinate system and the set coordinate system, such as the world coordinate system.
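Formulas (3) and (4) compose: a detector-frame point first passes through the pre-calibrated detector-to-IMU extrinsics, then through the pose recorded by the integrated navigation module. A sketch with illustrative identity matrices and translations (not values from the patent):

```python
def matvec(R, p):
    # 3x3 matrix times 3-vector.
    return tuple(sum(R[r][k] * p[k] for k in range(3)) for r in range(3))

def vadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def lidar_to_world(p_l, R_i_l, t_i_l, R_w_i, t_w_i):
    # Formula (3): into the IMU frame via the calibrated extrinsics, then
    # formula (4): into the set (world) frame via the pose recorded by the
    # integrated navigation module.
    p_i = vadd(matvec(R_i_l, p_l), t_i_l)
    return vadd(matvec(R_w_i, p_i), t_w_i)

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p_w = lidar_to_world((1.0, 2.0, 3.0), I3, (0.5, 0.0, 0.0), I3, (10.0, 0.0, 0.0))
```

With identity rotations the two translations simply accumulate, which makes the composition easy to check by hand.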
Under theoretical conditions, a pair of homonymous points A and B satisfies:

R_{i,a}^w · P_i + t_{i,a}^w = R_{i,b}^w · P_i' + t_{i,b}^w    (5)

In formula (5), P_i and P_i' represent the coordinates of data point A and of its homonymous point B in the respective IMU coordinate systems. The first environment detector 11a is the environment detector that collected data point A; the second environment detector 11b is the environment detector that collected the homonymous point B of data point A. In formula (5), R_{i,a}^w and t_{i,a}^w are respectively the attitude and position (namely the pose) of the first environment detector 11a in the process of collecting data point A, namely the attitude matrix and translation matrix between the IMU coordinate system corresponding to the first environment detector 11a and the set coordinate system (such as the world coordinate system); R_{i,b}^w and t_{i,b}^w are respectively the attitude and position (namely the pose) of the second environment detector 11b in the process of collecting the homonymous point B of data point A, namely the attitude matrix and translation matrix between the IMU coordinate system corresponding to the second environment detector 11b and the set coordinate system (such as the world coordinate system). The left and right sides of the equal sign in formula (5) convert P_i and P_i' into the same set coordinate system, such as the world coordinate system; that is, the coordinates of the homonymous points A and B in the same set coordinate system are equal, forming the same point.
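The consistency stated by formula (5) can be checked numerically: choose one physical point, express it in the IMU frames of two hypothetical detector poses, and transform both expressions back into the set frame. All concrete values below are made up for illustration.

```python
import numpy as np

# A physical point in the set (world) frame, observed by two detectors.
X_w = np.array([2.0, -1.0, 0.5])

c, s = np.cos(0.4), np.sin(0.4)
R_a, t_a = np.eye(3), np.array([1.0, 0.0, 0.0])      # pose of detector A's IMU frame
R_b = np.array([[c, -s, 0.0],
                [s,  c, 0.0],
                [0.0, 0.0, 1.0]])
t_b = np.array([0.0, 2.0, 0.0])                      # pose of detector B's IMU frame

P_i  = R_a.T @ (X_w - t_a)   # coordinates of data point A in A's IMU frame
P_i2 = R_b.T @ (X_w - t_b)   # coordinates of homonymous point B in B's IMU frame

lhs = R_a @ P_i + t_a        # left side of formula (5)
rhs = R_b @ P_i2 + t_b       # right side of formula (5)
```

With error-free poses the two sides coincide with the original physical point; the optimization described below exploits the fact that erroneous attitudes break this equality.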
Based on the above analysis, in the present embodiment, the integrated navigation module 15 can acquire the pose information of the plurality of environment detectors 11 in the process of acquiring the point cloud data, and the processor 14 may obtain this pose information from the integrated navigation module 15.
When matching the feature points corresponding to the plurality of environment detectors 11, the processor 14 may match the feature points in every frame of point cloud data acquired by the plurality of environment detectors. However, this is computationally expensive. Considering that frames of point cloud data whose acquisition positions are far apart cover different geographic areas, the matching degree between feature points of point cloud data acquired for different geographic areas, or for different sub-areas of the same geographic area, is low. Based on this, in order to reduce the amount of calculation, the processor 14 may acquire the pose information of the plurality of environment detectors 11 in the process of acquiring point cloud data; calculate the distance between the acquisition positions of the point cloud data corresponding to different environment detectors according to the position information in the pose information; and select, from the point cloud data corresponding to different environment detectors, target point cloud data whose acquisition positions are separated by no more than a set distance threshold. The point cloud data corresponding to an environment detector is the point cloud data collected by that environment detector.
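A minimal sketch of this pre-selection step, assuming the recorded acquisition positions of the frames are available as arrays (all names illustrative): frames from two detectors are paired only when their acquisition positions lie within the set distance threshold.

```python
import numpy as np

def select_target_frames(positions_a, positions_b, dist_thresh):
    """Return index pairs (i, j) of frames from detector A and detector B whose
    acquisition positions are at most dist_thresh apart; only these frame
    pairs are passed on to feature-point matching."""
    pairs = []
    for i, pos in enumerate(positions_a):
        close = np.linalg.norm(positions_b - pos, axis=1) <= dist_thresh
        pairs.extend((i, int(j)) for j in np.flatnonzero(close))
    return pairs
```

This reduces the matching workload from all frame pairs to only those pairs likely to observe the same geographic sub-area.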
Further, target feature points belonging to the target point cloud data may be acquired from the feature points corresponding to the plurality of environment detectors 11; and the position information of the target feature points in the set coordinate system may be calculated according to the pose information of the plurality of environment detectors in the process of acquiring the target feature points and the point cloud coordinates of the target feature points. In this embodiment, the point cloud coordinates of a target feature point can be converted into coordinates in the IMU coordinate system by using the above formula (3); further, the position information of the target feature point in the set coordinate system is calculated according to the above formula (4). In this calculation, R_i^w and t_i^w in formula (4) are respectively the attitude and position information of the environment detector in the process of acquiring the target feature point.
Further, the processor 14 may determine the homonymous points among the target feature points corresponding to the plurality of environment detectors 11 according to the position information of the target feature points in the set coordinate system.
Optionally, the processor 14 may calculate the distances between target feature points corresponding to different environment detectors 11 according to the position information of the target feature points in the set coordinate system, and determine target feature points whose mutual distance is less than or equal to a set distance threshold as homonymous points among the target feature points corresponding to different environment detectors 11. Alternatively, the processor 14 may calculate the included angle between the normal vectors of target feature points corresponding to different environment detectors according to the position information of the target feature points corresponding to the plurality of environment detectors 11 in the set coordinate system, and determine target feature points whose normal vectors form an included angle less than or equal to a set angle threshold as homonymous points among the target feature points corresponding to different environment detectors. Alternatively, the processor 14 may calculate both the distances between the target feature points and the included angles between their normal vectors, and determine target feature points whose mutual distance is less than or equal to the set distance threshold and whose normal vectors form an included angle less than or equal to the set angle threshold as homonymous points among the target feature points corresponding to different environment detectors.
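The third variant (both criteria combined) can be sketched as follows; point coordinates are assumed to already be in the set coordinate system and normals to be unit vectors (all names illustrative):

```python
import numpy as np

def match_homonymous_points(pts_a, nrm_a, pts_b, nrm_b, dist_thresh, angle_thresh_deg):
    """Pair target feature points of two detectors: keep (i, j) as homonymous
    points when their set-frame distance is within dist_thresh AND the included
    angle between their normal vectors is within angle_thresh_deg."""
    cos_min = np.cos(np.deg2rad(angle_thresh_deg))
    matches = []
    for i, (p, n) in enumerate(zip(pts_a, nrm_a)):
        close = np.linalg.norm(pts_b - p, axis=1) <= dist_thresh
        for j in np.flatnonzero(close):
            # |cos| >= cos(threshold) means a small included angle between normals
            if abs(np.dot(n, nrm_b[j])) >= cos_min:
                matches.append((i, int(j)))
    return matches
```

Combining both thresholds rejects nearby points that lie on differently oriented surfaces, which a distance check alone would wrongly accept.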
After determining the homonymous points among the feature points corresponding to the plurality of environment detectors 11, the processor 14 may correct the attitudes of the plurality of environment detectors 11 in the process of acquiring the point cloud data according to the associated information of those homonymous points.
Wherein, the associated information of the same name point may include: point cloud coordinates of the same-name points, pose information recorded by the plurality of environment detectors 11 in the process of collecting the same-name points, and the like. The pose information recorded by the environment detectors 11 in the process of acquiring the same-name point may be pose information acquired by the integrated navigation module 15 by the environment detectors 11 in the process of acquiring point cloud data.
Accordingly, the processor 14 may correct the attitudes of the plurality of environment detectors 11 in the process of acquiring the point cloud data according to the point cloud coordinates of the homonymous points corresponding to the plurality of environment detectors 11 and the pose information of the plurality of environment detectors in the process of acquiring the homonymous points. Since a pair of homonymous points A and B satisfies the above formula (5) under theoretical conditions, the coordinates of such a pair in the same set coordinate system are theoretically identical. Therefore, with the goal of minimizing the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors 11 in the same coordinate system, the attitude information recorded in the process of acquiring the point cloud data can be corrected according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors 11 in the process of acquiring the homonymous points. In this way, when the position information of the point cloud data in the set coordinate system is calculated using the corrected attitude information, the calculated position information is as close as possible to the actual position information, improving the accuracy of the determined position information.
Optionally, the processor 14 may obtain pose information recorded by the plurality of environment detectors 11 in the process of acquiring the point cloud data from the pose information of the plurality of environment detectors 11 acquired by the integrated navigation module in the process of acquiring the point cloud data; and calculating the attitude correction quantity for attitude optimization of the plurality of environment detectors 11 according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the attitude information recorded by the plurality of environment detectors in the process of collecting the homonymous points.
In this embodiment, the specific implementation of calculating the attitude correction amount for performing attitude optimization on the plurality of environment detectors 11 according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors in the process of acquiring the homonymous points is not limited. Alternatively, the point cloud coordinates of the homonymous points may be converted into coordinates in the IMU coordinate system according to the above formula (3), and the attitude correction amount for performing attitude optimization on the plurality of environment detectors 11 may be calculated from the coordinates of the homonymous points in the IMU coordinate system. In some embodiments, since a pair of homonymous points A and B satisfies the above formula (5) under theoretical conditions, i.e. their coordinates in the same set coordinate system are theoretically identical, the calculated attitude correction amount should make the difference between the coordinates of the homonymous points A and B in the same coordinate system after attitude optimization as small as possible.
Based on the above analysis, with the attitude correction parameter as the quantity to be solved, the coordinate expression of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be determined according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors in the process of collecting the homonymous points. Specifically, based on the above formula (3), the point cloud coordinates of a homonymous point in each frame of point cloud data can be converted into its coordinates in the IMU coordinate system; further, based on formula (4), with the attitude correction parameter ΔR as the quantity to be solved and the coordinates of the homonymous point in the IMU coordinate system together with the recorded pose information as known quantities, the coordinate expression of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system is:

P_w = ΔR · R_i^w · P_i + t_i^w    (6.1)

In formula (6.1), ΔR represents the attitude correction parameter to be solved; P_w represents the coordinates of the homonymous point in the set coordinate system.
Further, a mathematical model reflecting the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be constructed according to the coordinate expression of those homonymous points in the set coordinate system. The solution model of the mathematical model can be expressed as:

ΔR = argmin_{ΔR} Σ_{k=1..n} ‖P_w^{a,k} − P_w^{b,k}‖²    (6.2)

In formula (6.2), ΔR represents the attitude correction parameter to be solved; n represents the total number of pairs of homonymous points in the point cloud data collected by the first environment detector 11a and the second environment detector 11b; k denotes the k-th pair of homonymous points, k = 1, 2, ..., n; P_w^{a,k} and P_w^{b,k} are the coordinate expressions, per formula (6.1), of the k-th pair of homonymous points in the set coordinate system; and argmin f(x) denotes the value of the argument x at which the function f(x) takes its minimum value. Accordingly, formula (6.2) yields the value of ΔR at which the function Σ_{k=1..n} ‖P_w^{a,k} − P_w^{b,k}‖² takes its minimum. In formula (6.2), ‖P_w^{a,k} − P_w^{b,k}‖ represents the distance between the coordinates of the k-th pair of homonymous points in the set coordinate system, and ‖P_w^{a,k} − P_w^{b,k}‖² the square of that distance. Therefore, Σ_{k=1..n} ‖P_w^{a,k} − P_w^{b,k}‖², the sum of the squared distances between the coordinates of the n pairs of homonymous points in the set coordinate system, is the mathematical model reflecting the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors 11 in the set coordinate system.
Since a pair of homonymous points corresponds to the same point in the set coordinate system under theoretical conditions, the smaller the sum of the squared distances between the coordinates of the n pairs of homonymous points in the set coordinate system, the higher the accuracy of the coordinates of the point cloud data in the set coordinate system calculated with the attitude correction amount ΔR. Therefore, with the goal of minimizing the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system, the mathematical model reflecting that distance, namely formula (6.2), can be solved to obtain the value of ΔR, i.e. the attitude correction amount.
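Because formula (6.1) is linear in ΔR, one way to solve formula (6.2) is in closed form: substituting (6.1) for both detectors makes each residual ΔR·v_k + w_k, which reduces the minimization to a Wahba/Kabsch-type problem solvable by an SVD. A general nonlinear least-squares solver would work equally well. The sketch below illustrates the closed-form route under that reduction; all names are illustrative, not from the patent.

```python
import numpy as np

def solve_attitude_correction(Pi_a, R_a, t_a, Pi_b, R_b, t_b):
    """Solve formula (6.2) for dR. Substituting formula (6.1) for both detectors,
    the k-th residual is dR @ v_k + w_k with
        v_k = R_a[k] @ Pi_a[k] - R_b[k] @ Pi_b[k],   w_k = t_a[k] - t_b[k].
    Minimizing sum_k ||dR v_k + w_k||^2 over rotations is equivalent to
    maximizing sum_k (-w_k)^T dR v_k, a Wahba problem solved by SVD (Kabsch)."""
    V = np.array([Ra @ pa - Rb @ pb
                  for pa, Ra, pb, Rb in zip(Pi_a, R_a, Pi_b, R_b)])
    W = np.array([tb - ta for ta, tb in zip(t_a, t_b)])  # rows are -w_k
    B = W.T @ V                                          # sum_k (-w_k) v_k^T
    U, _, Vt = np.linalg.svd(B)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])       # keep det(dR) = +1
    return U @ D @ Vt
```

With at least two non-parallel v_k the minimizer is unique; in practice all n pairs of homonymous points contribute to B.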
Further, after the attitude correction amount ΔR is obtained, it may be used to correct the attitudes R_i^w recorded by the plurality of environment detectors 11 in the process of collecting the point cloud data, obtaining the optimized attitude information of the plurality of environment detectors in the process of acquiring the point cloud data. The optimization formula can be expressed as:

R̃_i^w = ΔR · R_i^w    (7)

In formula (7), R_i^w represents the attitude recorded by the environment detector, as acquired by the integrated navigation module when the i-th data point was collected, namely the attitude before optimization; R̃_i^w represents the optimized attitude of the environment detector at the time of acquisition of the i-th data point.
The data acquisition equipment provided by the embodiment can utilize the characteristic that different environment detectors can scan the same object at different moments to perform feature extraction on point cloud data acquired by a plurality of environment detectors to obtain feature points in the point cloud data; matching the characteristic points corresponding to the plurality of environment detectors to obtain homonymy points in the characteristic points corresponding to the plurality of environment detectors; and correcting the postures of the environment detectors in the process of acquiring point cloud data according to the point cloud coordinates of the same-name points corresponding to the environment detectors and the pose information of the environment detectors in the process of acquiring the same-name points, so that the postures of the environment detectors are optimized, and the improvement of the posture accuracy of the environment detectors is facilitated.
After correcting the attitudes recorded by the environment detectors 11 in the process of acquiring the point cloud data, the processor 14 may further calculate the spatial information of the point cloud data acquired by the environment detectors 11 in the set coordinate system according to the optimized attitude information of the environment detectors 11 and the point cloud coordinates in the point cloud data. The spatial information of the point cloud data may be the distribution of the position coordinates of the point cloud data in the set coordinate system.
Optionally, the optimized attitude information of the environment detector 11 and the point cloud coordinates corresponding to the point cloud data may be substituted into the following formula (8) to obtain the coordinates of the point cloud data in the set coordinate system. Formula (8) may be expressed as:

P_w = R̃_i^w · P_i + t_i^w    (8)
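The two final steps can be sketched together (names illustrative): formula (7) left-multiplies the recorded attitude by the solved correction, and formula (8) maps an IMU-frame point into the set frame with the optimized attitude.

```python
import numpy as np

def optimized_world_coords(P_i, R_i_w, t_i_w, dR):
    """Apply formula (7), then formula (8), to one IMU-frame point."""
    R_opt = dR @ R_i_w            # formula (7): optimized attitude
    return R_opt @ P_i + t_i_w    # formula (8): point in the set frame
```

Running this over every data point of every frame yields the corrected spatial distribution of the point cloud in the set coordinate system.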
for the point cloud data of the tunnel scene shown in fig. 4, the attitude information before optimization and the point cloud coordinates corresponding to the point cloud data are used to obtain an effect graph of the spatial information of the point cloud data in the set coordinate system, as shown in the left graph of fig. 5, and the attitude information after environment detector optimization and the point cloud coordinates corresponding to the point cloud data are used to obtain an effect graph of the spatial information of the point cloud data in the set coordinate system, as shown in the right graph of fig. 5.
From the comparison of the scene detection effects before and after the attitude optimization of the environment detectors shown in fig. 5, it can be seen that the tunnel shape obtained after optimization is reconstructed with higher accuracy and restores the real scene more faithfully.
Further, the processor 14 may also construct an electronic map according to the spatial information of the point cloud data acquired by the environment detectors 11 in the set coordinate system. Because the optimized attitudes of the environment detectors 11 are used, the spatial information of the point cloud data in the set coordinate system has higher precision, and the constructed electronic map is accordingly more precise.
For the autonomous mobile device, after the electronic map is constructed, autonomous route planning can be performed on the basis of the electronic map, which is beneficial to improving the precision of route planning, so that the accuracy of subsequent navigation is improved.
It should be noted that data acquisition devices differ in implementation form, and accordingly differ in the basic components they comprise and in the structures of those components; the embodiments of the present application are only examples. This does not mean that a data acquisition device must include all the components shown in fig. 1a and 1b, nor that it can only include the components shown in fig. 1a and 1b.
It should also be noted that the data processing method executed by the data acquisition device can also be implemented by other computer devices. The following provides an exemplary description of a data processing method provided in an embodiment of the present application.
Fig. 6 is a schematic flowchart of a posture optimization method provided in the embodiment of the present application. As shown in fig. 6, the method includes:
601. and acquiring point cloud data of each frame acquired by a plurality of environment detectors.
602. And extracting the characteristics of each frame of point cloud data to obtain the characteristic points corresponding to the plurality of environment detectors from each frame of point cloud data.
603. And matching the characteristic points corresponding to the plurality of environment detectors to determine the homonymous points in the characteristic points corresponding to the plurality of environment detectors.
604. And acquiring pose information recorded in the process of acquiring the same-name points by the plurality of environment detectors.
605. With the goal of minimizing the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the same coordinate system, correct the attitude information recorded by the plurality of environment detectors in the process of acquiring point cloud data according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors in the process of acquiring the homonymous points.
In this implementation, a plurality of environment detectors are disposed on the data acquisition device at a certain angle. A plurality of environment detectors can acquire environment information to obtain point cloud data. For a description of the implementation and arrangement of the environment sensor, reference may be made to the relevant contents of the above embodiments.
The plurality of environment detectors can move along with the data acquisition equipment and collect environment information during its movement to obtain point cloud data. A frame of point cloud data is the data set obtained by one environment detection performed by an environment detector, and includes the spatial coordinates of the detected points in the environment detector coordinate system.
In this embodiment, in order to optimize the pose of the environment detector, in step 601, each frame of point cloud data collected by a plurality of environment detectors may be acquired. Each environment detector corresponds to one frame of point cloud data. In order to reduce the amount of calculation and noise, in step 602, feature extraction may be performed on each frame of point cloud data to obtain feature points corresponding to a plurality of environmental detectors from each frame of point cloud data, considering that the data amount of one frame of point cloud data is large and there are noise points. The characteristic point corresponding to each environment detector is a characteristic point obtained from point cloud data acquired by the environment detector.
Alternatively, the curvature of the data points in each frame of point cloud data may be calculated from the point cloud coordinates of the data points in each frame of point cloud data. For the calculation manner of the curvature of the data point, reference may be made to the related contents of the above embodiments, and details are not repeated herein.
Furthermore, data points with curvatures meeting set requirements can be acquired from each frame of point cloud data and used as characteristic points corresponding to the environment detector for acquiring the frame of point cloud data. Optionally, an edge point with a curvature greater than or equal to a set first curvature threshold value can be acquired from each frame of point cloud data and used as a feature point corresponding to an environment detector for acquiring the frame of point cloud data; and/or acquiring a plane point with a curvature smaller than a set second curvature threshold value from each frame of point cloud data, and taking the plane point as a characteristic point corresponding to an environment detector for acquiring the frame of point cloud data; wherein the first curvature threshold is greater than the second curvature threshold.
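As a sketch of this selection step (the curvature formula itself is defined earlier in the patent; the proxy used here, the deviation of a point from the centroid of its scan-line neighborhood, is an assumption for illustration only):

```python
import numpy as np

def extract_feature_points(points, first_thresh, second_thresh, k=5):
    """Classify data points of one frame by a curvature proxy computed from the
    k points on either side along the scan line: edge points have curvature
    >= first_thresh, plane points have curvature < second_thresh
    (with first_thresh > second_thresh)."""
    edge_idx, plane_idx = [], []
    for i in range(k, len(points) - k):
        nbrs = points[i - k: i + k + 1]
        # distance of the point from its neighborhood centroid, normalized by range
        c = np.linalg.norm(points[i] - nbrs.mean(axis=0)) / np.linalg.norm(points[i])
        if c >= first_thresh:
            edge_idx.append(i)
        elif c < second_thresh:
            plane_idx.append(i)
    return edge_idx, plane_idx
```

Points whose proxy falls between the two thresholds are simply discarded, which also filters out ambiguous or noisy returns.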
Furthermore, because a plurality of environment detectors are arranged on the data acquisition equipment at a certain angle, the same object can be observed by the plurality of environment detectors at different moments. Based on this, in step 603, the feature points corresponding to the plurality of environment detectors may be matched, and the homologous points among the feature points corresponding to the plurality of environment detectors may be determined. The homonymous point refers to a data point corresponding to the same physical point of the real world in the point cloud data acquired by the plurality of environment detectors.
Since the physical points of the pair of homologous points in the real world are the same physical point, when matching the feature points corresponding to the plurality of environment detectors, the point cloud data collected by different environment detectors may be converted into the same set coordinate system, such as a world coordinate system, and the conversion process may refer to the relevant content of formula (1), which is not described herein again. Under ideal conditions, the coordinates in the same set coordinate system are equal for a pair of homologous points.
In some embodiments, a combined navigation module is disposed on the data acquisition device. The integrated navigation module may acquire the pose (i.e., position and attitude) of the environmental probe. The integrated navigation module may include: inertial sensors (IMU), positioning units, wheel speed meters, and the like, but are not limited thereto. The pose of the environment detector acquired by the integrated navigation module is a posture matrix and a translation matrix between an IMU coordinate system and a world coordinate system. Therefore, in order to reduce the amount of calculation, the point cloud data collected by the environment detector may be converted into the IMU coordinate system, and the specific conversion process may be referred to in relation to equation (3) above.
Based on the above formula (3), the point cloud data collected by the environment detectors can be converted into an IMU coordinate system, and when matching the feature points corresponding to the plurality of environment detectors, the coordinates of the data points collected by the different environment detectors in the IMU coordinate system can be converted into the same set coordinate system, and if the world coordinate system is adopted, the point cloud coordinates of any data point can be converted into the set coordinate system by using the above formula (4). Based on the above analysis, in this embodiment, the integrated navigation module can acquire pose information of the plurality of environment detectors in the process of acquiring the point cloud data. Accordingly, before step 603, pose information recorded by a plurality of environmental detectors during the process of acquiring point cloud data may be acquired. The position and pose information recorded by the environment detectors in the process of acquiring the point cloud data can be the position and pose information acquired by the integrated navigation module in the process of acquiring the point cloud data by the environment detectors.
When matching the feature points corresponding to a plurality of environment detectors, the feature points in the point cloud data collected by different environment detectors can all be matched against each other. However, this is computationally expensive. Considering that frames of point cloud data whose acquisition positions are far apart cover different geographic areas, the matching degree between feature points of point cloud data acquired for different geographic areas, or for different sub-areas of the same geographic area, is low. Based on this, in order to reduce the amount of calculation, the pose information recorded by the plurality of environment detectors in the process of collecting point cloud data can be acquired; the distance between the acquisition positions of the point cloud data corresponding to different environment detectors can be calculated according to the position information in the pose information; and target point cloud data whose acquisition positions are separated by no more than a set distance threshold can be selected from the point cloud data corresponding to different environment detectors. The point cloud data corresponding to an environment detector is the point cloud data collected by that environment detector.
Further, target feature points belonging to the target point cloud data can be obtained from the feature points corresponding to the plurality of environment detectors, and the position information of the target feature points in the set coordinate system can be calculated according to the pose information recorded by the plurality of environment detectors in the process of collecting the target feature points and the point cloud coordinates of the target feature points. In this embodiment, the point cloud coordinates of a target feature point can be converted into coordinates in the IMU coordinate system by using the above formula (3); further, the position information of the target feature point in the set coordinate system is calculated according to the above formula (4). In this calculation, R_i^w and t_i^w in formula (4) are respectively the attitude and position information recorded by the environment detector in the process of collecting the target feature point.
Further, the same-name points in the target characteristic points corresponding to the plurality of environment detectors can be determined according to the position information of the target characteristic points in the set coordinate system.
Optionally, the distances between target feature points corresponding to different environment detectors can be calculated according to the position information of the target feature points in the set coordinate system, and target feature points whose mutual distance is less than or equal to a set distance threshold can be determined as homonymous points among the target feature points corresponding to different environment detectors. Alternatively, the included angle between the normal vectors of target feature points corresponding to different environment detectors can be calculated according to the position information of the target feature points corresponding to the plurality of environment detectors in the set coordinate system, and target feature points whose normal vectors form an included angle less than or equal to a set angle threshold can be determined as homonymous points among the target feature points corresponding to different environment detectors. Alternatively, both the distances between the target feature points and the included angles between their normal vectors can be calculated, and target feature points whose mutual distance is less than or equal to the set distance threshold and whose normal vectors form an included angle less than or equal to the set angle threshold can be determined as homonymous points among the target feature points corresponding to different environment detectors.
After the homonymous points among the feature points corresponding to the plurality of environment detectors are determined, in step 604, the pose information recorded by the plurality of environment detectors in the process of acquiring the homonymous points may be acquired. Optionally, this pose information may be obtained from the pose information collected by the integrated navigation module while the plurality of environment detectors acquire the point cloud data.
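Looking up the pose recorded at the moment a homonymous point was acquired might look like the sketch below. Linear interpolation of position between the two surrounding integrated-navigation samples is an assumption — the patent only says the pose is taken from the module's records — and rotation interpolation (e.g. slerp) is omitted for brevity.

```python
import numpy as np

def pose_at_time(t, pose_times, positions):
    """Return the position recorded nearest the acquisition time t.

    pose_times is a sorted 1-D array of integrated-navigation timestamps;
    positions is the matching (N, 3) array of recorded positions.  Times
    outside the recorded range clamp to the first/last sample.
    """
    i = int(np.searchsorted(pose_times, t))
    if i == 0:
        return positions[0]
    if i >= len(pose_times):
        return positions[-1]
    # Linear blend between the two surrounding navigation samples.
    w = (t - pose_times[i - 1]) / (pose_times[i] - pose_times[i - 1])
    return (1 - w) * positions[i - 1] + w * positions[i]
```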
Further, in step 605, with the goal of minimizing the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the same coordinate system, the attitude information recorded by the plurality of environment detectors in the process of acquiring the point cloud data is optimized according to the point cloud coordinates of the homonymous points and the pose information recorded in the process of acquiring the homonymous points, so as to obtain optimized attitudes of the plurality of environment detectors in the process of acquiring the point cloud data.
Optionally, with the attitude deviation correction parameter as the quantity to be solved, coordinate expressions of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be determined according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors in the process of acquiring the homonymous points. For a specific implementation, reference may be made to the content of the above formula (6.1), which is not described herein again.
Further, a mathematical model reflecting the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be constructed from those coordinate expressions; then, with the goal of minimizing that distance, the mathematical model is solved to obtain the value of the attitude deviation correction parameter, namely the attitude deviation correction amount. For the specific implementation and principles, reference may be made to the related content of the above formula (6.2), which is not described herein again.
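Since formulas (6.1) and (6.2) are not reproduced in this text, the sketch below substitutes a generic closed-form solution of the same kind of problem: finding a rotation ΔR that minimizes the summed squared distances between homonymous-point coordinates from two detectors in the set coordinate system, via the Kabsch/SVD method. The choice of Kabsch, and solving for a single rotation-only correction, are assumptions made for illustration.

```python
import numpy as np

def solve_attitude_correction(src, dst):
    """Closed-form rotation dR minimizing sum ||dR @ src_k - dst_k||^2.

    src and dst are (N, 3) arrays of homonymous-point coordinates in the
    set coordinate system.  This is the standard Kabsch/SVD solution of a
    rotation-only least-squares problem; the patent's own formulation is
    not reproduced here, so treat this as a generic sketch.
    """
    # Center both point sets so only the rotational part remains.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance matrix; its SVD yields the optimal rotation.
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
```

The resulting ΔR is then applied to the recorded attitudes, as described next in the text.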
Further, after the attitude deviation correction amount ΔR is obtained, it can be used to correct the attitude information recorded by the plurality of environment detectors in the process of acquiring each frame of point cloud data, so as to obtain optimized attitude information for each frame. The optimization formula can be referred to the above formula (7).
In this embodiment, the characteristic that different environment detectors can scan the same object at different times is exploited: feature extraction is performed on the point cloud data acquired by a plurality of environment detectors to obtain feature points; the feature points corresponding to the plurality of environment detectors are matched to obtain homonymous points among them; and, with the goal of minimizing the distance between the coordinates of these homonymous points in the same coordinate system, the attitudes of the plurality of environment detectors during point cloud data acquisition are optimized according to the point cloud coordinates of the homonymous points and the pose information recorded in the process of acquiring them. This optimizes the attitudes of the environment detectors and helps improve their attitude accuracy.
After the attitude of the environment detectors is optimized, the spatial information of the point cloud data acquired by the environment detectors under a set coordinate system can be calculated according to the optimized attitude information of the environment detectors in the process of acquiring the point cloud data and the point cloud coordinates in the point cloud data. The spatial information of the point cloud data may be a position coordinate distribution of the point cloud data in a set coordinate system. For a specific embodiment of calculating spatial information of point cloud data acquired by a plurality of environment detectors in a set coordinate system, reference may be made to the related content of the above equation (8).
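Computing the spatial information of a frame in the set coordinate system amounts to a rigid transform of each point by the frame's optimized pose. Formula (8) is not reproduced here, so the row-vector convention in this sketch is an assumption.

```python
import numpy as np

def points_to_set_frame(points, rotation, translation):
    """Map detector-frame point cloud coordinates into the set coordinate
    system using the optimized pose (R, t) recorded for that frame.

    Implements the generic rigid transform p_w = R @ p + t for an (N, 3)
    array of row-vector points.
    """
    return points @ rotation.T + translation
```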
Furthermore, an electronic map can be constructed according to the spatial information of the point cloud data acquired by the environment detectors in the set coordinate system. Because the attitudes of the environment detectors have been optimized, the spatial information of the acquired point cloud data in the set coordinate system is more accurate, and so is the constructed electronic map.
For the autonomous mobile device, after the electronic map is constructed, autonomous route planning can be performed on the basis of the electronic map, which is beneficial to improving the precision of route planning, so that the accuracy of subsequent navigation is improved.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps 601 and 602 may be device a; for another example, the execution subject of step 601 may be device a, and the execution subject of step 602 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 601, 602, etc., are merely used for distinguishing different operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-described pose optimization method.
An embodiment of the present application further provides a computer program product, including: a computer program. Wherein the computer program product is executable by a processor to implement the above-described method of pose optimization. The computer program product provided by the embodiment may be map data making software and the like.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device may be a server-side device such as a server or a cloud server array, or a terminal device such as a personal computer. As shown in fig. 7, the computer device includes: a memory 70a and a processor 70b; the memory 70a is used for storing a computer program.
The processor 70b is coupled to the memory 70a for executing a computer program for: acquiring point cloud data of each frame acquired by a plurality of environment detectors; extracting the characteristics of each frame of point cloud data to obtain characteristic points corresponding to a plurality of environment detectors from each frame of point cloud data; matching the characteristic points corresponding to the plurality of environment detectors to determine homonymous points in the characteristic points corresponding to the plurality of environment detectors; the same-name point is a data point corresponding to the same physical point of the real world in the point cloud data acquired by the plurality of environment detectors; acquiring pose information recorded in the process of acquiring the same-name points by the plurality of environment detectors; and the distance between the coordinates of the corresponding homonymous points of the plurality of environment detectors under the same coordinate is minimized as a target, and the attitude information recorded in the process of acquiring the point cloud data by the plurality of environment detectors is corrected according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded in the process of acquiring the homonymous points by the plurality of environment detectors.
Optionally, when the processor 70b performs feature extraction on each frame of point cloud data, it is specifically configured to: calculating the curvature of the data points in each frame of point cloud data according to the point cloud coordinates of the data points in each frame of point cloud data; and acquiring data points with curvatures meeting set requirements from each frame of point cloud data, and using the data points as characteristic points corresponding to an environment detector for acquiring the frame of point cloud data.
Optionally, when the processor 70b obtains a data point with a curvature meeting the setting requirement from each frame of point cloud data, it is specifically configured to: acquiring edge points with the curvature greater than or equal to a set first curvature threshold value from each frame of point cloud data, and taking the edge points as characteristic points corresponding to an environment detector for acquiring the frame of point cloud data; and/or acquiring a plane point with a curvature smaller than a set second curvature threshold value from each frame of point cloud data, and taking the plane point as a characteristic point corresponding to an environment detector for acquiring the frame of point cloud data; the first curvature threshold is greater than the second curvature threshold.
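A sketch of the curvature-based split into edge points (curvature at or above a first threshold) and plane points (curvature below a smaller second threshold). The LOAM-style local-smoothness value used here as the "curvature", the window size k, and the two threshold values are assumptions; the patent only fixes the two-threshold classification scheme.

```python
import numpy as np

def classify_feature_points(scan, k=5, edge_thresh=0.5, plane_thresh=0.1):
    """Split one frame of an ordered scan into edge and plane point indices.

    The curvature of point i is a local smoothness value: the norm of the
    summed offsets of its k neighbours on each side, normalized by the
    point's range.  Edge points have curvature >= edge_thresh; plane
    points have curvature < plane_thresh (edge_thresh > plane_thresh).
    """
    n = len(scan)
    edges, planes = [], []
    for i in range(k, n - k):
        # How far the local neighbourhood deviates from point i.
        diff = np.sum(scan[i - k:i + k + 1] - scan[i], axis=0)
        curvature = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
        if curvature >= edge_thresh:
            edges.append(i)
        elif curvature < plane_thresh:
            planes.append(i)
    return edges, planes
```

On a smooth wall the neighbourhood offsets cancel and the curvature is near zero; at a depth discontinuity the offsets accumulate, flagging an edge point.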
In some embodiments, the processor 70b is further configured to: acquiring pose information of a plurality of environment detectors acquired by a combined navigation module in the process of acquiring point cloud data of each frame; calculating the distance between the acquisition positions of the point cloud data corresponding to different environment detectors according to the position information in the pose information; and selecting target point cloud data of which the distance between the acquisition positions is smaller than or equal to a set first distance threshold from point cloud data corresponding to different environment detectors.
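Selecting the target point cloud data — frame pairs whose recorded acquisition positions are within the first distance threshold of each other — can be sketched as a pairwise filter. The brute-force O(n·m) loop and the threshold value are assumptions; a spatial index would replace the double loop at scale.

```python
import numpy as np

def select_target_frames(positions_a, positions_b, dist_thresh=2.0):
    """Pick frame-index pairs from two detectors whose acquisition
    positions (from the integrated navigation module) lie within
    dist_thresh of each other; only such frames can plausibly contain
    homonymous points.  The threshold value is illustrative.
    """
    pairs = []
    for i, pa in enumerate(positions_a):
        for j, pb in enumerate(positions_b):
            if np.linalg.norm(np.asarray(pa) - np.asarray(pb)) <= dist_thresh:
                pairs.append((i, j))
    return pairs
```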
Correspondingly, when the processor 70b matches the feature points corresponding to the plurality of environment detectors, it is specifically configured to: acquire target feature points belonging to target point cloud data from the feature points corresponding to the plurality of environment detectors, the target point cloud data being frames of point cloud data whose acquisition positions for the corresponding environment detectors are at a distance less than or equal to a set first distance threshold from one another; further, calculate the position information of the target feature points in a set coordinate system according to the pose information of the plurality of environment detectors in the process of acquiring the target feature points and the point cloud coordinates of the target feature points; and determine the homonymous points among the target feature points corresponding to the plurality of environment detectors according to the position information of the target feature points in the set coordinate system.
Optionally, when determining the same-name point in the target feature points corresponding to the plurality of environment detectors, the processor 70b is specifically configured to: calculating the distance between the target characteristic points corresponding to different environment detectors according to the position information of the target characteristic points in a set coordinate system; determining the target characteristic points of which the distance between the target characteristic points is smaller than or equal to a set second distance threshold value as the same-name points in the target characteristic points corresponding to different environment detectors; and/or; calculating included angles between normal vectors of target characteristic points corresponding to different environment detectors according to position information of the target characteristic points corresponding to the environment detectors under a set coordinate system; and determining the target characteristic points of which the included angles between the normal vectors are smaller than or equal to the set angle threshold as homonymous points in the target characteristic points corresponding to different environment detectors.
In other embodiments, when acquiring the pose information recorded by the plurality of environment detectors in the process of acquiring the homonymous points, the processor 70b is specifically configured to: acquire, from the pose information of the plurality of environment detectors collected by the integrated navigation module during point cloud data acquisition, the pose information recorded in the process of acquiring the homonymous points. Accordingly, when correcting the attitudes recorded by the plurality of environment detectors in the process of acquiring each frame of point cloud data, the processor 70b is specifically configured to: with the attitude correction amount used for attitude optimization of the plurality of environment detectors as the quantity to be solved, determine coordinate expressions of the homonymous points corresponding to the plurality of environment detectors in a set coordinate system according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded in the process of acquiring the homonymous points; construct, from these coordinate expressions, a mathematical model reflecting the distance between the coordinates of the homonymous points in the set coordinate system; solve the mathematical model with the goal of minimizing that distance to obtain the attitude deviation correction amount; and correct the attitudes recorded by the plurality of environment detectors in the process of acquiring the point cloud data using the solved attitude deviation correction amount, so as to obtain optimized attitude information of the plurality of environment detectors in the process of acquiring the point cloud data.
Optionally, the processor 70b is further configured to: and calculating the spatial information of each frame of point cloud data under a set coordinate system according to the optimized attitude information of the plurality of environment detectors in the process of collecting each frame of point cloud data and the point cloud coordinates corresponding to each frame of point cloud data.
Optionally, the processor 70b is further configured to: and constructing an electronic map according to the spatial information of each frame of point cloud data under a set coordinate system.
In some alternative embodiments, as shown in fig. 7, the computer device may further include: a communication component 70c, a power component 70d, and the like. In some embodiments, when the computer device is implemented as a terminal device such as a computer, it may further include: a display component 70e and an audio component 70f. Fig. 7 schematically shows only some of the components; this does not mean that the computer device must include all the components shown in fig. 7, nor that it can only include the components shown in fig. 7.
The computer device provided by this embodiment can exploit the characteristic that different environment detectors can scan the same object at different times: it performs feature extraction on each frame of point cloud data acquired by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; matches the feature points corresponding to the plurality of environment detectors to obtain homonymous points among them; and, with the goal of minimizing the distance between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the same coordinate system, corrects the attitude information recorded by the plurality of environment detectors during point cloud data acquisition according to the point cloud coordinates of the homonymous points and the pose information recorded in the process of acquiring them. This optimizes the attitudes of the environment detectors and improves their attitude accuracy.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Optionally, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); it may also be a programmable device such as a Field-Programmable Gate Array (FPGA), a Programmable Array Logic device (PAL), a General Array Logic device (GAL), or a Complex Programmable Logic Device (CPLD); or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In the embodiment of the present application, the display assembly may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display assembly includes a touch panel, the display assembly may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for devices with language interaction functionality, voice interaction with a user may be enabled through an audio component, and so forth.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. An attitude optimization method, comprising:
acquiring point cloud data of each frame acquired by a plurality of environment detectors;
extracting the characteristics of each frame of point cloud data to obtain characteristic points corresponding to the plurality of environment detectors from each frame of point cloud data;
matching the characteristic points corresponding to the plurality of environment detectors to determine a homonymy point in the characteristic points corresponding to the plurality of environment detectors, wherein the homonymy point is a data point corresponding to a same physical point of the real world in each frame of point cloud data acquired by the plurality of environment detectors;
acquiring pose information recorded in the process of acquiring the same-name points by the plurality of environment detectors;
and correcting the attitude information recorded in the process of acquiring the point cloud data of each frame by the plurality of environment detectors according to the point cloud coordinates of the same-name points in the point cloud data of each frame and the attitude information recorded in the process of acquiring the same-name points by the plurality of environment detectors by taking the distance minimization between the corresponding coordinates of the same-name points corresponding to the plurality of environment detectors in the same coordinate system as a target.
2. The method of claim 1, wherein the extracting features from the frames of point cloud data to obtain feature points corresponding to the plurality of environmental detectors from the frames of point cloud data comprises:
calculating the curvature of the data points in each frame of point cloud data according to the point cloud coordinates of the data points in each frame of point cloud data;
and acquiring data points with curvatures meeting set requirements from each frame of point cloud data, and taking the data points as characteristic points corresponding to an environment detector for acquiring the frame of point cloud data.
3. The method according to claim 2, wherein the acquiring, from each frame of point cloud data, data points whose curvatures meet set requirements as feature points corresponding to the environment detector that acquires the frame of point cloud data includes:
acquiring edge points with the curvature greater than or equal to a set first curvature threshold value from each frame of point cloud data, and taking the edge points as feature points corresponding to the environment detector for acquiring the frame of point cloud data;
and/or,
acquiring a plane point with a curvature smaller than a set second curvature threshold value from each frame of point cloud data, and taking the plane point as a characteristic point corresponding to the environment detector for acquiring the frame of point cloud data;
the first curvature threshold is greater than the second curvature threshold.
4. The method according to claim 1, wherein the matching the feature points corresponding to the plurality of environment detectors to determine the same-name points in the feature points corresponding to the plurality of environment detectors comprises:
acquiring target characteristic points belonging to target point cloud data from the characteristic points corresponding to the plurality of environment detectors; the target point cloud data refers to data points, wherein the distances between the collection positions of the plurality of corresponding environment detectors in each frame of point cloud data are smaller than or equal to a set first distance threshold;
calculating the position information of the target feature points under a set coordinate system according to the pose information of the plurality of environment detectors in the process of acquiring the target feature points and the point cloud coordinates of the target feature points;
and determining the homonymous points in the target characteristic points corresponding to the plurality of environment detectors according to the position information of the target characteristic points in a set coordinate system.
5. The method of claim 4, further comprising:
calculating the distance between the acquisition positions of the point cloud data corresponding to different environment detectors according to the position information in the pose information of the environment detectors acquired by the integrated navigation module in the process of acquiring the point cloud data of each frame;
and selecting target point cloud data of which the distance between the acquisition positions is smaller than or equal to a set first distance threshold from the point cloud data corresponding to the different environment detectors.
6. The method according to claim 4, wherein determining the homonymous points among the target feature points corresponding to the plurality of environment detectors according to the position information of the target feature points in the set coordinate system comprises:
calculating distances between target feature points corresponding to different environment detectors according to their position information in the set coordinate system;
and determining target feature points whose mutual distance is smaller than or equal to a set second distance threshold as homonymous points among the target feature points corresponding to the different environment detectors;
and/or,
calculating included angles between normal vectors of target feature points corresponding to different environment detectors according to their position information in the set coordinate system;
and determining target feature points whose normal vectors form an included angle smaller than or equal to a set angle threshold as homonymous points among the target feature points corresponding to the different environment detectors.
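The two criteria of claim 6 can be sketched as follows. This illustration applies both tests together, while the claim allows either alone; all names and threshold values are assumptions:

```python
import numpy as np

def match_homonymous_points(pts_a, pts_b, normals_a, normals_b,
                            dist_thresh=0.2, angle_thresh_deg=10.0):
    """Match candidate homonymous points between two detectors: a pair
    is accepted when the points are within dist_thresh of each other in
    the common coordinate system AND the angle between their normal
    vectors is within angle_thresh_deg.

    pts_a, pts_b: (N, 3) / (M, 3) feature-point coordinates in the
    set coordinate system; normals_a, normals_b: matching normals."""
    matches = []
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    for i, (p, n) in enumerate(zip(pts_a, normals_a)):
        d = np.linalg.norm(pts_b - p, axis=1)
        for j in np.where(d <= dist_thresh)[0]:
            na = n / np.linalg.norm(n)
            nb = normals_b[j] / np.linalg.norm(normals_b[j])
            if abs(np.dot(na, nb)) >= cos_thresh:  # normals nearly parallel
                matches.append((i, int(j)))
    return matches
```

A nearby point with a perpendicular normal is rejected: it lies on a different surface even though it passes the distance test.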
7. The method of claim 1, wherein acquiring the pose information recorded by the plurality of environment detectors during acquisition of the homonymous points comprises:
acquiring the pose information of the environment detectors collected by an integrated navigation module during acquisition of the point cloud data, to obtain the pose information recorded by the environment detectors during acquisition of the homonymous points;
and wherein correcting the attitude information recorded by the plurality of environment detectors during acquisition of the point cloud data according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded during acquisition of the homonymous points comprises:
taking an attitude correction amount for optimizing the attitudes of the environment detectors as the quantity to be solved, and determining coordinate expressions, in a set coordinate system, of the homonymous points corresponding to the plurality of environment detectors according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the environment detectors during acquisition of the homonymous points;
constructing, from the coordinate expressions, a mathematical model reflecting the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system;
solving the mathematical model with the objective of minimizing the distances between the coordinates of the homonymous points in the set coordinate system, to obtain the attitude correction amount;
and correcting the attitudes recorded by the plurality of environment detectors during acquisition of the point cloud data with the solved attitude correction amount, to obtain optimized attitude information of the plurality of environment detectors for each frame of point cloud data.
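In the special case of a single constant attitude offset between two detectors, the minimization in claim 7 reduces to an orthogonal Procrustes (Kabsch) problem with a closed-form SVD solution. The sketch below makes that simplifying assumption; the patent instead solves per-frame correction amounts jointly, which this does not reproduce:

```python
import numpy as np

def solve_attitude_correction(pts_ref, pts_obs):
    """Estimate the attitude correction (a rotation matrix) that, when
    applied to pts_obs about its centroid, minimises the summed squared
    distance to pts_ref (Kabsch solution).

    pts_ref, pts_obs: (N, 3) coordinates of matched homonymous points
    from two detectors, expressed in the common coordinate system."""
    ca, cb = pts_ref.mean(axis=0), pts_obs.mean(axis=0)
    H = (pts_obs - cb).T @ (pts_ref - ca)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def apply_correction(pts_obs, pts_ref, R):
    """Rotate pts_obs about its centroid and re-centre it on pts_ref."""
    cb, ca = pts_obs.mean(axis=0), pts_ref.mean(axis=0)
    return (R @ (pts_obs - cb).T).T + ca
```

Given point pairs that differ by a small rotation, the recovered correction realigns them to within numerical precision.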
8. The method of any of claims 1-7, further comprising:
calculating spatial information of each frame of point cloud data in a set coordinate system according to the optimized attitude information of the plurality of environment detectors for each frame of point cloud data and the point cloud coordinates of that frame.
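The calculation in claim 8 is a standard rigid-body change of frame: each frame's points are rotated and translated by that frame's optimized pose. A sketch with illustrative names:

```python
import numpy as np

def frame_to_world(points, R_wd, t_wd):
    """Transform one frame of points from the detector frame into the
    set (world) coordinate system using the optimised pose of that
    frame: rotation R_wd (3x3) and translation t_wd (3,)."""
    return (R_wd @ points.T).T + t_wd

def merge_frames(frames, poses):
    """Stack several frames, each with its own optimised pose, into a
    single point cloud in the set coordinate system."""
    return np.vstack([frame_to_world(p, R, t) for p, (R, t) in zip(frames, poses)])
```

The merged cloud is what a downstream mapping step (claim 9) would consume.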
9. The method of claim 8, further comprising:
constructing an electronic map according to the spatial information of each frame of point cloud data in the set coordinate system.
10. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
and the processor, coupled to the memory, is configured to execute the computer program to perform the steps of the method of any one of claims 1-9.
11. A data acquisition device, comprising: a machine body provided with a memory, a processor and a plurality of environment detectors;
wherein the plurality of environment detectors are used for acquiring point cloud data;
the memory is used for storing a computer program;
and the processor, coupled to the memory, is configured to execute the computer program to perform the steps of the method of any one of claims 1-9.
12. The device of claim 11, wherein the data acquisition device is an autonomous mobile device.
CN202111666849.9A 2021-12-31 2021-12-31 Attitude optimization method and device Active CN114353780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111666849.9A CN114353780B (en) 2021-12-31 2021-12-31 Attitude optimization method and device

Publications (2)

Publication Number Publication Date
CN114353780A true CN114353780A (en) 2022-04-15
CN114353780B CN114353780B (en) 2024-04-02

Family

ID=81104925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666849.9A Active CN114353780B (en) 2021-12-31 2021-12-31 Gesture optimization method and device

Country Status (1)

Country Link
CN (1) CN114353780B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037194A1 (en) * 2011-04-13 2014-02-06 Unisantis Electronics Singapore Pte. Ltd. Three-dimensional point cloud position data processing device, three-dimensional point cloud position data processing system, and three-dimensional point cloud position data processing method and program
CN107133325A (en) * 2017-05-05 2017-09-05 南京大学 A kind of internet photo geographical space localization method based on streetscape map
CN107657656A (en) * 2017-08-31 2018-02-02 成都通甲优博科技有限责任公司 Homotopy mapping and three-dimensional rebuilding method, system and photometric stereo camera shooting terminal
CN107767440A (en) * 2017-09-06 2018-03-06 北京建筑大学 Historical relic sequential images subtle three-dimensional method for reconstructing based on triangulation network interpolation and constraint
CN110473239A (en) * 2019-08-08 2019-11-19 刘秀萍 A kind of high-precision point cloud registration method of 3 D laser scanning
CN112241010A (en) * 2019-09-17 2021-01-19 北京新能源汽车技术创新中心有限公司 Positioning method, positioning device, computer equipment and storage medium
CN112862894A (en) * 2021-04-12 2021-05-28 中国科学技术大学 Robot three-dimensional point cloud map construction and expansion method
CN113240740A (en) * 2021-05-06 2021-08-10 四川大学 Attitude measurement method based on phase-guided binocular vision dense marking point matching
WO2021189468A1 (en) * 2020-03-27 2021-09-30 深圳市速腾聚创科技有限公司 Attitude correction method, apparatus and system for laser radar
CN113608170A (en) * 2021-07-07 2021-11-05 云鲸智能(深圳)有限公司 Radar calibration method, radar, robot, medium, and computer program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN HUI; HU BINGHUA: "Target automatic tracking and pose measurement technology based on spatial topological relations", China Measurement & Test, no. 04 *

Also Published As

Publication number Publication date
CN114353780B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN110178048B (en) Method and system for generating and updating vehicle environment map
US11002840B2 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
KR102032070B1 (en) System and Method for Depth Map Sampling
CN109709801B (en) Indoor unmanned aerial vehicle positioning system and method based on laser radar
US11157014B2 (en) Multi-channel sensor simulation for autonomous control systems
US20210263159A1 (en) Information processing method, system, device and computer storage medium
CN110119698B (en) Method, apparatus, device and storage medium for determining object state
CN110889808B (en) Positioning method, device, equipment and storage medium
CN110386142A (en) Pitch angle calibration method for automatic driving vehicle
CN111563450B (en) Data processing method, device, equipment and storage medium
EP4213128A1 (en) Obstacle detection device, obstacle detection system, and obstacle detection method
CN112630798B (en) Method and apparatus for estimating ground
CN114353780B (en) Attitude optimization method and device
CN112313535A (en) Distance detection method, distance detection device, autonomous mobile platform, and storage medium
WO2022083529A1 (en) Data processing method and apparatus
US11645762B2 (en) Obstacle detection
CN111025324A (en) Household pattern generating method based on distance measuring sensor
CN116222544B (en) Automatic navigation and positioning method and device for feeding vehicle facing to feeding farm
CN115019167B (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN116630923B (en) Marking method and device for vanishing points of roads and electronic equipment
CN117557654A (en) External parameter calibration method and device, electronic equipment and storage medium
CN115200601A (en) Navigation method, navigation device, wheeled robot and storage medium
CN112116698A (en) Method and device for point cloud fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant