CN115015955A - Method, apparatus, device, storage medium and program product for determining motion information


Publication number
CN115015955A
Authority
CN
China
Prior art keywords
motion information
depth map
determining
point cloud
point
Prior art date
Legal status
Pending
Application number
CN202210563625.3A
Other languages
Chinese (zh)
Inventor
王珂 (Wang Ke)
Current Assignee
Tianjin Caldog Technology Co., Ltd.
Original Assignee
Tianjin Caldog Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tianjin Caldog Technology Co., Ltd.
Priority to CN202210563625.3A
Publication of CN115015955A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/50: Systems of measurement based on relative movement of target
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

The disclosed embodiments relate to a method, an apparatus, a device, a storage medium, and a program product for determining motion information. The method includes: controlling a laser radar to perform target detection to obtain at least two frames of point cloud data; determining, according to the two frames of point cloud data, first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar; and determining target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information. With this method, the motion information of the target object can be measured using the laser radar, which broadens the range of scenarios to which the laser radar is applicable.

Description

Method, apparatus, device, storage medium and program product for determining motion information
Technical Field
The embodiments of the present disclosure relate to the field of object detection technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for determining motion information.
Background
With the development of automotive technology, automatic driving technology has emerged. The laser radar is a relatively common sensor in automatic driving technology and mainly adopts the time-of-flight principle: the distance from the laser radar to the surface of an object is calculated by measuring the time difference between the emission and the return of a laser beam.
Based on the above principle, most laser radars can only measure the geometric information and surface reflectivity information of an object; it is difficult for them to measure the object's motion information.
Disclosure of Invention
The embodiments of the present disclosure provide a method, an apparatus, a device, a storage medium, and a program product for determining motion information, which can measure the motion information of a target object by using a laser radar, thereby expanding the range of scenarios in which the laser radar can be applied.
In a first aspect, an embodiment of the present disclosure provides a method for determining motion information, where the method includes:
controlling a laser radar to perform target detection to obtain at least two frames of point cloud data;
according to the two frames of point cloud data, determining first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar;
and determining target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
In a second aspect, an embodiment of the present disclosure provides an apparatus for determining motion information, where the apparatus includes:
the data acquisition module is used for controlling the laser radar to carry out target detection to obtain at least two frames of point cloud data;
the first information determining module is used for determining first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar according to the two frames of point cloud data;
and the second information determining module is used for determining the target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method of the first aspect is implemented.
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of the first aspect described above.
In a fifth aspect, the present disclosure provides a computer program product comprising a computer program, which when executed by a processor implements the method of the first aspect.
In the method, apparatus, device, storage medium, and program product for determining motion information, a laser radar is controlled to perform target detection to obtain at least two frames of point cloud data; first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar are determined according to the two frames of point cloud data; and target motion information of the target object in the reference coordinate system is determined according to the first motion information and the second motion information. In the embodiments of the present disclosure, the motion information of the target object can be determined from the motion information of the laser radar and the motion information of the target object relative to the laser radar, which extends the functions of the laser radar and makes it applicable to more scenarios.
Drawings
FIG. 1 is a diagram of an application environment of a method for determining motion information in one embodiment;
FIG. 2 is a flow diagram illustrating a method for determining motion information according to one embodiment;
FIG. 3 is a second flowchart illustrating a method for determining motion information according to an embodiment;
FIG. 4 is a flowchart illustrating the steps of determining the first motion information and the second motion information in one embodiment;
FIG. 5 is a flowchart illustrating the step of converting a depth map in one embodiment;
FIG. 6 is a flowchart illustrating the step of determining first motion information in one embodiment;
FIG. 7 is a flowchart illustrating the step of determining second motion information in one embodiment;
FIG. 8 is a flowchart illustrating a method for determining motion information according to another embodiment;
FIG. 9 is a block diagram showing the structure of a motion information determining apparatus according to an embodiment;
FIG. 10 is a second block diagram illustrating an exemplary apparatus for determining motion information;
FIG. 11 is a diagram illustrating the internal architecture of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clearly understood, the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings and the embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the embodiments of the disclosure and that no limitation to the embodiments of the disclosure is intended.
First, before specifically describing the technical solution of the embodiment of the present disclosure, a technical background or a technical evolution context on which the embodiment of the present disclosure is based is described. The laser radar is a relatively common sensor in the automatic driving technology, and mainly adopts the Time of flight principle, namely, the distance from the laser radar to the surface of an object is calculated by measuring the Time difference between the emission and the return of a laser beam. Based on the above principle, most of the laser radars can only measure geometric information and surface reflectance information of an object, and it is difficult to measure motion information of the object.
In the scheme for determining motion information provided by the embodiments of the present disclosure, a laser radar is controlled to perform target detection to obtain at least two frames of point cloud data; first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar are determined according to the two frames of point cloud data; and target motion information of the target object in the reference coordinate system is determined according to the first motion information and the second motion information. In the embodiments of the present disclosure, the motion information of the target object can be determined from the motion information of the laser radar and the motion information of the target object relative to the laser radar, which extends the functions of the laser radar and makes it applicable to more scenarios. It should be noted that the applicant has devoted considerable creative effort both to identifying the difficulty that laser radars have in measuring the motion information of objects and to developing the technical solutions described in the following embodiments.
The following describes technical solutions related to the embodiments of the present disclosure in combination with a scenario in which the embodiments of the present disclosure are applied.
The method for determining motion information provided by the embodiments of the present disclosure can be applied to the application environment shown in fig. 1. The application environment includes a vehicle 102 provided with an electronic device and a laser radar. The laser radar senses the environment around the vehicle; the electronic device performs target detection according to the point cloud data collected by the laser radar and controls the vehicle according to the detection result, thereby realizing automatic driving of the vehicle.
In one embodiment, as shown in fig. 2, a method for determining motion information is provided, which is described by taking the method as an example applied to the electronic device in fig. 1, and includes the following steps:
step 201, controlling a laser radar to perform target detection to obtain at least two frames of point cloud data.
The electronic device communicates with the laser radar, controls it to emit laser pulses into the surrounding environment, and detects the returned laser pulses to obtain point cloud data. When the laser radar performs continuous detection, at least two frames of point cloud data can be obtained.
Step 202, according to the two frames of point cloud data, first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar are determined.
Wherein the motion information includes at least one of velocity information, acceleration information, and direction information.
After the point cloud data are obtained, according to the difference of the two frames of point cloud data, first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected moving relative to the laser radar can be respectively determined.
For example, the reference coordinate system is a world coordinate system, and the target object to be detected is a pedestrian. According to the difference of the two frames of point cloud data, the position change of the laser radar in a world coordinate system can be determined, so that the speed and the direction of the laser radar are determined, and the first motion information of the laser radar is obtained. Meanwhile, the position change of the pedestrian relative to the laser radar can be determined according to the two frames of point cloud data, so that the speed and the direction of the pedestrian are determined, and second motion information of the pedestrian is obtained.
The determining mode of the first motion information and the second motion information is not limited, and the determining mode can be selected according to actual conditions.
Optionally, the two frames of point cloud data are two adjacent frames of point cloud data.
In determining the first motion information and the second motion information, point cloud data of two adjacent frames may be utilized. For example, point cloud data at time T and time T-1 are utilized. It can be understood that, because the time interval between two frames of point cloud data is short, the first motion information and the second motion information can be determined more accurately.
Optionally, the two frames of point cloud data are non-adjacent two frames of point cloud data. In this case, the velocity information may be determined according to the position change and the time interval between two frames of point cloud data. The time interval between two frames of point cloud data is not limited in the embodiment of the disclosure.
And step 203, determining target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
After the first motion information and the second motion information are determined, the motion of the target object in the reference coordinate system can be determined from the motion of the laser radar in the reference coordinate system indicated by the first motion information and the motion of the target object relative to the laser radar indicated by the second motion information, thereby obtaining the target motion information of the target object.
For example, the movement of the lidar in the world coordinate system and the movement of the pedestrian relative to the lidar are determined, and the movement of the pedestrian in the world coordinate system can be determined.
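By way of illustration only, the composition of the two pieces of motion information can be sketched in Python as follows. This is a minimal sketch assuming the motion information is represented as velocity vectors and that a known rotation matrix aligns the laser radar frame with the reference frame; the function name and the example values are assumptions made for the illustration, not part of the disclosed method.

```python
import numpy as np

def target_motion_in_reference_frame(v_lidar_ref, v_target_rel, R_lidar_to_ref):
    """Compose target motion in the reference frame from the two measurements.

    v_lidar_ref:    (3,) first motion information: lidar velocity in the reference frame
    v_target_rel:   (3,) second motion information: target velocity relative to the
                    lidar, expressed in the lidar frame
    R_lidar_to_ref: (3, 3) rotation taking lidar-frame vectors into the reference frame
    """
    # v_target_ref = v_lidar_ref + R * v_target_rel
    return v_lidar_ref + R_lidar_to_ref @ v_target_rel

# Example: lidar driving at 10 m/s along x, pedestrian moving at ~1.1 m/s
# relative to the lidar; the two frames are assumed already aligned (R = identity).
v_pedestrian_world = target_motion_in_reference_frame(
    np.array([10.0, 0.0, 0.0]),
    np.array([1.0, 0.5, 0.0]),
    np.eye(3),
)
```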
In the above method for determining motion information, a laser radar is controlled to perform target detection to obtain at least two frames of point cloud data; first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar are determined according to the two frames of point cloud data; and target motion information of the target object in the reference coordinate system is determined according to the first motion information and the second motion information. In the embodiments of the present disclosure, the motion information of the target object can be determined from the motion information of the laser radar and the motion information of the target object relative to the laser radar, which extends the functions of the laser radar and makes it applicable to more scenarios.
In an embodiment, the determining the target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information may include: and determining the target motion information of each cloud point on the target object in the reference coordinate system according to the first motion information of the laser radar in the reference coordinate system and the second motion information of each cloud point on the target object relative to the laser radar.
In practical applications, the point cloud data for a target object is obtained by irradiating the target object with laser pulses that are reflected back by the target object, and each cloud point in the point cloud data corresponds to an irradiation point on the target object; therefore, the motion information of the target object relative to the laser radar comprises second motion information of each cloud point on the target object relative to the laser radar.
When determining the target motion information of the target object, the motion condition of each cloud point on the target object in the reference coordinate system can be determined according to the motion condition of the laser radar in the reference coordinate system and the motion condition of each cloud point on the target object relative to the laser radar, so as to obtain the target motion information of each cloud point on the target object in the reference coordinate system.
In the above embodiment, the target motion information of each cloud point on the target object in the reference coordinate system is determined according to the first motion information of the laser radar in the reference coordinate system and the second motion information of each cloud point on the target object relative to the laser radar. The embodiment of the disclosure determines the target motion information of cloud points of each point on the target object, and can more finely represent the motion condition of the target object.
In one embodiment, as shown in fig. 3, before determining the target motion information of each cloud point on the target object in the reference coordinate system, the embodiment of the present disclosure may further include:
and 204, performing interpolation processing on the first motion information to obtain a plurality of interpolated third motion information.
And the third motion information corresponds to the point cloud points on the target object one by one.
When a mechanical laser radar emits laser pulses outward, it rotates mechanically, so the acquisition times of the individual point cloud points differ. To handle this, interpolation processing may be performed on the first motion information to obtain a plurality of third motion information, so that each cloud point on the target object has one corresponding third motion information.
The interpolation processing may include calculating the average value of adjacent data and taking that average as the interpolated value. The interpolation may also be performed in other manners, which the embodiments of the present disclosure do not limit.
Correspondingly, step 203 comprises: and determining the target motion information of each cloud point on the target object in the reference coordinate system according to the second motion information of each cloud point on the target object and the third motion information corresponding to each cloud point.
After the interpolation processing, each cloud point on the target object has corresponding third motion information, so that the target object can obtain target motion information point by point according to the third motion information and the second motion information.
It should be understood that, when determining the target motion information point by point, it is preferable for each cloud point to have its own corresponding third motion information after the interpolation processing; however, to reduce the computational cost of the interpolation, multiple cloud points may instead share the same third motion information. The embodiments of the present disclosure do not limit this, and it may be set according to the actual situation.
Since the target motion information of the target object includes the target motion information of a plurality of point cloud points, the target motion information may also be referred to as a scene flow.
In the above embodiment, the first motion information is subjected to interpolation processing to obtain a plurality of interpolated third motion information; and determining the target motion information of the cloud points of each point in the reference coordinate system on the target object according to the second motion information of the cloud points of each point on the target object and the third motion information corresponding to the cloud points of each point. The embodiment of the disclosure performs interpolation processing on the first motion information, and then determines the target motion information of the target object point by point, so that the target motion information is more accurate.
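A minimal sketch of this per-point procedure, assuming the motion information is represented as velocity vectors with per-point timestamps, and using linear interpolation (of which averaging two adjacent values, as described above, is the midpoint case); all names are illustrative:

```python
import numpy as np

def per_point_target_motion(frame_times, v_lidar_frames, point_times, v_points_rel):
    """Interpolate the first motion information to each point's acquisition time
    (third motion information), then combine it with the per-point second motion
    information to obtain per-point target motion information (the scene flow).

    frame_times:    (F,) timestamps at which the lidar's motion was estimated
    v_lidar_frames: (F, 3) lidar velocity at those timestamps
    point_times:    (N,) acquisition time of each point cloud point
    v_points_rel:   (N, 3) velocity of each point relative to the lidar
    """
    # Linear interpolation per axis; each point gets its own lidar velocity.
    v_lidar_per_point = np.stack(
        [np.interp(point_times, frame_times, v_lidar_frames[:, k]) for k in range(3)],
        axis=1,
    )
    return v_lidar_per_point + v_points_rel
```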
In an embodiment, as shown in fig. 4, the determining, according to the two frames of point cloud data, first motion information of the laser radar in the reference coordinate system and second motion information of the target object to be detected relative to the laser radar may include:
step 301, coordinate conversion processing is respectively performed on the first point cloud data and the second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data.
The coordinate conversion process is a process of converting three-dimensional coordinates into two-dimensional coordinates.
For two frames of point cloud data, converting cloud points of each point in the first point cloud data from a three-dimensional coordinate system to a two-dimensional coordinate system to obtain a first depth map; and converting each cloud point in the second point cloud data from the three-dimensional coordinate system to the two-dimensional coordinate system to obtain a second depth map.
It can be understood that the depth map not only can retain more three-dimensional spatial information, but also is more suitable for the neural network model to learn, so that a larger receptive field and more global information can be obtained.
Step 302, determining first motion information according to the mark points corresponding to the first depth map and the second depth map.
And determining the positions of the same mark point in the first depth map and the second depth map, and determining the motion condition of the laser radar according to the position change of the mark point to obtain the first motion information of the laser radar.
For example, if the marker point is the tip of a tree, the position change of the laser radar can be inferred from the change in the tree tip's position between the first depth map and the second depth map; then, speed information, direction information, and so on of the laser radar are determined according to this position change and the time interval between the first point cloud data and the second point cloud data, yielding the first motion information of the laser radar.
It can be understood that the effect of the laser odometer can be achieved by determining the first motion information according to the mark points corresponding to the first depth map and the second depth map.
And step 303, performing optical flow calculation according to the first depth map and the second depth map to determine second motion information.
And determining the positions of the target object in the first depth map and the second depth map, and performing optical flow calculation according to the position of the target object in the first depth map and the position of the target object in the second depth map to determine second motion information of the target object.
In the above embodiment, for the first point cloud data and the second point cloud data, coordinate conversion processing is respectively performed to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data; determining first motion information according to the mark points corresponding to the first depth map and the second depth map; and determining second motion information by performing optical flow calculation according to the first depth map and the second depth map. In the process of determining the first motion information and the second motion information, the point cloud data is converted into the depth map, and the depth map can not only retain more three-dimensional space information, but also is more suitable for the neural network model to learn, so that a larger receptive field and more global information can be obtained, and the accuracy of the motion information is further improved.
In an embodiment, as shown in fig. 5, the above process of performing coordinate transformation processing on the first point cloud data and the second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data may include the following steps:
step 3011, for the first point cloud data and the second point cloud data, converting the three-dimensional coordinates of the cloud points of each point in the reference coordinate system into a spherical coordinate system, and obtaining a first included angle and a second included angle of the cloud points of each point in the spherical coordinate system.
The first included angle is an included angle between the point cloud point and the x axis of the spherical coordinate system; the second included angle is the included angle between the point cloud point and the z axis of the spherical coordinate system.
The reference coordinate system may be the world coordinate system. For point cloud point 1 in the first point cloud data, whose position in the world coordinate system is (x1, y1, z1), point cloud point 1 can be converted into the spherical coordinate system according to the coordinate conversion relation between the world coordinate system and the spherical coordinate system, giving its position in the spherical coordinate system as (r1, θ1, φ1), where r1 is the distance between the point's position in the spherical coordinate system and the origin of the spherical coordinate system, θ1 is the first included angle, and φ1 is the second included angle. By analogy, coordinate conversion is performed on the other point cloud points in the first point cloud data and on each point cloud point in the second point cloud data to obtain the first included angle and the second included angle of each point cloud point in the spherical coordinate system.
And 3012, taking the first included angle as an abscissa, the second included angle as an ordinate, and the surface reflectivity information and/or the depth information of the point cloud point as a pixel value to obtain a first depth map and a second depth map.
For each cloud point in the first point cloud data, the first included angle θ1 is taken as the abscissa, the second included angle φ1 as the ordinate, and the surface reflectivity information of the point cloud point as the pixel value, so as to obtain the first depth map corresponding to the first point cloud data. Alternatively, for each cloud point in the first point cloud data, the first included angle θ1 is taken as the abscissa and the second included angle φ1 as the ordinate, and then the surface reflectivity information of the point cloud point is used as the pixel value of one channel and the depth information of the point cloud point as the pixel value of another channel, so that the first depth map corresponding to the first point cloud data can be obtained. In the same manner, the second depth map corresponding to the second point cloud data can be obtained.
The pixel values may also incorporate prior information such as projection errors and classification results; the embodiments of the present disclosure do not limit the pixel values, which may be set according to the actual situation.
Understandably, in the first depth map and the second depth map, each row of pixels corresponds to one laser scanning line (beam), and each pixel within a row corresponds to one scanning point of the laser radar.
In the above embodiment, for the first point cloud data and the second point cloud data, the three-dimensional coordinates of each point cloud point in the reference coordinate system are converted into the spherical coordinate system to obtain the first included angle and the second included angle of each point cloud point in the spherical coordinate system; the first included angle is taken as the abscissa, the second included angle as the ordinate, and the surface reflectivity information and/or depth information of the point cloud point as the pixel value, to obtain the first depth map and the second depth map. Converting the point cloud data into depth maps not only accomplishes the coordinate conversion but also allows the pixel values to carry surface reflectivity information, depth information, and the like, so that the depth maps retain more three-dimensional spatial information while providing a larger receptive field and more global information.
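As a rough illustration of steps 3011 and 3012, the following sketch projects a point cloud into a two-channel depth map. The image resolution, the vertical field of view, and the handling of pixel collisions (last write wins) are assumptions made for the example, not values from the disclosure:

```python
import numpy as np

def point_cloud_to_depth_map(points, reflectivity, h=64, w=1024,
                             fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) point cloud into an h x w, two-channel depth map.

    points:       (N, 3) xyz coordinates of the point cloud points
    reflectivity: (N,) surface reflectivity of each point
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                      # range to the origin
    theta = np.arctan2(y, x)                                # first angle, w.r.t. the x axis
    phi = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))  # second angle, w.r.t. the z axis

    # First included angle -> abscissa (column), second included angle -> ordinate (row).
    u = (((theta + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = ((phi - (np.pi / 2.0 - fov_up)) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)

    # Channel 0: surface reflectivity; channel 1: depth. Pixel collisions are
    # resolved by the last write here; a real pipeline would handle them explicitly.
    depth_map = np.zeros((h, w, 2), dtype=np.float32)
    depth_map[v, u, 0] = reflectivity
    depth_map[v, u, 1] = r
    return depth_map
```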
In an embodiment, as shown in fig. 6, the process of determining the first motion information according to the mark points corresponding to the first depth map and the second depth map may include the following steps:
step 3021, determining the mark points corresponding to the first depth map and the second depth map.
Feature extraction can be performed on the first depth map and the second depth map to obtain a first feature map and a second feature map. And then, classifying and identifying the first feature map and the second feature map respectively to obtain the category of each target object in the first feature map and the category of each target object in the second feature map. Then, according to the category of the target object, the same target object is selected, and a certain point on the target object is used as a mark point.
For example, after feature extraction and classification recognition are performed on the first depth map and the second depth map, the target objects are determined to include vehicles, pedestrians, and trees. Since the vehicles and pedestrians are in motion, the static trees are selected, and the tip of a tree is determined as the marker point.
In practical application, the corresponding mark points in the first depth map and the second depth map may also be determined in other manners, which is not limited in the embodiment of the present disclosure.
Step 3022, a first position of the marker point in the first depth map and a second position of the marker point in the second depth map are determined.
After the corresponding marker points in the first depth map and the second depth map are determined, a first position of the marker point in the first depth map and a second position of the marker point in the second depth map may be determined.
For example, a first position of the tree tip in the first depth map and a second position of the tree tip in the second depth map are determined.
Step 3023, determining first motion information according to the first position and the second position.
According to the intrinsic and extrinsic parameters of the laser radar and the first position of the marker point in the first depth map, the position of the laser radar at the earlier moment can be determined; according to the intrinsic and extrinsic parameters of the laser radar and the second position of the marker point in the second depth map, the position of the laser radar at the later moment can be determined. The position change of the laser radar is then determined from its positions at the earlier and later moments, and first motion information such as speed information and direction information of the laser radar is determined according to this position change and the time interval between the first point cloud data and the second point cloud data.
In the above embodiment, the mark points corresponding to the first depth map and the second depth map are determined; determining a first position of the marker point in the first depth map and a second position of the marker point in the second depth map; first motion information is determined based on the first location and the second location. The embodiment of the disclosure determines the first motion information according to the corresponding mark point, and realizes the effect of the laser odometer.
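To make the geometry concrete, here is a deliberately simplified, translation-only sketch of recovering the laser radar's motion from a single static marker point. A real laser odometer would also estimate rotation, typically from several markers and the radar's intrinsic and extrinsic parameters; everything below is an assumed illustration:

```python
import numpy as np

def lidar_motion_from_marker(p_marker_t0, p_marker_t1, dt):
    """Estimate the laser radar's first motion information from one static marker.

    p_marker_t0, p_marker_t1: (3,) positions of the marker in the lidar frame at
                              the earlier and later moments, recovered from its
                              depth-map coordinates and range
    dt:                       time interval between the two point cloud frames
    """
    # A static marker appears to move by -d when the lidar translates by d,
    # so the lidar displacement is the negative of the apparent displacement.
    displacement = -(np.asarray(p_marker_t1) - np.asarray(p_marker_t0))
    dist = np.linalg.norm(displacement)
    speed = dist / dt                                  # speed information
    direction = displacement / max(dist, 1e-9)         # direction information
    return speed, direction
```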
In one embodiment, the above process of determining the second motion information by performing optical flow calculation according to the first depth map and the second depth map may include: and performing optical flow calculation on the first depth map and the second depth map by using a block matching algorithm to obtain second motion information.
The block matching algorithm is an algorithm in motion estimation, and optical flow calculation can be performed quickly and accurately, so that second motion information of the target object is obtained.
As shown in FIG. 7, the process of optical flow calculation using the block matching algorithm may include the following steps:
step 3031, performing random initialization on each first pixel in the first depth map to obtain initial optical flow information of each first pixel.
Wherein the initial optical flow information includes initial velocity information and direction information.
For each first pixel in the first depth map, random initialization is performed first, so that each first pixel has corresponding initial optical flow information.
Step 3032, determining a second pixel corresponding to each first pixel in the second depth map according to the initial optical flow information of each first pixel.
According to the initial optical flow information, the position of each first pixel after it moves by that optical flow can be determined; mapping this position into the second depth map identifies the second pixel in the second depth map that corresponds to the first pixel.
For example, if the first pixel (1, 1) in the first depth map moves to position (1, 2) according to its initial optical flow information, then the second pixel at position (1, 2) in the second depth map is determined to correspond to the first pixel (1, 1).
Step 3033, if the first pixel is matched with the second pixel corresponding to the first pixel, determining the initial optical flow information of the first pixel as a feasible solution of the first pixel.
The pixel value similarity between the first pixel and its corresponding second pixel may be calculated; if the similarity is greater than a preset threshold, the first pixel is determined to match the second pixel, and the initial optical flow information of the first pixel is determined to be a feasible solution for the first pixel. If the first pixel does not match the second pixel, the first pixel may adopt a feasible solution propagated from a neighboring pixel, re-determine the corresponding second pixel, and check again whether they match.
In practical applications, it may also be determined whether the first pixel and the second pixel are matched in other manners. The embodiments of the present disclosure do not limit this.
And 3034, performing spatial propagation on the feasible solutions of the first pixels to obtain second motion information.
After the feasible solutions are determined for the first pixels, the feasible solutions for each first pixel are spatially propagated. For example, the first pixel (1, 1) propagates the feasible solution to the first pixel (1, 2) and the first pixel (2, 1), and so on. The first pixel (1, 2) may also propagate the feasible solution to each first pixel adjacent to itself. And in the process of carrying out space propagation on the feasible solutions, if the feasible solutions reach the convergence condition, summarizing the feasible solutions under the convergence condition to obtain second motion information.
The convergence condition may include that the propagation number reaches a preset number, and the like, which is not limited in the embodiment of the present disclosure.
Alternatively, a feasible solution may be propagated across boundaries. For example, the first pixel (1, 1) is the left boundary of the first row of pixels of the first depth map, and (1, n) is the right boundary of that row. The first pixel (1, 1) may propagate its feasible solution across the boundary to (1, n), and the first pixel (1, n) may likewise propagate its feasible solution across the boundary to (1, 1).
In the above embodiment, each first pixel in the first depth map is initialized randomly to obtain initial optical flow information of each first pixel; determining a second pixel corresponding to each first pixel in the second depth map according to the initial optical flow information of each first pixel; if the first pixel is matched with a second pixel corresponding to the first pixel, determining that the initial optical flow information of the first pixel is a feasible solution of the first pixel; and carrying out space propagation on the feasible solution of each first pixel to obtain second motion information. The embodiment of the disclosure can quickly and accurately determine the second motion information by using a block matching algorithm, thereby improving the detection speed and the detection efficiency of target detection.
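The procedure of steps 3031 to 3034 can be sketched as a PatchMatch-style loop as follows. This is a simplified illustration: the preset matching threshold is replaced by keeping the lower-cost candidate, and the patch size, iteration count, and search range are assumed values. Incidentally, Python's negative indexing gives the cross-boundary propagation described above for free:

```python
import numpy as np

def patchmatch_flow(d0, d1, patch=3, iters=4, max_disp=8):
    """PatchMatch-style block matching between two depth maps d0 and d1 (H x W).

    Returns an (H, W, 2) integer flow field mapping pixels of d0 into d1."""
    h, w = d0.shape
    rng = np.random.default_rng(0)
    # Step 3031: random initialization of each first pixel's optical flow.
    flow = rng.integers(-max_disp, max_disp + 1, size=(h, w, 2))

    def cost(y, x, f):
        # Step 3032: locate the corresponding second pixel in d1.
        y2, x2 = y + int(f[0]), x + int(f[1])
        if not (0 <= y2 < h and 0 <= x2 < w):
            return np.inf
        a = d0[max(y - patch, 0):y + patch + 1, max(x - patch, 0):x + patch + 1]
        b = d1[max(y2 - patch, 0):y2 + patch + 1, max(x2 - patch, 0):x2 + patch + 1]
        n = min(a.size, b.size)
        # Pixel-value dissimilarity of the two blocks (lower means a better match).
        return float(np.abs(a.flat[:n] - b.flat[:n]).mean())

    # Steps 3033-3034: keep feasible solutions and propagate them spatially until
    # the convergence condition (here, a fixed number of sweeps) is reached.
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best = flow[y, x].copy()
                # Candidates propagated from the left and upper neighbours;
                # index -1 wraps around, i.e. propagation across the boundary.
                for cand in (flow[y, x - 1], flow[y - 1, x]):
                    if cost(y, x, cand) < cost(y, x, best):
                        best = cand.copy()
                flow[y, x] = best
    return flow
```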
In one embodiment, as shown in fig. 8, a method for determining motion information is provided, which is described by taking the method as an example applied to the electronic device in fig. 1, and includes the following steps:
step 401, controlling a laser radar to perform target detection to obtain at least two frames of point cloud data.
Step 402, coordinate conversion processing is respectively carried out on the first point cloud data and the second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data.
In one embodiment, for the first point cloud data and the second point cloud data, three-dimensional coordinates of each point cloud point in a reference coordinate system are converted into a spherical coordinate system, and a first included angle and a second included angle of each point cloud point in the spherical coordinate system are obtained. The first included angle is an included angle between the point cloud point and the x axis of the spherical coordinate system; the second included angle is the included angle between the point cloud point and the z axis of the spherical coordinate system. And taking the first included angle as an abscissa, the second included angle as an ordinate, and the surface reflectivity information and/or the depth information of the point cloud point as pixel values to obtain a first depth map and a second depth map.
Step 403, determining first motion information according to the mark points corresponding to the first depth map and the second depth map.
In one embodiment, the mark points corresponding to the first depth map and the second depth map are determined; determining a first position of the marker point in the first depth map and a second position of the marker point in the second depth map; first motion information is determined based on the first location and the second location.
And step 404, performing optical flow calculation on the first depth map and the second depth map by using a block matching algorithm to obtain second motion information.
In one embodiment, each first pixel in the first depth map is initialized randomly, and initial optical flow information of each first pixel is obtained; determining a second pixel corresponding to each first pixel in the second depth map according to the initial optical flow information of each first pixel; if the first pixel is matched with a second pixel corresponding to the first pixel, determining that the initial optical flow information of the first pixel is a feasible solution of the first pixel; and carrying out space propagation on the feasible solution of each first pixel to obtain second motion information.
Step 405, performing interpolation processing on the first motion information to obtain a plurality of interpolated third motion information.
And the third motion information corresponds to the point cloud points on the target object one by one.
And 406, determining target motion information of each cloud point on the target object in the reference coordinate system according to the second motion information of each cloud point on the target object and the third motion information corresponding to each cloud point.
In the above embodiment, the laser radar is controlled to perform target detection, so as to obtain at least two frames of point cloud data; respectively carrying out coordinate conversion on the first point cloud data and the second point cloud data to obtain a first depth map and a second depth map; determining first motion information of the laser radar and second motion information of the target object according to the first depth map and the second depth map; and performing interpolation processing on the first motion information, and then determining the target motion information of the target object point by point. The motion information of the target object can be detected by using the laser radar, so that the functions of the laser radar are expanded, and the laser radar can be suitable for more scenes.
It should be understood that, although the steps in the flowcharts of FIGS. 2 to 8 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the order in which these steps are executed, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 to 8 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a motion information determination apparatus including:
the data acquisition module 501 is configured to control a laser radar to perform target detection, so as to obtain at least two frames of point cloud data;
a first information determining module 502, configured to determine, according to the two frames of point cloud data, first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar;
and a second information determining module 503, configured to determine target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
In one embodiment, the second information determining module 503 is specifically configured to determine the target motion information of each cloud point on the target object in the reference coordinate system according to the first motion information of the laser radar in the reference coordinate system and the second motion information of each cloud point on the target object relative to the laser radar.
In one embodiment, as shown in fig. 10, the method further includes:
an interpolation module 504, configured to perform interpolation processing on the first motion information to obtain a plurality of interpolated third motion information; the third motion information corresponds to the point cloud points on the target object one by one;
correspondingly, the second information determining module 503 is specifically configured to determine the target motion information of each cloud point on the target object in the reference coordinate system according to the second motion information of each cloud point on the target object and the third motion information corresponding to each cloud point.
In one embodiment, the first information determining module 502 is specifically configured to perform coordinate transformation on the first point cloud data and the second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data; the coordinate conversion processing is processing for converting a three-dimensional coordinate into a two-dimensional coordinate; determining first motion information according to the mark points corresponding to the first depth map and the second depth map; and performing optical flow calculation according to the first depth map and the second depth map to determine second motion information.
In one embodiment, the first information determining module 502 is specifically configured to perform optical flow calculation on the first depth map and the second depth map by using a block matching algorithm to obtain the second motion information.
In one embodiment, the first information determining module 502 is specifically configured to perform random initialization on each first pixel in the first depth map to obtain initial optical flow information of each first pixel; determining a second pixel corresponding to each first pixel in the second depth map according to the initial optical flow information of each first pixel; if the first pixel is matched with a second pixel corresponding to the first pixel, determining that the initial optical flow information of the first pixel is a feasible solution of the first pixel; and carrying out space propagation on the feasible solution of each first pixel to obtain second motion information.
In one embodiment, the first information determining module 502 is specifically configured to determine a landmark point corresponding to the first depth map and the second depth map; determining a first position of the marker point in the first depth map and a second position of the marker point in the second depth map; first motion information is determined based on the first location and the second location.
In one embodiment, the first information determining module 502 is specifically configured to, for the first point cloud data and the second point cloud data, convert a three-dimensional coordinate of each cloud point in a reference coordinate system into a spherical coordinate system, so as to obtain a first included angle and a second included angle of each cloud point in the spherical coordinate system; the first included angle is an included angle between the point cloud point and the x axis of the spherical coordinate system; the second included angle is an included angle between the point cloud point and the z axis of the spherical coordinate system; and taking the first included angle as an abscissa, the second included angle as an ordinate, and the surface reflectivity information and/or the depth information of the point cloud point as pixel values to obtain a first depth map and a second depth map.
In one embodiment, the two frames of point cloud data are two adjacent frames of point cloud data.
For specific limitations of the apparatus for determining motion information, reference may be made to the above limitations of the method for determining motion information, which are not repeated here. Each module in the apparatus for determining motion information may be implemented in whole or in part by software, by hardware, or by a combination thereof. Each module may be embedded in, or independent of, a processor of the electronic device in hardware form, or stored in a memory of the electronic device in software form, so that the processor can invoke and execute the operations corresponding to the module.
Fig. 11 is a block diagram illustrating an electronic device 1300 in accordance with an example embodiment. For example, the electronic device 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 11, electronic device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316. Wherein the memory has stored thereon a computer program or instructions for execution on the processor.
The processing component 1302 generally controls overall operation of the electronic device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the method described above. Further, processing component 1302 can include one or more modules that facilitate interaction between processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation at the electronic device 1300. Examples of such data include instructions for any application or method operating on the electronic device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1306 provides power to the various components of the electronic device 1300. Power components 1306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 1300.
The multimedia component 1308 includes a touch-sensitive display screen that provides an output interface between the electronic device 1300 and a user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the electronic device 1300 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1314 includes one or more sensors for providing various aspects of state assessment for the electronic device 1300. For example, the sensor assembly 1314 may detect the open/closed state of the electronic device 1300 and the relative positioning of components, such as the display and keypad of the electronic device 1300; it may also detect a change in the position of the electronic device 1300 or one of its components, the presence or absence of user contact with the electronic device 1300, the orientation or acceleration/deceleration of the electronic device 1300, and a change in the temperature of the electronic device 1300. The sensor assembly 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate communications between the electronic device 1300 and other devices in a wired or wireless manner. The electronic device 1300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1316 also includes a Near Field Communications (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described determination method of motion information.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1304 comprising instructions, executable by the processor 1320 of the electronic device 1300 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided. The computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the procedures or functions of the embodiments of the present disclosure are implemented in whole or in part, thereby carrying out the above-described method.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only a few implementations of the embodiments of the present disclosure, and although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make variations and improvements without departing from the concept of the embodiments of the present disclosure, and these all fall within the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A method for determining motion information, the method comprising:
controlling a laser radar to perform target detection to obtain at least two frames of point cloud data;
according to the two frames of point cloud data, determining first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar;
and determining target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
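For illustration only (this sketch is not part of the claims), the composition of the two pieces of motion information in claim 1 can be expressed as follows, under the simplifying assumption that each piece of motion information is a 3-D velocity vector; the numpy representation and the function name compose_motion are assumptions of this sketch, not taken from the disclosure:

import numpy as np

def compose_motion(first_motion, second_motion):
    # first_motion: lidar ego-motion in the reference coordinate system
    # second_motion: target motion relative to the lidar
    # Modelled here as velocity vectors, so composition is a sum; a full
    # implementation would also apply the lidar's rotation to second_motion.
    return np.asarray(first_motion) + np.asarray(second_motion)

# Lidar moving at 10 m/s along x; target receding from it at 2 m/s along x.
print(compose_motion([10.0, 0.0, 0.0], [2.0, 0.0, 0.0]))  # -> [12. 0. 0.]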
2. The method of claim 1, wherein determining the target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information comprises:
and determining the target motion information of each point cloud point on the target object in the reference coordinate system according to the first motion information of the laser radar in the reference coordinate system and the second motion information of each point cloud point on the target object relative to the laser radar.
3. The method of claim 2, wherein, prior to the determining of the target motion information of each point cloud point on the target object in the reference coordinate system, the method further comprises:
performing interpolation processing on the first motion information to obtain a plurality of pieces of interpolated third motion information, the pieces of third motion information being in one-to-one correspondence with the point cloud points on the target object;
correspondingly, the determining of the target motion information of each point cloud point on the target object in the reference coordinate system according to the first motion information of the laser radar in the reference coordinate system and the second motion information of each point cloud point on the target object relative to the laser radar comprises:
and determining the target motion information of each point cloud point on the target object in the reference coordinate system according to the second motion information of each point cloud point on the target object and the third motion information corresponding to each point cloud point.
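As a hedged sketch of claim 3 (the function names and the linear-interpolation choice are assumptions; the motivation is that a scanning lidar captures the points of one frame at slightly different times):

import numpy as np

def interpolate_ego_motion(frame_motions, frame_times, point_times):
    # Linearly interpolate the lidar ego-motion (the "third motion information")
    # at each point cloud point's capture timestamp.
    # frame_motions: (2, 3) ego-velocities at the two frame timestamps
    # frame_times: (2,) frame timestamps; point_times: (N,) per-point timestamps
    alpha = (point_times - frame_times[0]) / (frame_times[1] - frame_times[0])
    return (1.0 - alpha)[:, None] * frame_motions[0] + alpha[:, None] * frame_motions[1]

def per_point_target_motion(third_motion, second_motion):
    # Compose each point's relative motion (N, 3) with its interpolated
    # ego-motion (N, 3) to obtain the target motion information per point.
    return third_motion + second_motion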
4. The method according to claim 1, wherein the determining first motion information of the lidar in a reference coordinate system and second motion information of a target object to be detected relative to the lidar according to two frames of the point cloud data comprises:
respectively carrying out coordinate conversion processing on first point cloud data and second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data; wherein the coordinate conversion processing is processing of converting three-dimensional coordinates into two-dimensional coordinates;
determining the first motion information according to the mark points corresponding to the first depth map and the second depth map;
and determining the second motion information by performing optical flow calculation according to the first depth map and the second depth map.
5. The method of claim 4, wherein said determining the second motion information by performing optical flow calculations based on the first depth map and the second depth map comprises:
and performing optical flow calculation on the first depth map and the second depth map by using a block matching algorithm to obtain the second motion information.
6. The method of claim 5, wherein performing optical flow computation on the first depth map and the second depth map using a block matching algorithm to obtain the second motion information comprises:
performing random initialization on each first pixel in the first depth map to obtain initial optical flow information of each first pixel;
determining a second pixel corresponding to each first pixel in the second depth map according to the initial optical flow information of each first pixel;
if a first pixel matches the second pixel corresponding to it, determining that the initial optical flow information of the first pixel is a feasible solution for the first pixel;
and performing spatial propagation on the feasible solution of each first pixel to obtain the second motion information.
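A minimal sketch of the block-matching optical flow of claims 5 and 6 (random initialization, a match test, then spatial propagation of feasible solutions, in the spirit of PatchMatch); the patch size, search range, and sum-of-absolute-differences cost are illustrative assumptions:

import numpy as np

def block_matching_flow(depth1, depth2, patch=3, max_disp=8, iters=2):
    h, w = depth1.shape
    rng = np.random.default_rng(0)
    # Random initialization: one candidate flow vector per first pixel.
    flow = rng.integers(-max_disp, max_disp + 1, size=(h, w, 2))

    def cost(y, x, f):
        # Dissimilarity between the block around (y, x) in the first depth map
        # and the block displaced by f in the second depth map.
        y2, x2 = y + int(f[0]), x + int(f[1])
        if not (0 <= y2 <= h - patch and 0 <= x2 <= w - patch):
            return np.inf
        return np.abs(depth1[y:y + patch, x:x + patch]
                      - depth2[y2:y2 + patch, x2:x2 + patch]).sum()

    for _ in range(iters):  # spatial propagation passes
        for y in range(h - patch + 1):
            for x in range(w - patch + 1):
                cands = [flow[y, x]]
                if y > 0:
                    cands.append(flow[y - 1, x])  # neighbour above
                if x > 0:
                    cands.append(flow[y, x - 1])  # neighbour to the left
                flow[y, x] = min(cands, key=lambda f: cost(y, x, f))
    return flow  # per-pixel displacement, i.e. the second motion information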
7. The method of claim 4, wherein the determining the first motion information according to the corresponding marker points of the first depth map and the second depth map comprises:
determining a mark point corresponding to the first depth map and the second depth map;
determining a first position of the marker point in the first depth map and a second position of the marker point in the second depth map;
determining the first motion information according to the first position and the second position.
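For claim 7, a deliberately simplified sketch: it assumes a single static marker point and a pure translation in the depth-map plane, whereas a real implementation would recover a full 6-DoF pose from several marker points; the function name and the frame interval dt are hypothetical:

import numpy as np

def ego_motion_from_marker(first_position, second_position, dt):
    # A static marker apparently moves by the negative of the lidar's motion,
    # so its pixel displacement between the two depth maps, sign-flipped and
    # divided by the frame interval dt, estimates the first motion information.
    displacement = np.asarray(second_position, float) - np.asarray(first_position, float)
    return -displacement / dt

print(ego_motion_from_marker((120, 40), (118, 38), dt=0.1))  # -> [20. 20.]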
8. The method of claim 4, wherein the performing coordinate transformation on the first point cloud data and the second point cloud data to obtain a first depth map corresponding to the first point cloud data and a second depth map corresponding to the second point cloud data comprises:
for the first point cloud data and the second point cloud data, converting the three-dimensional coordinates of each point cloud point in the reference coordinate system into a spherical coordinate system to obtain a first included angle and a second included angle of each point cloud point in the spherical coordinate system; wherein the first included angle is the angle between the point cloud point and the x axis of the spherical coordinate system, and the second included angle is the angle between the point cloud point and the z axis of the spherical coordinate system;
and taking the first included angle as an abscissa, the second included angle as an ordinate, and the surface reflectivity information and/or the depth information of the point cloud points as pixel values, to obtain the first depth map and the second depth map.
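A numpy sketch of the projection in claim 8; the angular resolutions h_res and v_res and the image extents are hypothetical parameters, not values from the disclosure:

import numpy as np

def point_cloud_to_depth_map(points, reflectivity, h_res=0.2, v_res=0.4):
    # points: (N, 3) x/y/z coordinates in the reference coordinate system;
    # reflectivity: (N,) surface reflectivity values used as pixel values.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-9)      # range (depth)
    azimuth = np.degrees(np.arctan2(y, x))                    # first included angle (to x axis)
    polar = np.degrees(np.arccos(np.clip(z / r, -1.0, 1.0)))  # second included angle (to z axis)
    u = ((azimuth + 180.0) / h_res).astype(int)               # abscissa (column)
    v = (polar / v_res).astype(int)                           # ordinate (row)
    img = np.zeros((int(180.0 / v_res) + 1, int(360.0 / h_res) + 1))
    img[v, u] = reflectivity                                  # and/or r, per the claim
    return img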
9. The method of any one of claims 1-8, wherein the two frames of point cloud data are two adjacent frames of point cloud data.
10. An apparatus for determining motion information, the apparatus comprising:
the data acquisition module is used for controlling the laser radar to carry out target detection to obtain at least two frames of point cloud data;
the first information determining module is used for determining first motion information of the laser radar in a reference coordinate system and second motion information of a target object to be detected relative to the laser radar according to the two frames of point cloud data;
and the second information determining module is used for determining the target motion information of the target object in the reference coordinate system according to the first motion information and the second motion information.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 9 are implemented by the processor when executing the computer program.
12. A storage medium having a computer program stored thereon, characterized in that the computer program, when being executed by a processor, realizes the steps of the method of any one of claims 1 to 9.
13. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1-9 when executed by a processor.
CN202210563625.3A 2022-05-23 2022-05-23 Method, apparatus, device, storage medium and program product for determining motion information Pending CN115015955A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210563625.3A CN115015955A (en) 2022-05-23 2022-05-23 Method, apparatus, device, storage medium and program product for determining motion information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210563625.3A CN115015955A (en) 2022-05-23 2022-05-23 Method, apparatus, device, storage medium and program product for determining motion information

Publications (1)

Publication Number Publication Date
CN115015955A 2022-09-06

Family

ID=83069316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210563625.3A Pending CN115015955A (en) 2022-05-23 2022-05-23 Method, apparatus, device, storage medium and program product for determining motion information

Country Status (1)

Country Link
CN (1) CN115015955A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113200A1 (en) * 2016-09-20 2018-04-26 Innoviz Technologies Ltd. Variable flux allocation within a lidar fov to improve detection in a region
CN108594245A (en) * 2018-07-04 2018-09-28 北京国泰星云科技有限公司 A kind of object movement monitoring system and method
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
US20220099838A1 (en) * 2020-09-25 2022-03-31 Hyundai Motor Company Method and Device for Tracking Object Using Lidar Sensor, Vehicle Including the Device, and Recording Medium Storing Program to Execute the Method
CN112258600A (en) * 2020-10-19 2021-01-22 浙江大学 Simultaneous positioning and map construction method based on vision and laser radar
CN113688282A (en) * 2021-07-23 2021-11-23 北京三快在线科技有限公司 Data processing method and device, electronic equipment and readable storage medium
CN114119465A (en) * 2021-10-09 2022-03-01 北京亮道智能汽车技术有限公司 Point cloud data processing method and device
CN114019473A (en) * 2021-11-09 2022-02-08 商汤国际私人有限公司 Object detection method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Junsik Kim: "LiDAR Point Cloud Compression by Vertically Placed Objects Based on Global Motion Prediction", IEEE Access, vol. 10, 31 January 2022 (2022-01-31), page 15298, XP093144917, DOI: 10.1109/ACCESS.2022.3148252 *
Wu Yong et al.: "Design of a fast tracking and positioning system for long-range moving vehicles", Bulletin of Surveying and Mapping, no. 01, 25 January 2021 (2021-01-25), pages 112-115 *
Zhang Sen: "Research on localization and mapping methods in degraded scenes based on lidar-IMU fusion", China Master's Theses Full-text Database (Information Science and Technology), no. 03, 15 March 2022 (2022-03-15), pages 136-1723 *

Similar Documents

Publication Publication Date Title
US11468581B2 (en) Distance measurement method, intelligent control method, electronic device, and storage medium
WO2020168742A1 (en) Method and device for vehicle body positioning
CN111105454B (en) Method, device and medium for obtaining positioning information
CN106778773B (en) Method and device for positioning target object in picture
CN111340766A (en) Target object detection method, device, equipment and storage medium
CN113205549B (en) Depth estimation method and device, electronic equipment and storage medium
CN109725329A (en) A kind of unmanned vehicle localization method and device
CN114267041B (en) Method and device for identifying object in scene
CN109582134B (en) Information display method and device and display equipment
CN109242782B (en) Noise processing method and device
CN110930351A (en) Light spot detection method and device and electronic equipment
CN115907566B (en) Evaluation method and device for automatic driving perception detection capability and electronic equipment
CN115407355B (en) Library position map verification method and device and terminal equipment
CN115015955A (en) Method, apparatus, device, storage medium and program product for determining motion information
CN115965935A (en) Object detection method, device, electronic apparatus, storage medium, and program product
CN115825979A (en) Environment sensing method and device, electronic equipment, storage medium and vehicle
CN113065392A (en) Robot tracking method and device
CN115469292B (en) Environment sensing method and device, electronic equipment and storage medium
CN116772894B (en) Positioning initialization method, device, electronic equipment, vehicle and storage medium
CN116934840A (en) Object detection method, device, electronic apparatus, storage medium, and program product
CN116883496B (en) Coordinate reconstruction method and device for traffic element, electronic equipment and storage medium
CN116540252B (en) Laser radar-based speed determination method, device, equipment and storage medium
CN117152693A (en) Object detection method, device, electronic apparatus, storage medium, and program product
CN112148815B (en) Positioning method and device based on shared map, electronic equipment and storage medium
CN116385528B (en) Method and device for generating annotation information, electronic equipment, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 300000 No. 326, No. 8, Third Street, international logistics District, Tianjin pilot free trade zone (Airport Economic Zone) (No. bcy702 entrusted by beichuangyi (Tianjin) business secretary Co., Ltd.)

Applicant after: Tianjin Carl Power Technology Co.,Ltd.

Address before: 300000 No. 326, No. 8, Third Street, international logistics District, Tianjin pilot free trade zone (Airport Economic Zone) (No. bcy702 entrusted by beichuangyi (Tianjin) business secretary Co., Ltd.)

Applicant before: Tianjin caldog Technology Co.,Ltd.