CN114001706B - Heading angle estimation method and device, electronic device and storage medium

Heading angle estimation method and device, electronic device and storage medium

Info

Publication number
CN114001706B
Authority
CN
China
Prior art keywords
point cloud frame
polar coordinate representations
Prior art date
Legal status
Active
Application number
CN202111626182.XA
Other languages
Chinese (zh)
Other versions
CN114001706A (en)
Inventor
丁夏清
谭梦文
邓欢军
李名杨
Current Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date
2021-12-29
Filing date
2021-12-29
Publication date
2022-04-29
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202111626182.XA priority Critical patent/CN114001706B/en
Publication of CN114001706A publication Critical patent/CN114001706A/en
Application granted granted Critical
Publication of CN114001706B publication Critical patent/CN114001706B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C1/00: Measuring angles

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a heading angle estimation method and device, an electronic device, and a storage medium. The heading angle estimation method comprises: acquiring a first point cloud frame and a second point cloud frame collected by a sensor; determining respective planar projection representations of the first point cloud frame and the second point cloud frame; transforming the respective planar projection representations into polar coordinate representations; and estimating a heading angle of the sensor from the respective polar coordinate representations. Because the polar coordinate representations decouple the rotation and translation components of the sensor's motion, estimating the heading angle based on them yields a reliable heading angle estimate, which in turn supports reliable global pose estimation.

Description

Heading angle estimation method and device, electronic device and storage medium
Technical Field
Embodiments of the invention relate to the field of computer technology, and in particular to a heading angle estimation method and device, an electronic device, and a storage medium.
Background
In the field of autonomous driving, laser sensors are commonly used as sensing devices for pose estimation, enabling accurate localization of a vehicle. Typically, vehicle pose estimation relies on feature matching or on a gradient-descent pose estimator.
However, feature matching does not consider the structural constraints of the point cloud as a whole, so it cannot yield reliable global pose estimates. Likewise, registration methods such as Iterative Closest Point (ICP) have a small convergence basin and depend on the initial value, so they also cannot yield reliable global pose estimates.
Therefore, a reliable global pose estimation scheme is needed.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, an electronic device, and a storage medium for estimating a heading angle, so as to at least partially solve the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a heading angle estimation method, including: acquiring a first point cloud frame and a second point cloud frame collected by a sensor; determining respective planar projection representations of the first point cloud frame and the second point cloud frame; transforming the respective planar projection representations into polar coordinate representations; and estimating a heading angle of the sensor from the respective polar coordinate representations, wherein the heading angle indicates the angle between a first position where the sensor collects the first point cloud frame and a second position where the sensor collects the second point cloud frame.
In other examples, transforming the respective planar projection representations of the first point cloud frame and the second point cloud frame into polar coordinate representations includes: transforming the respective planar projection representations into polar coordinate representations according to the Radon transform.
In other examples, estimating the heading angle of the sensor from the respective polar coordinate representations of the first and second point cloud frames includes: extracting respective translation-invariant representations of the two frames from their polar coordinate representations; and estimating the heading angle of the sensor from the respective translation-invariant representations.
In other examples, extracting the respective translation-invariant representations from the respective polar coordinate representations includes: extracting, via a one-dimensional discrete Fourier transform, the spectral information constituting the translation-invariant representation of each frame from its polar coordinate representation.
In other examples, estimating the heading angle of the sensor from the respective translation-invariant representations includes: computing, via a one-dimensional discrete Fourier transform over the spectral information indicating the translation invariants, frequency-domain information for each of the two translation-invariant representations; computing the product of the two pieces of frequency-domain information; and converting the product back to the spatial domain via a one-dimensional inverse discrete Fourier transform, then taking the angle value at the maximum-response coordinate as the heading angle of the sensor.
In other examples, acquiring the first and second point cloud frames collected by the sensor includes: acquiring a current point cloud frame and a historical point cloud frame collected by the sensor as the first point cloud frame and the second point cloud frame, respectively.
In other examples, acquiring the first and second point cloud frames collected by the sensor includes: acquiring a current point cloud frame collected by the sensor as the first point cloud frame; and acquiring a point cloud submap generated by fusing a plurality of historical point cloud frames as the second point cloud frame.
In other examples, determining the planar projection representation of the second point cloud frame includes: determining a planar projection representation of the point cloud submap; and inputting that planar projection representation into a pre-trained neural network to obtain the planar projection representation of the second point cloud frame.
According to a second aspect of the embodiments of the present invention, there is provided a heading angle estimation device, including: an acquisition module that acquires a first point cloud frame and a second point cloud frame collected by a sensor; a determining module that determines respective planar projection representations of the two frames; a transformation module that transforms the respective planar projection representations into polar coordinate representations; and an estimation module that estimates a heading angle of the sensor from the respective polar coordinate representations, wherein the heading angle indicates the angle between a first position where the sensor collects the first point cloud frame and a second position where the sensor collects the second point cloud frame.
According to a third aspect of the embodiments of the present invention, there is provided an electronic device, including a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory stores at least one executable instruction that causes the processor to perform the operations of the method of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
In the solution of the embodiments of the invention, the polar coordinate representations of the first and second point cloud frames decouple the rotation and translation components of the sensor's motion; estimating the heading angle of the sensor based on these representations therefore yields a reliable heading angle estimate, which in turn supports reliable global pose estimation.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in their description are briefly introduced below. The drawings described below cover only some embodiments of the present invention; a person skilled in the art may derive other drawings from them.
FIG. 1A is a schematic diagram of an exemplary vehicle movement trajectory.
FIG. 1B is a schematic diagram of another exemplary vehicle movement trajectory.
FIG. 2 is a flowchart illustrating a method for estimating a heading angle according to another embodiment of the invention.
FIG. 3 is a schematic block diagram of a heading angle estimation method according to another embodiment of the present invention.
FIG. 4 is a block diagram of a heading angle estimation device according to another embodiment of the invention.
FIG. 5 is a schematic structural diagram of an electronic device according to another embodiment of the invention.
Detailed Description
To help those skilled in the art better understand the technical solutions in the embodiments of the present invention, those solutions are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person skilled in the art based on these embodiments shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
FIG. 1A is a schematic diagram of a vehicle movement trajectory according to one example. As shown in FIG. 1A, the vehicle's path is essentially a translation. When estimating the vehicle's pose, a point cloud frame can be converted into a planar projection representation, and the sensor's heading angle can then be estimated from that representation. Because the motion is essentially translational, the heading angle's trajectory is also essentially translational, so the planar projection representation yields a reliable heading angle estimate.
FIG. 1B is a schematic diagram of a vehicle movement trajectory according to another example. As shown in FIG. 1B, the vehicle's path is a turn. When estimating the vehicle's pose, a point cloud frame may likewise be converted into a planar projection representation from which the heading angle is estimated; in this case, however, the vehicle's rotation is coupled with its translation in the planar projection, which motivates the polar coordinate representation described below.
FIG. 2 is a flowchart illustrating a heading angle estimation method according to another embodiment of the invention. The solution of this embodiment may be applied to any suitable electronic device with data-processing capability, including but not limited to a server, a desktop computer, or a mobile terminal such as a mobile phone or an in-vehicle unit; the electronic device may be deployed in a robot or a vehicle to perform pose estimation based on the heading angle.
The heading angle estimation method of this embodiment comprises the following steps:
S210: acquire a first point cloud frame and a second point cloud frame collected by a sensor.
It is to be understood that there may be one or more sensors; in the case of multiple sensors, the relative positional relationship among them may be determined.
It should also be understood that the first and second point cloud frames may each be a current point cloud frame or a historical point cloud frame. In addition, each may be a single point cloud frame or a group of point cloud frames; for example, at least one of the first and second point cloud frames may be a point cloud submap.
S220: determine respective planar projection representations of the first point cloud frame and the second point cloud frame.
It should be understood that the planar projection representation may be a bird's-eye view (BEV) or another projection based on the local coordinate system of the sensor that collects the point cloud frames. For example, the sensor collects the first point cloud frame at a first position, yielding a first projection view, and collects the second point cloud frame at a second position, yielding a second projection view. A minimal sketch of such a BEV projection is given below.
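The following is a minimal, illustrative sketch of a BEV occupancy projection, not part of the patent itself; it assumes a NumPy point array of shape (N, 3), and the grid resolution and metric extent are illustrative assumptions, since the embodiment does not fix them:

```python
import numpy as np

def bev_projection(points, grid_size=256, extent=50.0):
    """Project an (N, 3) point cloud onto a BEV occupancy grid.

    grid_size and extent are illustrative assumptions (cells per side,
    half-width of the square window in metres around the sensor).
    """
    # Keep returns inside a square window of +/- extent metres.
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    xy = points[mask, :2]
    # 2D histogram over the x-y plane; occupancy = at least one return per cell.
    hist, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1],
        bins=grid_size,
        range=[[-extent, extent], [-extent, extent]],
    )
    return (hist > 0).astype(np.float32)
```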
S230: transform the respective planar projection representations of the first point cloud frame and the second point cloud frame into polar coordinate representations.
It is to be understood that the polar coordinate representation may be generated by re-parameterizing each frame's projection map in terms of an angle parameter and a distance parameter.
It is also to be understood that the polar coordinate representations of the first and second point cloud frames may be derived through a coordinate transformation, such as the Radon transform.
S240: estimate a heading angle of the sensor from the respective polar coordinate representations of the first point cloud frame and the second point cloud frame, the heading angle indicating the angle between a first position where the sensor collects the first point cloud frame and a second position where the sensor collects the second point cloud frame.
It is to be understood that the heading angle of the sensor can be determined from the respective polar coordinate representations by an algorithm such as the discrete Fourier transform. Alternatively, angular features may be extracted from the respective polar coordinate representations of the two frames, and the heading angle determined from those features.
In the solution of the embodiments of the invention, the polar coordinate representations of the first and second point cloud frames decouple the rotation and translation components of the sensor's motion; estimating the heading angle of the sensor based on these representations therefore yields a reliable heading angle estimate, which in turn supports reliable global pose estimation.
Further, a vehicle or robot may be equipped with at least one sensor. The heading angle of each sensor may be estimated by the method above, and the pose of the vehicle or robot in the world coordinate system may then be estimated from the heading angle of the sensor and its mounting position on the vehicle or robot.
Further, the first coordinate system in which the first point cloud frame is collected and the second coordinate system in which the second point cloud frame is collected are both local sensor coordinate systems. A first coordinate transformation relationship exists between the first coordinate system and the world coordinate system (indicated, for example, by a first transformation matrix), and a second coordinate transformation relationship exists between the second coordinate system and the world coordinate system (indicated, for example, by a second transformation matrix). The first and second poses of the sensor in the world coordinate system may be estimated from the first and second coordinate transformation relationships, respectively.
In some examples, the first coordinate transformation relationship may be converted into the second coordinate transformation relationship based on the heading angle; for example, the first transformation matrix may be converted into the second transformation matrix, as sketched below. This achieves an efficient conversion between the two coordinate transformation relationships, reduces the data-processing load, and further facilitates pose estimation.
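As a hedged illustration of this conversion, under a planar-motion assumption the patent does not spell out, the heading angle can be applied as a 2D rotation to a homogeneous transform; the translation between the two poses would still have to be estimated separately:

```python
import numpy as np

def apply_heading(T_first, heading_rad):
    """Rotate a 3x3 homogeneous sensor-to-world transform by the heading.

    Illustrative only: assumes planar motion, so only the rotation
    component of the first-to-second conversion is captured here.
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # Approximate second transform = heading rotation composed with the first.
    return R @ T_first
```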
It will be appreciated that the sensor that collects the first and second point cloud frames may be a first sensor whose acquisition direction is substantially parallel to the plane in which the wheels of the autonomous vehicle lie; accordingly, the estimated heading angle of the first sensor indicates the turning angle of the vehicle, i.e., the yaw angle.
It will also be appreciated that the sensor that collects the first and second point cloud frames may be a second sensor whose acquisition direction is perpendicular to the plane in which the wheels of the autonomous vehicle lie and substantially parallel to the fore-aft plane; accordingly, the estimated heading angle of the second sensor indicates the pitch angle of the vehicle.
Since attitude has three dimensions, the sensor that collects the first and second point cloud frames may also be a third sensor whose acquisition direction is perpendicular to the plane in which the wheels of the autonomous vehicle lie and substantially parallel to the left-right plane; accordingly, the estimated heading angle of the third sensor indicates the roll angle of the vehicle.
An autonomous driving system or an electronic control unit (ECU) of the vehicle may use the output of at least one of the first, second, and third sensors to perform pose estimation during autonomous driving and to estimate the surface state of the road on which the vehicle is traveling.
Further, the vehicle may be a road-surface test vehicle equipped with the first, second, and third sensors. The test vehicle obtains the polar coordinate representation of the corresponding point cloud frames from each sensor. Because the polar representation decouples the rotation and translation components of the sensors' motion, a global representation of the heading angles at different positions can be obtained, i.e., a global representation in terms of yaw, pitch, and roll, which the test vehicle then reports to a server for building a three-dimensional high-precision map.
In other examples, transforming the respective planar projection representations of the first and second point cloud frames into polar coordinate representations includes: transforming them according to the Radon transform. The Radon transform improves data-processing efficiency in producing the polar coordinate representation.
In particular, the sinogram produced by the Radon transform is parameterized by the slope and offset of the scan line, and each pixel represents the integral along the corresponding parameterized line.
In addition, the heading angle between the first and second point cloud frames appears only as an offset along the slope-related axis, whereas the translation distance appears as an offset along the offset-related axis that is coupled with the slope.
In the Radon transform, the values at the different scan intercepts can be taken as a characterization of the scan angle. When computing the angle, the integral values corresponding to each scan angle serve as that angle's feature, and the angular shift with the highest feature correlation, found by traversing along the angle direction, gives the relative angle between the two point clouds.
To remove the influence of translation on the angle features, a one-dimensional Fourier transform is applied along the scan-intercept dimension, and its spectrum is taken as the angle feature, since the spectral magnitude is invariant to shifts along that dimension.
To accelerate the traversal that finds the highest global correlation along the angle direction, a further one-dimensional FFT is applied along the angle direction, the correlation is computed quickly as a product in the frequency domain, and an inverse transform then recovers the result; the position of the maximum of the result corresponds to the highest correlation. A sketch of the Radon-transform step is given below.
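The following is a minimal sketch of producing the sinogram from a BEV image; it assumes scikit-image's radon function and a 360-degree angle sweep, both illustrative choices rather than details fixed by the embodiment:

```python
import numpy as np
from skimage.transform import radon  # assumed dependency, not named in the patent

def to_sinogram(bev, n_angles=360):
    """Radon transform of a BEV image into a sinogram (polar representation)."""
    # Scan a full revolution so the heading is resolved over 360 degrees.
    theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    # Each column is the line-integral profile at one scan angle; a rotation
    # between two frames appears as a circular shift along the angle axis,
    # while a translation shifts values along the intercept axis.
    return radon(bev, theta=theta)  # shape: (n_intercepts, n_angles)
```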
In other examples, estimating the heading angle of the sensor from the respective polar coordinate representations of the first and second point cloud frames includes: extracting respective translation-invariant representations of the two frames from their polar coordinate representations; and estimating the heading angle of the sensor from those translation-invariant representations. Because the translation invariant decouples the translation component, the accuracy of the heading angle estimate is improved.
In particular, the translation invariant further eliminates the offset along the offset-related axis, leaving a representation that is unchanged under translation, so the problem can be solved effectively by a circular cross-correlation along the slope-related axis. The translation-invariant representation is determined from the sinogram generated by the Radon transform.
In other examples, extracting the respective translation-invariant representations of the first and second point cloud frames from their respective polar coordinate representations includes: extracting, via a one-dimensional discrete Fourier transform, the spectral information constituting each frame's translation-invariant representation from its polar coordinate representation.
In other examples, estimating the heading angle of the sensor from the respective translation-invariant representations includes: computing, via a one-dimensional discrete Fourier transform over the spectral information indicating the translation invariants, frequency-domain information for each of the two translation-invariant representations; computing the product of the two pieces of frequency-domain information; and converting the product back to the spatial domain via a one-dimensional inverse discrete Fourier transform, then taking the angle value at the maximum-response coordinate as the heading angle of the sensor. A sketch of these steps is given below.
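The following sketch assembles the steps above, assuming the sinograms produced earlier; the per-axis FFT layout and the bin-to-degree scaling (a 360-degree sweep) are illustrative assumptions:

```python
import numpy as np

def estimate_heading(sino_a, sino_b):
    """Heading angle (degrees) from two sinograms of shape (intercepts, angles).

    A sketch of the described pipeline: the magnitude spectrum along the
    intercept axis removes the translation component, and a circular
    cross-correlation along the angle axis (via FFT) finds the rotation.
    """
    # 1) Translation-invariant representation: 1D DFT magnitude over intercepts.
    spec_a = np.abs(np.fft.fft(sino_a, axis=0))
    spec_b = np.abs(np.fft.fft(sino_b, axis=0))
    # 2) Frequency-domain information along the angle axis.
    fa = np.fft.fft(spec_a, axis=1)
    fb = np.fft.fft(spec_b, axis=1)
    # 3) Product in the frequency domain = circular correlation in the angle domain.
    corr = np.fft.ifft(fa * np.conj(fb), axis=1).real
    # 4) Sum responses over intercept-frequency rows; the arg-max shift
    #    is the relative rotation between the two frames.
    response = corr.sum(axis=0)
    shift = int(np.argmax(response))
    n_angles = sino_a.shape[1]
    return shift * 360.0 / n_angles  # assumes a 360-degree sinogram sweep
```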
A heading angle estimation method according to another embodiment of the present invention is described in detail below with reference to FIG. 3. As shown in FIG. 3:
First, planar projection representations of the first point cloud frame and the second point cloud frame are determined.
Then, the respective planar projection representations of the first and second point cloud frames are transformed into polar coordinate representations.
Then, from the spectral information indicating the translation invariants, the frequency-domain information of the respective translation-invariant representations of the first and second point cloud frames is computed via a one-dimensional discrete Fourier transform.
Then, the product of the two pieces of frequency-domain information is computed.
Finally, the product is converted back to the spatial domain via a one-dimensional inverse discrete Fourier transform, and the angle value corresponding to the maximum-response coordinate is taken as the heading angle of the sensor. An end-to-end usage of the sketches above follows.
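Tying the illustrative sketches together; the frame arrays here are hypothetical placeholders, not data from the patent:

```python
# Hypothetical (N, 3) arrays of LiDAR returns for the two frames.
bev_a = bev_projection(first_frame_points)   # first point cloud frame
bev_b = bev_projection(second_frame_points)  # second point cloud frame
heading_deg = estimate_heading(to_sinogram(bev_a), to_sinogram(bev_b))
print(f"estimated heading angle: {heading_deg:.1f} degrees")
```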
In other examples, acquiring the first and second point cloud frames collected by the sensor includes: acquiring a current point cloud frame and a historical point cloud frame collected by the sensor as the first point cloud frame and the second point cloud frame, respectively. In this case the two frames have the same density; for example, both may be single-frame point clouds.
In other examples, acquiring the first and second point cloud frames collected by the sensor includes: acquiring a current point cloud frame collected by the sensor as the first point cloud frame; and acquiring a point cloud submap generated by fusing a plurality of historical point cloud frames as the second point cloud frame.
In other examples, determining the planar projection representation of the second point cloud frame includes: determining a planar projection representation of the point cloud submap; and inputting that planar projection representation into a pre-trained neural network to obtain the planar projection representation of the second point cloud frame. The pre-trained network performs dense feature extraction, improving the correspondence between the polar coordinate representations of the first and second point cloud frames and thus the reliability of the heading angle estimate. In other words, the network extracts features that are consistent across point clouds with different spatial distributions, improving angle-estimation performance in scenarios such as occlusion or matching a single frame against a point cloud submap. A hypothetical stand-in for such a network is sketched below.
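The patent does not disclose the network architecture; the fully convolutional stack below is purely a hypothetical stand-in showing the input/output contract (a submap BEV in, a single-frame-like BEV out):

```python
import torch
import torch.nn as nn

class SubmapAdapter(nn.Module):
    """Hypothetical dense-feature network for the submap BEV.

    Illustrative only: a resolution-preserving convolutional stack, not
    the pre-trained network actually used in the embodiment.
    """
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, submap_bev):
        # (B, 1, H, W) -> (B, 1, H, W): dense features at full resolution.
        return self.net(submap_bev)
```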
FIG. 4 is a block diagram of a heading angle estimation device according to another embodiment of the invention. The solution of this embodiment may be applied to any suitable electronic device with data-processing capability, including but not limited to a server, a desktop computer, or a mobile terminal such as a mobile phone or an in-vehicle unit; the electronic device may be deployed in a robot or a vehicle to perform pose estimation based on the heading angle.
The heading angle estimation device of this embodiment includes:
An acquisition module 410, which acquires a first point cloud frame and a second point cloud frame collected by a sensor.
A determining module 420, which determines respective planar projection representations of the first point cloud frame and the second point cloud frame.
A transformation module 430, which transforms the respective planar projection representations into polar coordinate representations.
An estimation module 440, which estimates a heading angle of the sensor from the respective polar coordinate representations, the heading angle indicating the angle between a first position where the sensor collects the first point cloud frame and a second position where the sensor collects the second point cloud frame.
In the solution of the embodiments of the invention, the polar coordinate representations of the first and second point cloud frames decouple the rotation and translation components of the sensor's motion; estimating the heading angle of the sensor based on these representations therefore yields a reliable heading angle estimate, which in turn supports reliable global pose estimation.
Further, a vehicle or robot may be equipped with at least one sensor. The heading angle of each sensor may be estimated by the method above, and the pose of the vehicle or robot in the world coordinate system may then be estimated from the heading angle of the sensor and its mounting position on the vehicle or robot.
In other examples, the transformation module is specifically configured to transform the respective planar projection representations of the first and second point cloud frames into polar coordinate representations according to the Radon transform.
In other examples, the estimation module is specifically configured to: extract respective translation-invariant representations of the first and second point cloud frames from their respective polar coordinate representations; and estimate the heading angle of the sensor from the respective translation-invariant representations.
In other examples, the estimation module is specifically configured to extract, via a one-dimensional discrete Fourier transform, the spectral information constituting each frame's translation-invariant representation from its polar coordinate representation.
In other examples, the estimation module is specifically configured to: compute, via a one-dimensional discrete Fourier transform over the spectral information indicating the translation invariants, frequency-domain information for each of the two translation-invariant representations; compute the product of the two pieces of frequency-domain information; and convert the product back to the spatial domain via a one-dimensional inverse discrete Fourier transform, then take the angle value at the maximum-response coordinate as the heading angle of the sensor.
In other examples, the acquisition module is specifically configured to acquire a current point cloud frame and a historical point cloud frame collected by the sensor as the first point cloud frame and the second point cloud frame, respectively.
In some examples, the acquisition module is specifically configured to: acquire a current point cloud frame collected by the sensor as the first point cloud frame; and acquire a point cloud submap generated by fusing a plurality of historical point cloud frames as the second point cloud frame.
In other examples, the determining module is specifically configured to: determine a planar projection representation of the point cloud submap; and input that planar projection representation into a pre-trained neural network to obtain the planar projection representation of the second point cloud frame.
The apparatus of this embodiment is used to implement the corresponding method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiment, and is not described herein again.
Referring to FIG. 5, a schematic structural diagram of an electronic device according to another embodiment of the present invention is shown; the specific implementation of the electronic device is not limited by this embodiment.
As shown in FIG. 5, the electronic device may include: a processor 502, a communication interface 504, a memory 506, and a communication bus 508.
The processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508.
The communication interface 504 is used for communicating with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the above method embodiments.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The one or more processors included in the device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 506 stores the program 510. The memory 506 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The program 510 may specifically cause the processor 502 to perform the following operations: acquiring a first point cloud frame and a second point cloud frame collected by a sensor; determining respective planar projection representations of the first point cloud frame and the second point cloud frame; transforming the respective planar projection representations into polar coordinate representations; and estimating a heading angle of the sensor from the respective polar coordinate representations, wherein the heading angle indicates the angle between a first position where the sensor collects the first point cloud frame and a second position where the sensor collects the second point cloud frame.
In addition, for specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing method embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium (such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk), or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded through a network, and stored in a local recording medium, so that the method described herein can be processed by such software on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the method described herein. Further, when a general-purpose computer accesses code for implementing the method shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing that method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are only for illustrating the embodiments of the present invention and not for limiting the embodiments of the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so that all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention should be defined by the claims.

Claims (7)

1. A heading angle estimation method, applied to an autonomous vehicle, comprising the following steps:
acquiring a current point cloud frame collected by a laser sensor as a first point cloud frame, wherein the laser sensor is arranged in the autonomous vehicle;
acquiring a point cloud submap generated by fusing a plurality of historical point cloud frames as a second point cloud frame;
determining a planar projection representation of the first point cloud frame;
determining a planar projection representation of the point cloud submap;
inputting the planar projection representation of the point cloud submap into a pre-trained neural network to obtain the planar projection representation of the second point cloud frame, wherein the pre-trained neural network performs dense feature extraction to extract features that are consistent across point clouds with different spatial distributions;
transforming the respective planar projection representations of the first point cloud frame and the second point cloud frame into polar coordinate representations according to the Radon transform; and
estimating a heading angle of the laser sensor from the respective polar coordinate representations of the first point cloud frame and the second point cloud frame, wherein the heading angle indicates the angle between a first position where the laser sensor collects the first point cloud frame and a second position where the laser sensor collects the second point cloud frame, and the heading angle is used to estimate the pose of the autonomous vehicle based on the position of the laser sensor on the autonomous vehicle.
2. The method of claim 1, wherein estimating the heading angle of the laser sensor from the respective polar coordinate representations of the first point cloud frame and the second point cloud frame comprises:
extracting respective translation-invariant representations of the first point cloud frame and the second point cloud frame from their respective polar coordinate representations; and
estimating the heading angle of the laser sensor from the respective translation-invariant representations.
3. The method of claim 2, wherein extracting the respective translation-invariant representations of the first point cloud frame and the second point cloud frame from their respective polar coordinate representations comprises:
extracting, via a one-dimensional discrete Fourier transform, the spectral information constituting each frame's translation-invariant representation from its polar coordinate representation.
4. The method of claim 3, wherein estimating the heading angle of the laser sensor from the respective translation-invariant representations comprises:
computing, via a one-dimensional discrete Fourier transform over the spectral information representing the translation invariants, frequency-domain information for each of the two translation-invariant representations;
computing the product of the two pieces of frequency-domain information; and
converting the product back to the spatial domain via a one-dimensional inverse discrete Fourier transform, and taking the angle value corresponding to the maximum-response coordinate as the heading angle of the laser sensor.
5. A heading angle estimation device, comprising:
an acquisition module, which acquires a current point cloud frame collected by a laser sensor as a first point cloud frame, the laser sensor being arranged in an autonomous vehicle, and acquires a point cloud submap generated by fusing a plurality of historical point cloud frames as a second point cloud frame;
a determining module, which determines a planar projection representation of the first point cloud frame, determines a planar projection representation of the point cloud submap, and inputs the planar projection representation of the point cloud submap into a pre-trained neural network to obtain the planar projection representation of the second point cloud frame, the pre-trained neural network performing dense feature extraction to extract features that are consistent across point clouds with different spatial distributions;
a transformation module, which transforms the respective planar projection representations of the first point cloud frame and the second point cloud frame into polar coordinate representations according to the Radon transform; and
an estimation module, which estimates a heading angle of the laser sensor from the respective polar coordinate representations of the first point cloud frame and the second point cloud frame, wherein the heading angle indicates the angle between a first position where the laser sensor collects the first point cloud frame and a second position where the laser sensor collects the second point cloud frame, and the heading angle is used to estimate the pose of the autonomous vehicle based on the position of the laser sensor on the autonomous vehicle.
6. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory stores at least one executable instruction that causes the processor to perform the operations corresponding to the method of any one of claims 1-4.
7. A computer storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of any one of claims 1 to 4.
CN202111626182.XA 2021-12-29 2021-12-29 Heading angle estimation method and device, electronic device and storage medium Active CN114001706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111626182.XA CN114001706B (en) 2021-12-29 2021-12-29 Heading angle estimation method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111626182.XA CN114001706B (en) 2021-12-29 2021-12-29 Heading angle estimation method and device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN114001706A (en) 2022-02-01
CN114001706B (en) 2022-04-29

Family

ID=79932109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111626182.XA Active CN114001706B (en) 2021-12-29 2021-12-29 Heading angle estimation method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114001706B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115267724B (en) * 2022-07-13 2023-08-29 浙江大学 Position re-identification method of mobile robot capable of estimating pose based on laser radar
CN115932868A (en) * 2022-11-21 2023-04-07 浙江大学 Mobile robot efficient robust global positioning method based on laser radar

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006186626A (en) * 2004-12-27 2006-07-13 Sanyo Electric Co Ltd Device and method for embedding watermark and for extracting it
CN104050631A (en) * 2013-11-25 2014-09-17 中国科学院上海应用物理研究所 Low-dose CT image reconstruction method
CN106682689A (en) * 2016-12-16 2017-05-17 西安汇明光电技术有限公司 Image matching method based on multiscale Fourier-Mellin transform
CN109844770A (en) * 2016-08-08 2019-06-04 高盛有限责任公司 Learn the system and method with predicted time sequence data for using inertia autocoder
CN110928312A (en) * 2019-12-16 2020-03-27 深圳市银星智能科技股份有限公司 Robot position determination method, non-volatile computer-readable storage medium, and robot
CN113419265A (en) * 2021-06-15 2021-09-21 湖南北斗微芯数据科技有限公司 Positioning method and device based on multi-sensor fusion and electronic equipment
CN214583238U (en) * 2021-01-29 2021-11-02 成都麦宁富研科技有限公司 Surveying and mapping device based on multiple sensors and mobile surveying and mapping vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657735B (en) * 2013-11-21 2018-01-23 比亚迪股份有限公司 Method for detecting lane lines, system, lane departure warning method and system
CN110221276B (en) * 2019-05-31 2023-09-29 文远知行有限公司 Laser radar calibration method, device, computer equipment and storage medium
CN112016441B (en) * 2020-08-26 2023-10-13 大连海事大学 Extraction method of Sentinel-1 image coastal zone culture pond based on Radon transformation multi-feature fusion
CN112669381B (en) * 2020-12-28 2021-09-21 北京达佳互联信息技术有限公司 Pose determination method and device, electronic equipment and storage medium
CN113567978B (en) * 2021-07-29 2023-04-25 电子科技大学 Multi-base distributed radar collaborative imaging method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006186626A (en) * 2004-12-27 2006-07-13 Sanyo Electric Co Ltd Device and method for embedding watermark and for extracting it
CN104050631A (en) * 2013-11-25 2014-09-17 中国科学院上海应用物理研究所 Low-dose CT image reconstruction method
CN109844770A (en) * 2016-08-08 2019-06-04 高盛有限责任公司 Learn the system and method with predicted time sequence data for using inertia autocoder
CN106682689A (en) * 2016-12-16 2017-05-17 西安汇明光电技术有限公司 Image matching method based on multiscale Fourier-Mellin transform
CN110928312A (en) * 2019-12-16 2020-03-27 深圳市银星智能科技股份有限公司 Robot position determination method, non-volatile computer-readable storage medium, and robot
CN214583238U (en) * 2021-01-29 2021-11-02 成都麦宁富研科技有限公司 Surveying and mapping device based on multiple sensors and mobile surveying and mapping vehicle
CN113419265A (en) * 2021-06-15 2021-09-21 湖南北斗微芯数据科技有限公司 Positioning method and device based on multi-sensor fusion and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A survey of autonomous navigation and guidance techniques for indoor UAVs relying on onboard sensors; Ni Lei et al.; Computer Applications and Software; 2012-08-15 (Issue 08); full text *
Radar target recognition based on zero-phase spectrum; Cao Xianghai et al.; Modern Radar; 2007-09-15 (Issue 09); full text *

Also Published As

Publication number Publication date
CN114001706A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN107481292B (en) Attitude error estimation method and device for vehicle-mounted camera
CN114001706B (en) Course angle estimation method and device, electronic equipment and storage medium
CN112179330B (en) Pose determination method and device of mobile equipment
CN111812658B (en) Position determination method, device, system and computer readable storage medium
CN111443359B (en) Positioning method, device and equipment
CN112997187A (en) Two-dimensional object bounding box information estimation based on aerial view point cloud
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN111986261B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN111427028B (en) Parameter monitoring method, device, equipment and storage medium
CN113945937A (en) Precision detection method, device and storage medium
CN111627001A (en) Image detection method and device
CN114820749A (en) Unmanned vehicle underground positioning method, system, equipment and medium
CN115436920A (en) Laser radar calibration method and related equipment
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN117590362B (en) Multi-laser radar external parameter calibration method, device and equipment
JP5928010B2 (en) Road marking detection apparatus and program
CN113744236B (en) Loop detection method, device, storage medium and computer program product
CN108917768B (en) Unmanned aerial vehicle positioning navigation method and system
CN111784798A (en) Map generation method and device, electronic equipment and storage medium
KR20210030136A (en) Apparatus and method for generating vehicle data, and vehicle system
CN109711363A (en) Vehicle positioning method, device, equipment and storage medium
CN114820955B (en) Symmetric plane completion method, device, equipment and storage medium
CN113029166B (en) Positioning method, positioning device, electronic equipment and storage medium
CN115700507B (en) Map updating method and device
CN117649619B (en) Unmanned aerial vehicle visual navigation positioning recovery method, system, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220201

Assignee: Hangzhou Jinyong Technology Co.,Ltd.

Assignor: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

Contract record no.: X2024980001317

Denomination of invention: Heading angle estimation method, device, electronic device, and storage medium

Granted publication date: 20220429

License type: Common License

Record date: 20240123

Application publication date: 20220201

Assignee: Golden Wheat Brand Management (Hangzhou) Co.,Ltd.

Assignor: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

Contract record no.: X2024980001316

Denomination of invention: Heading angle estimation method, device, electronic device, and storage medium

Granted publication date: 20220429

License type: Common License

Record date: 20240123

Application publication date: 20220201

Assignee: Hangzhou Xinlong Huazhi Trademark Agency Co.,Ltd.

Assignor: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

Contract record no.: X2024980001315

Denomination of invention: Heading angle estimation method, device, electronic device, and storage medium

Granted publication date: 20220429

License type: Common License

Record date: 20240123