CN114882115B - Vehicle pose prediction method and device, electronic equipment and storage medium - Google Patents

Vehicle pose prediction method and device, electronic equipment and storage medium

Info

Publication number
CN114882115B
Authority
CN
China
Prior art keywords
image information
transformation matrix
target
checkerboard image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210657149.1A
Other languages
Chinese (zh)
Other versions
CN114882115A
Inventor
李禹亮 (Li Yuliang)
尚进 (Shang Jin)
於大维 (Yu Dawei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd filed Critical Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202210657149.1A
Publication of CN114882115A
Application granted
Publication of CN114882115B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a vehicle pose prediction method and device, electronic equipment, and a storage medium. The method includes: acquiring first checkerboard image information and second checkerboard image information; determining a first transformation matrix according to first key points in the first checkerboard image information and second key points in the second checkerboard image information; determining a second transformation matrix according to the first transformation matrix, the first plane, and original checkerboard image information in original coordinates; and determining a pose prediction result of the target vehicle according to the second transformation matrix. The method addresses the problems in the related art that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.

Description

Vehicle pose prediction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of unmanned vehicles, and in particular, to a method and apparatus for predicting a pose of a vehicle, an electronic device, and a storage medium.
Background
The positioning system is currently an important module in unmanned driving systems. It predicts the position and attitude of the vehicle during operation and provides data for subsequent tasks such as path planning. The accuracy of pose prediction directly determines the operational stability of the unmanned system and, in turn, the safety of the passengers in the vehicle.
In the related art, the vehicle pose is generally predicted by combining hardware such as a binocular camera, a lidar, and on-board GPS, IMU, and wheel-speed sensors with deep learning or other machine-learning optimization algorithms. However, acquiring data from the IMU, the wheel-speed sensor, and the like requires parsing CAN signals or other complex signal processing, which places certain demands on the programming skills of the staff; moreover, a larger number of sensors need to be calibrated, which is very time-consuming and labor-intensive, and the sensors themselves introduce certain errors during use.
Therefore, the related art suffers from the problems that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.
Disclosure of Invention
The application provides a vehicle pose prediction method and device, electronic equipment, and a storage medium, which at least solve the problems in the related art that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.
According to an aspect of an embodiment of the present application, there is provided a method for predicting a vehicle pose, the method including:
acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of a target vehicle;
Determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing mapping between a first plane where the first checkerboard image information is located and a second plane where the second checkerboard image information is located;
determining a second transformation matrix according to the first transformation matrix, the first plane and original checkerboard image information under original coordinates, wherein the second transformation matrix is used for representing the corresponding relation between intersection point information in the second plane and intersection point information in the original checkerboard image information;
and determining a pose prediction result of the target vehicle according to the second transformation matrix.
According to another aspect of the embodiment of the present application, there is also provided a device for predicting a vehicle pose, the device including:
the first acquisition module is used for acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of a target vehicle;
The first determining module is used for determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing mapping between a first plane where the first checkerboard image information is located and a second plane where the second checkerboard image information is located;
the second determining module is used for determining a second transformation matrix according to the first transformation matrix, the first plane and the original checkerboard image information under the original coordinates, wherein the second transformation matrix is used for representing the corresponding relation between the intersection point information in the second plane and the intersection point information in the original checkerboard image information;
and the third determining module is used for determining the pose prediction result of the target vehicle according to the second transformation matrix.
Optionally, the first determining module includes:
an extracting unit, configured to extract a key point in the first checkerboard image information by using a first target scheme, as the first key point, and extract a key point in the second checkerboard image information by using the first target scheme, as the second key point;
The matching unit is used for carrying out feature matching on the first key point and the second key point by utilizing a second target scheme, and determining a target key point pair;
and the first determining unit is used for determining the first transformation matrix according to the target key point pair and a third target scheme.
Optionally, the extraction unit includes:
a searching sub-module for searching image positions on all scale spaces in the first checkerboard image information and the second checkerboard image information respectively by using the first target scheme;
the extraction sub-module is used for respectively extracting the image features on the image positions according to preset conditions to obtain the first key points and the second key points, wherein the first key points comprise the position information and the description attribute of the first key points, and the second key points comprise the position information and the description attribute of the second key points.
Optionally, the matching unit includes:
the matching sub-module is used for performing similarity matching on the first key point and a plurality of second key points;
the selecting sub-module is used for selecting a preset number of third key points according to the similarity matching result, wherein the third key points are contained in the second key points;
A determining submodule, configured to determine a target key point from a plurality of third key points according to a distance attribute in the description attribute of the first key point, a distance attribute in the description attribute of the third key point, and a preset distance ratio threshold;
and the generation sub-module is used for combining the target key point with the first key point to generate the target key point pair.
Optionally, in the case where the number of the target key point pairs is plural, the first determining unit includes:
and the obtaining submodule is used for calculating an optimal single mapping transformation matrix among a plurality of target key point pairs by utilizing the third target scheme so as to obtain a transformation matrix between two planes where the target key point pairs are positioned, and the transformation matrix is used as the first transformation matrix.
Optionally, the second determining module includes:
the second determining unit is used for determining a target intersection point belonging to the first checkerboard image information in the second plane according to the first transformation matrix and the first plane;
an acquisition unit for acquiring a pixel position of the target intersection point;
and the first obtaining unit is used for obtaining the second transformation matrix according to the pixel positions of the target intersection points and the pixel positions of all intersection points in the original checkerboard image information.
Optionally, the third determining module includes:
the generating unit is used for acquiring a plurality of second transformation matrices and generating continuous pose change information;
and the second obtaining unit is used for obtaining the pose prediction result of the target vehicle according to the continuous pose change information.
According to still another aspect of the embodiments of the present application, there is provided an electronic device including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus; wherein the memory is used for storing a computer program; a processor for performing the method steps of any of the embodiments described above by running the computer program stored on the memory.
According to a further aspect of the embodiments of the present application there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the method steps of any of the embodiments described above when run.
In the embodiment of the application, first checkerboard image information and second checkerboard image information are acquired, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of the target vehicle; a first transformation matrix is determined according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for mapping between a first plane in which the first checkerboard image information is located and a second plane in which the second checkerboard image information is located; a second transformation matrix is determined according to the first transformation matrix, the first plane, and the original checkerboard image information in the original coordinates, wherein the second transformation matrix is used for representing the correspondence between the intersection point information in the second plane and the intersection point information in the original checkerboard image information; and a pose prediction result of the target vehicle is determined according to the second transformation matrix. In the embodiment of the application, a checkerboard is arranged on the top of the body of the target vehicle, key points are extracted from the second checkerboard image acquired at the current moment and from the first checkerboard image, the two are related by a matrix mapping, and the positions of the checkerboard intersection points are finally obtained as the key information for predicting the pose of the target vehicle. The method requires little equipment and provides low-cost, high-efficiency vehicle pose estimation with accurate pose calculation, thereby solving the problems in the related art that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic illustration of a hardware environment of an alternative vehicle pose prediction method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of predicting vehicle pose according to embodiments of the application;
FIG. 3 is a schematic diagram of a checkerboard image according to an embodiment of the present application;
FIG. 4 is a block diagram of an alternative vehicle pose prediction apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an alternative electronic device in accordance with an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiment of the application, a vehicle pose prediction method is provided. Alternatively, in the present embodiment, the above-described vehicle pose prediction method may be applied to a hardware environment as shown in fig. 1. As shown in fig. 1, the terminal 102 may include a memory 104, a processor 106, and a display 108 (optional components). The terminal 102 may be communicatively coupled to a server 112 via a network 110, the server 112 being operable to provide services (e.g., application services, etc.) to the terminal or to clients installed on the terminal, and a database 114 may be provided on the server 112 or independent of the server 112 for providing data storage services to the server 112. In addition, a processing engine 116 may be run in the server 112, which processing engine 116 may be used to perform the steps performed by the server 112.
Alternatively, the terminal 102 may be, but is not limited to, a terminal capable of computing data, such as a mobile terminal (e.g., a mobile phone or a tablet computer), a notebook computer, or a PC (Personal Computer). The network 110 may include, but is not limited to, a wireless network or a wired network, wherein the wireless network includes Bluetooth, WIFI (Wireless Fidelity), and other networks that enable wireless communication, and the wired network may include, but is not limited to, a wide area network, a metropolitan area network, and a local area network. The server 112 may include, but is not limited to, any hardware device capable of performing calculations.
In addition, in the present embodiment, the method for predicting the vehicle pose may be applied to, but not limited to, an independent processing device with a relatively high processing capability, without data interaction. For example, the processing device may be, but is not limited to, a terminal device with a relatively high processing power, i.e., each operation in the above-described vehicle pose prediction method may be integrated into a single processing device. The above is merely an example, and is not limited in any way in the present embodiment.
Alternatively, in the present embodiment, the above-described vehicle pose prediction method may be performed by the server 112, may be performed by the terminal 102, or may be performed by both the server 112 and the terminal 102. The method for predicting the vehicle pose performed by the terminal 102 according to the embodiment of the present application may be performed by a client installed thereon.
Taking a server as an example, fig. 2 is a schematic flow chart of an alternative vehicle pose prediction method according to an embodiment of the present application, and as shown in fig. 2, the flow of the method may include the following steps:
step S201, acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information formed by a plurality of black and white squares which are arranged at intervals, and the second checkerboard image information is image information which is acquired at the current moment and is positioned at the top of a body of a target vehicle.
Optionally, an embodiment of the present application provides a roadside sensing system. The roadside sensing system includes a roadside fixing rod on which a camera (a roadside camera) is mounted, with a computing unit installed at the base of the rod. The camera is connected to the computing unit through a cable; the computing unit powers the camera, and the image data are transmitted back to the computing unit through the cable. The computing unit is an independently powered computer system (collectively referred to as the server in the embodiments of the application); it collects the image data of the camera, computes accurate vehicle pose data from the collected images, and transmits the vehicle pose data back to the tested vehicle through a 5G network. Therefore, the embodiment of the application can accurately estimate the pose using only one monocular camera and has high real-time performance.
The following embodiments of the present application mainly describe how the server processes the image data collected by the camera so as to predict the vehicle pose data:
first, an initial checkerboard (which may be a black-and-white checkerboard with a fixed size and formed by a plurality of black and white squares arranged at intervals, and the length of each checkerboard has been measured, for example, 1 cm) is selected, and the corresponding image is shown in fig. 3 as first checkerboard image information. The initial checkerboard is then fixed to the roof of the vehicle, and its positional relationship with the center of the vehicle can be obtained by measurement. When the target vehicle dynamically runs and comes within the shooting range of the camera, immediately shooting a checkerboard image positioned at the top of the body of the target vehicle at the current moment, and taking the checkerboard image as second checkerboard image information.
It can be appreciated that, as the target vehicle moves forward, the corners and other features in the second checkerboard image information captured by the camera differ from those in the first checkerboard image information because of the uncertainty of the vehicle's motion.
Step S202, determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing mapping between a first plane in which the first checkerboard image information is located and a second plane in which the second checkerboard image information is located.
Optionally, after obtaining the first checkerboard image information and the second checkerboard image information, the server applies a feature extraction algorithm to extract the key points contained in the first checkerboard image information as the first key points and the key points contained in the second checkerboard image information as the second key points. A mapping matrix between the two planes is then generated based on the planes in which the first key points and the second key points respectively lie, and this mapping matrix is determined as the first transformation matrix.
Step S203, determining a second transformation matrix according to the first transformation matrix, the first plane and the original checkerboard image information under the original coordinates, wherein the second transformation matrix is used for representing the corresponding relation between the intersection point information in the second plane and the intersection point information in the original checkerboard image information.
Optionally, in a known world coordinate system, the upper-left corner of the checkerboard is generally taken as the origin of coordinates, so that the real coordinates of the N spatial points (the checkerboard corners) relative to this origin can be obtained. This origin is taken as the original coordinates (0, 0), and the corner coordinates expressed in this frame constitute the original checkerboard image information corresponding to the original coordinates.
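As an illustration, the short sketch below lays out such original coordinates for a board whose square side length has been measured in advance; the grid size and the 1 cm square side are assumptions taken from the example above, not values fixed by the method.

```python
import numpy as np

# Minimal sketch: world coordinates of the N inner corners of the checkerboard,
# with the upper-left corner as the origin (0, 0) and z = 0 on the board plane.
def board_object_points(rows: int, cols: int, square_size: float) -> np.ndarray:
    grid = np.zeros((rows * cols, 3), np.float32)
    grid[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size
    return grid

# Assumed example: 9 x 6 inner corners, 1 cm squares (values are illustrative).
object_points = board_object_points(rows=6, cols=9, square_size=0.01)
```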
Intersections between the black squares and the white squares exist alike in the first plane in which the first checkerboard image information lies, in the second plane in which the second checkerboard image information lies, and in the plane corresponding to the original checkerboard image information.
Because the first transformation matrix represents the mapping relation between the first plane and the second plane, the target intersection position in the second plane corresponding to each intersection of the first checkerboard image information (i.e., the pixel position of the target intersection) can be derived from the first transformation matrix and the intersection positions in the first plane. A position calculation is then carried out using the pixel positions of the target intersections and the pixel positions of the intersections in the original checkerboard image information, so that the real-world pose of the second checkerboard image information relative to the camera is predicted; this relation can be expressed by a rotation and a displacement [R, t], which gives the second transformation matrix.
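A minimal sketch of this step is shown below, using OpenCV as an assumed implementation basis (the patent does not name a library). The first transformation matrix maps the known intersections of the first checkerboard image into the second plane, and the rotation and displacement [R, t] relative to the camera are then recovered from those pixel positions together with the original checkerboard coordinates; the use of solvePnP and the camera intrinsics K and dist are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_board_pose(H_first_to_second, corners_first, object_points, K, dist):
    """Sketch of deriving the second transformation matrix [R, t].

    H_first_to_second: first transformation matrix (3x3 homography).
    corners_first:     intersection pixels in the first checkerboard image, shape (N, 1, 2), float32.
    object_points:     the same N intersections in the original (world) coordinates, shape (N, 3).
    K, dist:           assumed camera intrinsic matrix and distortion coefficients.
    """
    # Map each intersection of the first checkerboard image into the second plane.
    corners_second = cv2.perspectiveTransform(corners_first, H_first_to_second)
    # Recover the pose of the board relative to the camera from the 2D-3D correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_points, corners_second, K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    return R, tvec               # rotation and displacement [R, t]
```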
Step S204, determining the pose prediction result of the target vehicle according to the second transformation matrix.
Optionally, the server may calculate the second transformation matrix at the current moment, and when a plurality of second transformation matrices have been acquired, it may generate continuous pose change information based on them, so that the pose state of the target vehicle can be predicted from the continuous pose change.
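The sketch below shows one plausible way of turning a sequence of such [R, t] results into continuous pose change information; the chaining convention (relative motion expressed in the previous board frame) is an assumption, since the embodiment does not specify how consecutive matrices are combined.

```python
import numpy as np

def relative_motion(R_prev, t_prev, R_curr, t_curr):
    """Motion of the board between two frames, expressed in the previous board frame."""
    R_rel = R_prev.T @ R_curr
    t_rel = R_prev.T @ (t_curr - t_prev)
    return R_rel, t_rel

def pose_changes(poses):
    """poses: list of (R, t) per frame; returns the chained frame-to-frame motions."""
    return [relative_motion(R0, t0, R1, t1)
            for (R0, t0), (R1, t1) in zip(poses[:-1], poses[1:])]
```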
In the embodiment of the application, first checkerboard image information and second checkerboard image information are acquired, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of the target vehicle; a first transformation matrix is determined according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for mapping between a first plane in which the first checkerboard image information is located and a second plane in which the second checkerboard image information is located; a second transformation matrix is determined according to the first transformation matrix, the first plane, and the original checkerboard image information in the original coordinates, wherein the second transformation matrix is used for representing the correspondence between the intersection point information in the second plane and the intersection point information in the original checkerboard image information; and a pose prediction result of the target vehicle is determined according to the second transformation matrix. In the embodiment of the application, a checkerboard is arranged on the top of the body of the target vehicle, key points are extracted from the second checkerboard image acquired at the current moment and from the first checkerboard image, the two are related by a matrix mapping, and the positions of the checkerboard intersection points are finally obtained as the key information for predicting the pose of the target vehicle. The method requires little equipment and provides low-cost, high-efficiency vehicle pose estimation with accurate pose calculation, thereby solving the problems in the related art that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.
As an alternative embodiment, determining the first transformation matrix from the first keypoints in the first checkerboard image information and the second keypoints in the second checkerboard image information comprises:
extracting key points in the first checkerboard image information by using a first target scheme as first key points, and extracting key points in the second checkerboard image information by using the first target scheme as second key points;
performing feature matching on the first key point and the second key point by using a second target scheme, and determining a target key point pair;
and determining a first transformation matrix according to the target key point pair and the third target scheme.
Alternatively, in the embodiment of the present application, the key points in the first checkerboard image information and the key points in the second checkerboard image information may be extracted by using a first target scheme, such as SIFT (Scale-Invariant Feature Transform) algorithm. The key points extracted from the first checkerboard image information can be used as first key points, and the key points extracted from the second checkerboard image information can be used as second key points.
The key points extracted by the SIFT algorithm are usually points where the brightness of the two-dimensional image changes drastically, or points of maximum curvature on image edge curves. Such points retain the important characteristics of the image while greatly reducing the amount of data, so they carry a high information content; this effectively improves the computation speed, facilitates reliable image matching, and makes real-time processing possible.
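A minimal extraction sketch along these lines is shown below, using OpenCV's SIFT implementation as an assumed stand-in for the first target scheme; the image file names are placeholders. Each returned keypoint carries a position (kp.pt) and a 128-dimensional descriptor, which plays the role of the description attribute discussed later.

```python
import cv2

sift = cv2.SIFT_create()

def extract_keypoints(image_path: str):
    """Detect SIFT keypoints and compute their descriptors on a grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

# Placeholder file names for the first and second checkerboard images.
kp1, des1 = extract_keypoints("first_checkerboard.png")
kp2, des2 = extract_keypoints("second_checkerboard.png")
```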
Then, the extracted first key points and second key points are feature-matched using a second target scheme, such as the FLANN (Fast Library for Approximate Nearest Neighbors) algorithm, to obtain target key point pairs, wherein the matching degree between the two key points contained in a target key point pair is larger than a preset matching-degree threshold (for example, 80%).
Finally, based on a third target scheme, such as RANSAC (Random Sample Consensus), and the target key point pairs, a mapping between the planes of the two key points contained in each target key point pair is computed to obtain a mapping matrix, which serves as the first transformation matrix.
In the embodiment of the application, a conversion matrix between two planes is found by processing a series of characteristic points in the first checkerboard image information and the second checkerboard image information, so that a foundation is laid for accurately predicting the vehicle pose subsequently.
As an alternative embodiment, extracting the keypoints in the first checkerboard image information using the first target scheme, as the first keypoints, extracting the keypoints in the second checkerboard image information using the first target scheme, as the second keypoints, includes:
Searching for image positions on all scale spaces in the first checkerboard image information and the second checkerboard image information respectively by using a first target scheme;
and respectively extracting image features on the image positions according to preset conditions to obtain a first key point and a second key point, wherein the first key point comprises position information and description attributes of the first key point, and the second key point comprises position information and description attributes of the second key point.
Alternatively, in extracting the keypoints using the first target scheme, it is first necessary to search for image positions on all scale spaces in the first checkerboard image information and the second checkerboard image information.
Then, locations where the picture pixels differ significantly are selected as the extracted key points according to preset conditions, such as brightness, brightness change, and color change. A key point is a point at a specific (x, y) position on the image.
The description attribute of each key point is then obtained, where the description attribute is a set of attributes used to describe the key point, such as the characteristics of other points near the key point.
In the embodiment of the application, the first target scheme is utilized to automatically extract the respective key points from the first checkerboard image information and the second checkerboard image information, so that the effect of feature extraction automation is achieved.
As an alternative embodiment, performing feature matching on the first keypoint and the second keypoint by using the second target scheme, and determining the target keypoint pair includes:
performing similarity matching on the first key points and the second key points;
selecting a preset number of third key points according to the similarity matching result, wherein the third key points are contained in the second key points;
determining a target key point from a plurality of third key points according to the distance attribute in the description attribute of the first key point, the distance attribute in the description attribute of the third key point and a preset distance ratio threshold;
and combining the target key point and the first key point to generate a target key point pair.
Optionally, in the embodiment of the present application, feature matching between two key points from different planes is achieved by using the second target scheme. It should be noted that, in the embodiment of the present application, the first plane is the plane where the first checkerboard image information is located, and the second plane is the plane where the second checkerboard image information is located.
First, the index to be used is set up, for example, an index built with the KD-Tree algorithm, that is, the randomized KD-Tree method.
The number of search traversals is then set. For example, the search traversal number is set to 50.
After these two steps are completed, similarity matching between the first key points and the second key points can be performed using a KNN-based feature point matching algorithm. A first key point (as the feature point to be matched) is first matched against a plurality of second key points, and a preset number of second key points, namely the top k approximately nearest feature points, are found as third key points, where k is a parameter of the method.
Since each feature point to be matched has more than one candidate match, that is, a first key point corresponds to a plurality of third key points, the matching result is a two-dimensional array, and the matched key points that meet the conditions are stored in this array.
To obtain a more accurate, unique third key point, the distance attribute may be used to screen out the target key point. For example, Lowe's algorithm (the ratio test) is used to further screen the matching points and obtain high-quality matching points (i.e., the target key points).
Specifically, since each key point carries a description attribute, the distance attribute contained in each description attribute is used: among the plurality of third key points in the second plane, the two key points closest to the first key point in the first plane are found, and if the ratio of the closest distance to the next-closest distance is less than a preset distance-ratio threshold, the key point with the closest distance is selected as the target key point and combined with the first key point to generate a target key point pair.
It should be noted that the preset distance-ratio threshold is preferably 0.8, although it can be set flexibly according to the actual scene. The ratio also follows a general principle: it works best between 0.4 and 0.6; below 0.4 very few matching points remain, and above 0.6 a large number of mismatches appear. Accordingly, ratio = 0.4 is used when high matching accuracy is required, ratio = 0.6 when a larger number of matching points is needed, and ratio = 0.5 in the typical case.
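Putting the randomized KD-Tree index, the 50 search traversals, the KNN matching, and the distance-ratio screening together, a sketch of the matching step might look as follows; OpenCV's FLANN matcher is used as an assumed implementation, and des1 and des2 denote the SIFT descriptors of the first and second checkerboard images from the extraction sketch above.

```python
import cv2

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)   # randomized KD-Tree index
search_params = dict(checks=50)                              # number of search traversals
matcher = cv2.FlannBasedMatcher(index_params, search_params)

def ratio_test_matches(des1, des2, ratio=0.8):
    """Keep a first keypoint's nearest candidate only if it passes the distance-ratio test."""
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):   # k nearest candidates per first keypoint
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:          # closest / next-closest below the threshold
            good.append(m)                           # m pairs a first keypoint with its target keypoint
    return good
```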
In the embodiment of the application, the target key points are matched and selected based on the first key points to generate target key point pairs, so that the mapping between two planes is completed through the target key points better.
As an alternative embodiment, in a case where the number of target keypoint pairs is a plurality, determining the first transformation matrix according to the target keypoint pairs and the third target scheme includes:
calculating an optimal single-mapping (homography) transformation matrix over the plurality of target key point pairs by using the third target scheme, so as to obtain a transformation matrix between the two planes in which the target key point pairs are located, and taking this transformation matrix as the first transformation matrix.
Optionally, a third target scheme such as the RANSAC method is used to compute an optimal single-mapping (homography) transformation matrix between the multiple two-dimensional point pairs (i.e., the target key point pairs); it finds the transformation matrix between the two planes that minimizes the back-projection error. For example, when there are a plurality of target key point pairs, a homography can be estimated from the planes in which the two key points of each target key point pair are located, thereby obtaining the first transformation matrix.
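A sketch of this RANSAC step with OpenCV is given below as an assumed implementation; good, kp1, and kp2 come from the matching and extraction sketches above, and the 5-pixel reprojection threshold is illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def estimate_first_transformation(good, kp1, kp2):
    """Estimate the homography that maps the first plane onto the second plane."""
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask   # H is the first transformation matrix; mask flags RANSAC inliers
```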
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
According to another aspect of the embodiment of the application, there is also provided a vehicle pose prediction device for implementing the vehicle pose prediction method. Fig. 4 is a block diagram of an alternative apparatus for predicting a vehicle pose according to an embodiment of the present application, as shown in fig. 4, the apparatus may include:
a first obtaining module 401, configured to obtain first checkerboard image information and second checkerboard image information, where the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, obtained at the current moment, of the checkerboard located on the top of the body of a target vehicle;
the first determining module 402 is connected to the first obtaining module 401, and is configured to determine a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, where the first transformation matrix is used to implement mapping between a first plane where the first checkerboard image information is located and a second plane where the second checkerboard image information is located;
a second determining module 403, connected to the first determining module 402, configured to determine a second transformation matrix according to the first transformation matrix, the first plane, and the original checkerboard image information under the original coordinates, where the second transformation matrix is used to characterize a correspondence between intersection point information in the second plane and intersection point information in the original checkerboard image information;
The third determining module 404 is connected to the second determining module 403, and is configured to determine a pose prediction result of the target vehicle according to the second transformation matrix.
It should be noted that, the first obtaining module 401 in this embodiment may be used to perform the above-mentioned step S201, the first determining module 402 in this embodiment may be used to perform the above-mentioned step S202, the second determining module 403 in this embodiment may be used to perform the above-mentioned step S203, and the third determining module 404 in this embodiment may be used to perform the above-mentioned step S204.
Through the above modules, a checkerboard is arranged on the top of the body of the target vehicle, key points are extracted from the second checkerboard image acquired at the current moment and from the first (original) checkerboard image, the two are related by a matrix mapping, and the positions of the checkerboard intersection points are finally obtained as the key information for predicting the pose of the target vehicle. The approach requires little equipment and provides low-cost, high-efficiency vehicle pose estimation with accurate pose calculation, thereby solving the problems in the related art that sensor calibration is time-consuming and labor-intensive and that vehicle pose prediction is inaccurate.
As an alternative embodiment, the first determining module includes:
The extraction unit is used for extracting key points in the first checkerboard image information by using the first target scheme, and extracting key points in the second checkerboard image information by using the first target scheme, wherein the key points are used as first key points, and the key points are used as second key points;
the matching unit is used for carrying out feature matching on the first key point and the second key point by utilizing a second target scheme, and determining a target key point pair;
and the first determining unit is used for determining a first transformation matrix according to the target key point pair and the third target scheme.
As an alternative embodiment, the extraction unit comprises:
the searching sub-module is used for searching the image positions on all scale spaces in the first checkerboard image information and the second checkerboard image information respectively by utilizing the first target scheme;
the extraction sub-module is used for respectively extracting image features on the image positions according to preset conditions to obtain a first key point and a second key point, wherein the first key point comprises position information and description attributes of the first key point, and the second key point comprises position information and description attributes of the second key point.
As an alternative embodiment, the matching unit comprises:
the matching sub-module is used for performing similarity matching on the first key points and the second key points;
The selecting sub-module is used for selecting a preset number of third key points according to the similarity matching result, wherein the third key points are contained in the second key points;
the determining submodule is used for determining a target key point from a plurality of third key points according to the distance attribute in the description attribute of the first key point, the distance attribute in the description attribute of the third key point and a preset distance ratio threshold;
and the generation sub-module is used for combining the target key point and the first key point to generate a target key point pair.
As an alternative embodiment, in the case where the number of target key point pairs is plural, the first determining unit includes:
and the obtaining submodule is used for calculating an optimal single mapping transformation matrix between a plurality of target key point pairs by utilizing a third target scheme so as to obtain a transformation matrix between two planes where the target key point pairs are positioned, and the transformation matrix is used as the first transformation matrix.
As an alternative embodiment, the second determining module includes:
the second determining unit is used for determining target intersection points belonging to the first checkerboard image information in the second plane according to the first transformation matrix and the first plane;
an acquisition unit for acquiring a pixel position of a target intersection;
The first obtaining unit is used for obtaining a second transformation matrix according to the pixel positions of the target intersection points and the pixel positions of all intersection points in the original checkerboard image information.
As an alternative embodiment, the third determining module includes:
the generating unit is used for acquiring a plurality of second transformation matrices and generating continuous pose change information;
and the second obtaining unit is used for obtaining the pose prediction result of the target vehicle according to the continuous pose change information.
It should be noted that the above modules are the same as the corresponding steps in the examples and application scenarios they implement, but they are not limited to what is disclosed in the above embodiments. It should also be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to still another aspect of the embodiment of the present application, there is also provided an electronic device for implementing the above-mentioned vehicle pose prediction method, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 5 is a block diagram of an alternative electronic device, according to an embodiment of the application, as shown in fig. 5, comprising a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with each other via the communication bus 504, wherein,
A memory 503 for storing a computer program;
the processor 501, when executing the computer program stored on the memory 503, performs the following steps:
acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of a target vehicle;
determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing mapping between a first plane in which the first checkerboard image information is positioned and a second plane in which the second checkerboard image information is positioned;
determining a second transformation matrix according to the first transformation matrix, the first plane and the original checkerboard image information under the original coordinates, wherein the second transformation matrix is used for representing the corresponding relation between the intersection point information in the second plane and the intersection point information in the original checkerboard image information;
and determining a pose prediction result of the target vehicle according to the second transformation matrix.
Optionally, in the present embodiment, the above-described communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM, or may include non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
As an example, as shown in fig. 5, the memory 503 may include, but is not limited to, a first obtaining module 401, a first determining module 402, a second determining module 403, and a third determining module 404 in the prediction apparatus of the vehicle pose. In addition, other module units in the vehicle pose prediction apparatus may be included, but are not limited to, and are not described in detail in this example.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In addition, the electronic device further includes a display for displaying the prediction result of the vehicle pose.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be understood by those skilled in the art that the structure shown in fig. 5 is only schematic, and the device implementing the vehicle pose prediction method may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Device, MID), a PAD, and the like. Fig. 5 does not limit the structure of the electronic device. For example, the terminal device may include more or fewer components (e.g., a network interface, a display device, etc.) than shown in fig. 5, or have a configuration different from that shown in fig. 5.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
According to yet another aspect of the embodiments of the present application, there is also provided a storage medium. Optionally, in the present embodiment, the above-described storage medium may be used to store program code for executing the vehicle pose prediction method.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information of a checkerboard formed by a plurality of black squares and white squares arranged alternately, and the second checkerboard image information is image information, acquired at the current moment, of the checkerboard located on the top of the body of a target vehicle;
Determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing mapping between a first plane in which the first checkerboard image information is positioned and a second plane in which the second checkerboard image information is positioned;
determining a second transformation matrix according to the first transformation matrix, the first plane and the original checkerboard image information under the original coordinates, wherein the second transformation matrix is used for representing the corresponding relation between the intersection point information in the second plane and the intersection point information in the original checkerboard image information;
and determining a pose prediction result of the target vehicle according to the second transformation matrix.
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to yet another aspect of embodiments of the present application, there is also provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium; the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the steps of the method for predicting the vehicle pose in any of the embodiments described above.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in part, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the vehicle pose prediction method according to the embodiments of the present application.
In the above embodiments of the present application, each embodiment is described with its own emphasis; for portions not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The above is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A method for predicting vehicle pose, the method comprising:
acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information formed by a plurality of alternately arranged black and white squares, and the second checkerboard image information is image information, acquired at the current moment, at the top of the body of a target vehicle;
determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing the mapping between a first plane where the first checkerboard image information is located and a second plane where the second checkerboard image information is located;
determining a second transformation matrix according to the first transformation matrix, the first plane and original checkerboard image information in original coordinates, wherein the second transformation matrix is used for representing the correspondence between intersection point information in the second plane and intersection point information in the original checkerboard image information;
and determining a pose prediction result of the target vehicle according to the second transformation matrix.
2. The method of claim 1, wherein the determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information comprises:
extracting key points in the first checkerboard image information by using a first target scheme as the first key points, and extracting key points in the second checkerboard image information by using the first target scheme as the second key points;
performing feature matching on the first key point and the second key point by using a second target scheme, and determining a target key point pair;
and determining the first transformation matrix according to the target key point pair and a third target scheme.
3. The method according to claim 2, wherein the extracting key points in the first checkerboard image information using the first target scheme as the first key points and extracting key points in the second checkerboard image information using the first target scheme as the second key points comprises:
searching for image positions on all scale spaces in the first checkerboard image information and the second checkerboard image information respectively by using the first target scheme;
and respectively extracting image features at the image positions according to preset conditions to obtain the first key points and the second key points, wherein the first key points comprise position information and description attributes of the first key points, and the second key points comprise position information and description attributes of the second key points.
4. The method of claim 3, wherein the performing feature matching on the first key point and the second key point using the second target scheme and determining the target key point pair comprises:
performing similarity matching on the first key point and a plurality of second key points;
selecting a preset number of third key points according to the similarity matching result, wherein the third key points are contained in the second key points;
determining a target key point from a plurality of third key points according to the distance attribute in the description attribute of the first key point, the distance attribute in the description attribute of the third key point and a preset distance ratio threshold;
and combining the target key point and the first key point to generate the target key point pair.
5. The method of claim 4, wherein, in a case where there are a plurality of target key point pairs, the determining the first transformation matrix according to the target key point pairs and the third target scheme comprises:
calculating an optimal single-mapping transformation matrix among the plurality of target key point pairs by using the third target scheme, so as to obtain a transformation matrix between the two planes where the target key point pairs are located, wherein the transformation matrix is used as the first transformation matrix.
6. The method of claim 1, wherein determining a second transformation matrix from the first transformation matrix, the first plane, and original checkerboard image information in original coordinates comprises:
determining a target intersection point belonging to the first checkerboard image information in the second plane according to the first transformation matrix and the first plane;
acquiring the pixel position of the target intersection point;
and obtaining the second transformation matrix according to the pixel positions of the target intersection points and the pixel positions of all intersection points in the original checkerboard image information.
7. The method according to any one of claims 1 to 6, wherein the determining the pose prediction result of the target vehicle according to the second transformation matrix includes:
acquiring a plurality of second transformation matrices and generating continuous pose change information;
and obtaining the pose prediction result of the target vehicle according to the continuous pose change information.
8. A prediction apparatus for vehicle pose, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring first checkerboard image information and second checkerboard image information, wherein the first checkerboard image information is image information formed by a plurality of alternately arranged black and white squares, and the second checkerboard image information is image information, acquired at the current moment, at the top of the body of a target vehicle;
the first determining module is used for determining a first transformation matrix according to a first key point in the first checkerboard image information and a second key point in the second checkerboard image information, wherein the first transformation matrix is used for realizing the mapping between a first plane where the first checkerboard image information is located and a second plane where the second checkerboard image information is located;
the second determining module is used for determining a second transformation matrix according to the first transformation matrix, the first plane and the original checkerboard image information in the original coordinates, wherein the second transformation matrix is used for representing the correspondence between the intersection point information in the second plane and the intersection point information in the original checkerboard image information;
and the third determining module is used for determining the pose prediction result of the target vehicle according to the second transformation matrix.
9. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus, characterized in that,
the memory is used for storing a computer program;
the processor is configured to perform the method steps of any of claims 1 to 7 by running the computer program stored on the memory.
10. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program, wherein the computer program, when executed by a processor, implements the method steps of any of claims 1 to 7.
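Again by way of illustration only, the sketch below (same Python/OpenCV assumptions and conventions as the earlier example) shows one reading of claims 6 and 7 above: the intersections of the first checkerboard image are projected into the second plane through the first transformation matrix, the second transformation matrix is fitted against the intersections of the original checkerboard image in original coordinates, and consecutive second transformation matrices are chained into continuous pose-change information. The helper names and the choice of a homography fit for the second transformation matrix are assumptions, not details fixed by the claims.

    import cv2
    import numpy as np

    def estimate_second_transform(first_transform, first_plane_intersections, original_intersections):
        # Project the checkerboard intersections of the first plane into the
        # second plane through the first transformation matrix, giving the
        # pixel positions of the target intersection points (claim 6).
        pts = np.float32(first_plane_intersections).reshape(-1, 1, 2)
        target_pixels = cv2.perspectiveTransform(pts, first_transform)

        # Fit the second transformation matrix between the intersections of the
        # original checkerboard image (in original coordinates) and the target
        # intersection pixels; a RANSAC homography fit is assumed here.
        orig = np.float32(original_intersections).reshape(-1, 1, 2)
        second_transform, _ = cv2.findHomography(orig, target_pixels, cv2.RANSAC, 3.0)
        return second_transform

    def continuous_pose_changes(second_transforms):
        # Chain consecutive second transformation matrices into frame-to-frame
        # pose changes (claim 7); the resulting sequence is the continuous
        # pose-change information from which the prediction result is derived.
        return [curr @ np.linalg.inv(prev)
                for prev, curr in zip(second_transforms, second_transforms[1:])]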
CN202210657149.1A 2022-06-10 2022-06-10 Vehicle pose prediction method and device, electronic equipment and storage medium Active CN114882115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210657149.1A CN114882115B (en) 2022-06-10 2022-06-10 Vehicle pose prediction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210657149.1A CN114882115B (en) 2022-06-10 2022-06-10 Vehicle pose prediction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114882115A CN114882115A (en) 2022-08-09
CN114882115B true CN114882115B (en) 2023-08-25

Family

ID=82680992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210657149.1A Active CN114882115B (en) 2022-06-10 2022-06-10 Vehicle pose prediction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114882115B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758498B (en) * 2023-05-08 2024-02-23 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816732A (en) * 2018-12-29 2019-05-28 百度在线网络技术(北京)有限公司 Scaling method, calibration system, antidote, correction system and vehicle
CN110264520A (en) * 2019-06-14 2019-09-20 北京百度网讯科技有限公司 Onboard sensor and vehicle position orientation relation scaling method, device, equipment and medium
CN110264395A (en) * 2019-05-20 2019-09-20 深圳市森国科科技股份有限公司 A kind of the camera lens scaling method and relevant apparatus of vehicle-mounted monocular panorama system
CN110675455A (en) * 2019-08-30 2020-01-10 的卢技术有限公司 Self-calibration method and system for car body all-around camera based on natural scene
CN111259971A (en) * 2020-01-20 2020-06-09 上海眼控科技股份有限公司 Vehicle information detection method and device, computer equipment and readable storage medium
CN111400423A (en) * 2020-03-16 2020-07-10 郑州航空工业管理学院 Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry
CN111435540A (en) * 2019-01-15 2020-07-21 苏州沃迈智能科技有限公司 Annular view splicing method of vehicle-mounted annular view system
CN111735479A (en) * 2020-08-28 2020-10-02 中国计量大学 Multi-sensor combined calibration device and method
CN112132906A (en) * 2020-09-22 2020-12-25 西安电子科技大学 External reference calibration method and system between depth camera and visible light camera
CN112419385A (en) * 2021-01-25 2021-02-26 国汽智控(北京)科技有限公司 3D depth information estimation method and device and computer equipment
CN112669354A (en) * 2020-12-08 2021-04-16 重庆邮电大学 Multi-camera motion state estimation method based on vehicle incomplete constraint
CN112927301A (en) * 2021-02-04 2021-06-08 深圳市杉川机器人有限公司 Camera calibration method and device, computing equipment and readable storage medium
WO2021184218A1 (en) * 2020-03-17 2021-09-23 华为技术有限公司 Relative pose calibration method and related apparatus
CN113869407A (en) * 2021-09-27 2021-12-31 中科视语(北京)科技有限公司 Monocular vision-based vehicle length measuring method and device
CN114119758A (en) * 2022-01-27 2022-03-01 荣耀终端有限公司 Method for acquiring vehicle pose, electronic device and computer-readable storage medium
CN114549666A (en) * 2022-04-26 2022-05-27 杭州蓝芯科技有限公司 AGV-based panoramic image splicing calibration method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570475A (en) * 2018-06-05 2019-12-13 上海商汤智能科技有限公司 vehicle-mounted camera self-calibration method and device and vehicle driving method and device
US11557061B2 (en) * 2019-06-28 2023-01-17 GM Cruise Holdings LLC. Extrinsic calibration of multiple vehicle sensors using combined target detectable by multiple vehicle sensors
US11880997B2 (en) * 2020-08-28 2024-01-23 Samsung Electronics Co., Ltd. Method and apparatus with pose estimation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An adaptive joint calibration algorithm for camera and LiDAR; Yao Wentao et al.; Control Engineering of China (Issue S1); 75-79 *

Also Published As

Publication number Publication date
CN114882115A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN109241846B (en) Method and device for estimating space-time change of remote sensing image and storage medium
CN114063098A (en) Multi-target tracking method, device, computer equipment and storage medium
CN111080682A (en) Point cloud data registration method and device
CN111856499B (en) Map construction method and device based on laser radar
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN114882115B (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
CN114663598A (en) Three-dimensional modeling method, device and storage medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN113012215A (en) Method, system and equipment for space positioning
JP2023503750A (en) ROBOT POSITIONING METHOD AND DEVICE, DEVICE, STORAGE MEDIUM
CN116740668B (en) Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium
CN110880003B (en) Image matching method and device, storage medium and automobile
CN112689234A (en) Indoor vehicle positioning method and device, computer equipment and storage medium
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
CN114088103B (en) Method and device for determining vehicle positioning information
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN111582013A (en) Ship retrieval method and device based on gray level co-occurrence matrix characteristics
CN115830073A (en) Map element reconstruction method, map element reconstruction device, computer equipment and storage medium
CN115240168A (en) Perception result obtaining method and device, computer equipment and storage medium
CN114913246A (en) Camera calibration method and device, electronic equipment and storage medium
CN114661028A (en) Intelligent driving controller test method and device, computer equipment and storage medium
CN114814875A (en) Robot positioning and image building method and device, readable storage medium and robot
CN113065521A (en) Object recognition method, device, apparatus, and medium
CN113470067A (en) Data processing method, device, storage medium and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant