CN115620264A - Vehicle positioning method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN115620264A
CN115620264A
Authority
CN
China
Prior art keywords: dimensional, point, lamp post, dimensional point, post projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211534182.1A
Other languages
Chinese (zh)
Other versions
CN115620264B (en)
Inventor
张�雄
李敏
黄家琪
廖明鉴
洪炽杰
齐新迎
罗鸿
蔡仲辉
申苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd filed Critical GAC Aion New Energy Automobile Co Ltd
Priority to CN202211534182.1A
Publication of CN115620264A
Application granted
Publication of CN115620264B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the disclosure discloses a vehicle positioning method, a vehicle positioning device, electronic equipment and a computer readable medium. One embodiment of the method comprises: acquiring road images around a target vehicle by utilizing each vehicle-mounted camera mounted on the target vehicle to obtain a road image set; carrying out data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set; performing image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain associated lamp post projection point group sets; carrying out spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set; performing point cloud association processing on the target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence; and determining the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence. This embodiment may enable accurate positioning of the target vehicle.

Description

Vehicle positioning method and device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a vehicle positioning method and apparatus, an electronic device, and a computer-readable medium.
Background
Vehicle positioning is a technique for accurately determining the position of a vehicle in the field of automatic driving. At present, high-precision vehicle positioning is generally performed as follows: feature point matching is carried out on multiple frames of collected road images to reconstruct the vehicle motion information and thereby determine the positioning information of the vehicle, or the vehicle is positioned by means of a vision camera.
However, when the positioning information of the vehicle is determined in the above manner, the following technical problems often arise:
Firstly, the feature point computation in the images is complex and the amount of calculation is large, occupying excessive computing resources.
Secondly, the output of the vision camera is strongly affected by illumination, making high-precision positioning difficult to achieve in poorly lit environments.
Thirdly, the method relies excessively on the feature point matching results between the multiple frames of road images, so that an erroneous matching result causes a large deviation in the positioning result and affects the automatic driving safety of the vehicle.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a vehicle positioning method, apparatus, electronic device and computer readable medium to solve one or more of the technical problems set forth in the background section above.
In a first aspect, some embodiments of the present disclosure provide a vehicle localization method, the method comprising: acquiring road images around a target vehicle by utilizing each vehicle-mounted camera mounted on the target vehicle to obtain a road image set; performing data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set, wherein the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set represent end points of a lamp post; performing image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain associated lamp post projection point group sets; carrying out spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set; performing point cloud association processing on a target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, wherein a first three-dimensional point in the first three-dimensional point sequence corresponds to a second three-dimensional point in the second three-dimensional point sequence one to one; and determining the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
In a second aspect, some embodiments of the present disclosure provide a vehicle locating device, the device comprising: an acquisition unit configured to acquire a road image around a target vehicle by using each of onboard cameras mounted on the target vehicle, resulting in a road image set; the preprocessing unit is configured to perform data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set, wherein the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set represent end points of a lamp post; the image point association unit is configured to perform image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain an associated lamp post projection point group set; the reconstruction unit is configured to perform spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set; the point cloud association unit is configured to perform point cloud association processing on a target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, wherein a first three-dimensional point in the first three-dimensional point sequence corresponds to a second three-dimensional point in the second three-dimensional point sequence one by one; a determination unit configured to determine the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device, on which one or more programs are stored, which when executed by one or more processors cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium on which a computer program is stored, wherein the program when executed by a processor implements the method described in any implementation of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the vehicle positioning method of some embodiments of the disclosure can provide an efficient, robust and high-precision real-time positioning result for an autonomous vehicle, ensuring the safety of automatic driving. Specifically, the reasons that related vehicle positioning results are inaccurate are: the existing methods have high computational complexity, depend strongly on illumination conditions, and have insufficient feature association accuracy. Based on this, the vehicle positioning method according to some embodiments of the present disclosure first acquires road images around a target vehicle with each vehicle-mounted camera mounted on the target vehicle, obtaining a road image set. Data preprocessing is then performed on the road images in the road image set to obtain each two-dimensional lamp post projection point set, where the two-dimensional lamp post projection points in each set represent end points of a lamp post. In this way, the two-dimensional lamp post projection points in the road images captured by the vehicle-mounted cameras at the same time are extracted. Because the number of lamp posts captured in a road image is limited, the number of extracted two-dimensional lamp post projection points is also limited, which reduces the workload of subsequent matching and calculation. Next, image point association processing is performed on the two-dimensional lamp post projection points across the two-dimensional lamp post projection point sets to obtain an associated lamp post projection point group set. This determines the correspondence of the two-dimensional lamp post projection points across the road images.
Then, spatial point cloud reconstruction is performed on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set. Through this reconstruction, the relative relationship of the two-dimensional lamp post projection points in three-dimensional space can be recovered. Next, point cloud association processing is performed on the target three-dimensional point set and the spatial three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, where the first three-dimensional points in the first sequence correspond one to one to the second three-dimensional points in the second sequence. This matches the spatial three-dimensional point set against the actual, accurate target three-dimensional point set, so the positioning information of the target vehicle can be determined from the relative positions of the points in the two sets. Finally, the positioning information of the target vehicle is determined based on the first three-dimensional point sequence and the second three-dimensional point sequence. Accurate positioning of the target vehicle can thereby be achieved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a vehicle localization method according to the present disclosure;
FIG. 2 is a schematic structural view of some embodiments of a vehicle locating device of the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a vehicle localization method according to the present disclosure. The vehicle positioning method comprises the following steps:
step 101, acquiring road images around a target vehicle by using each vehicle-mounted camera mounted on the target vehicle to obtain a road image set.
In some embodiments, the executing body of the vehicle positioning method may acquire road images around the target vehicle by using each vehicle-mounted camera mounted on the target vehicle, so as to obtain the road image set.
The target vehicle may be an autonomous vehicle that requires high-precision positioning. High-precision positioning may be positioning with an error between 0.1 meter and 0.3 meter. At least three vehicle-mounted cameras may be mounted on the target vehicle. The intrinsic and extrinsic parameters and the distortion coefficients of the vehicle-mounted cameras mounted on the target vehicle are obtained in advance through camera calibration. The vehicle-mounted cameras mounted on the target vehicle can synchronously acquire road images around the target vehicle at the same frequency, so all road images in the obtained road image set are captured at the same time, and the road images in the road image set correspond one to one to the vehicle-mounted cameras mounted on the target vehicle.
And 102, performing data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set.
In some embodiments, the two-dimensional light pole projection points of each of the sets of two-dimensional light pole projection points described above characterize the end points of a light pole. The end points of the lamp post may include a top point and a bottom point of the lamp post.
The executing body may perform data preprocessing on the road image in the road image set to obtain each two-dimensional light pole projection point set, and may include the following steps:
firstly, each road image in the road image set is subjected to distortion removal processing to generate a corrected road image, and a corrected road image set is obtained.
In practice, the road image may first be converted from the image pixel coordinate system to the camera coordinate system. Then, the distortion coefficients are used to remove the distortion of the road image in the camera coordinate system. Next, the undistorted road image in the camera coordinate system is converted back to the image pixel coordinate system to obtain an image to be interpolated. Finally, pixel values are interpolated from the original road image to obtain the corrected road image.
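The coordinate pipeline of this distortion-removal step can be sketched at the level of a single pixel. The following is a minimal numpy sketch, not the patent's implementation: the function names, the two-coefficient radial distortion model, the fixed-point inversion, and the camera parameters used below are all illustrative assumptions.

```python
import numpy as np

def distort(xn, yn, k1, k2):
    """Apply a two-coefficient radial distortion model to normalized camera coordinates."""
    r2 = xn ** 2 + yn ** 2
    f = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xn * f, yn * f

def undistort_point(u, v, K, k1, k2, iters=20):
    """Map a distorted pixel (u, v) to its undistorted pixel position.

    Pixel -> normalized camera coordinates, fixed-point iteration to invert
    the distortion model, then projection back to pixels via the intrinsic
    matrix K, mirroring the pixel/camera coordinate conversions above.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd, yd = (u - cx) / fx, (v - cy) / fy   # distorted normalized coords
    xn, yn = xd, yd                          # initial guess
    for _ in range(iters):                   # fixed-point iteration
        xdi, ydi = distort(xn, yn, k1, k2)
        xn, yn = xn + (xd - xdi), yn + (yd - ydi)
    return cx + fx * xn, cy + fy * yn
```

With zero distortion coefficients the mapping is the identity; with small radial distortion the iteration recovers the undistorted pixel to sub-millipixel accuracy.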
And secondly, carrying out lamp post identification processing on each corrected road image in the corrected road image set by using a neural network model to obtain each lamp post identification result set. The neural network model may be a convolutional neural network (CNN). A lamp post identification result in a lamp post identification result set may be a marking frame used for marking a lamp post in the corrected road image. Each lamp post identification result set corresponds to one corrected road image.
And thirdly, determining the end points of the lamp posts identified by the lamp post identification results in each lamp post identification result set as two-dimensional lamp post projection points, to obtain each two-dimensional lamp post projection point set. In practice, the upper vertex and the lower vertex of the marking frame may be determined as the two-dimensional lamp post projection points.
And 103, performing image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain associated lamp post projection point group sets.
In some embodiments, the executing body performs image point association processing on the two-dimensional lamp post projection points between the two-dimensional lamp post projection point sets to obtain an associated lamp post projection point group set, and may include the following steps:
the method comprises the following steps of firstly, converting two-dimensional lamp post projection points in each two-dimensional lamp post projection point set into a camera coordinate system of a corresponding vehicle-mounted camera to generate a camera coordinate point group, and obtaining a camera coordinate point group set. The two-dimensional lamp post projection points are collected by the two-dimensional lamp post projection points, and the two-dimensional lamp post projection points are converted to the corresponding camera coordinate system of the vehicle-mounted camera by the aid of the camera internal parameter matrix of the vehicle-mounted camera corresponding to the corrected road image where the two-dimensional lamp post projection points are located, so that a camera coordinate point group is obtained.
And secondly, generating an epipolar constraint equation set by using the fundamental matrix between the vehicle-mounted cameras corresponding to every two camera coordinate point groups in the camera coordinate point group set.
And thirdly, determining the association relationship of the camera coordinate points in each camera coordinate point group in the camera coordinate point group set based on the epipolar constraint equation set, so as to generate an associated camera coordinate point group set. In practice, the camera coordinate points in the camera coordinate point groups that make the epipolar constraint equations hold may be determined as camera coordinate points having an association relationship, so as to obtain an associated camera coordinate point group. The association relationship may be the correspondence of the same point in real space across different images.
And fourthly, grouping the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set according to the associated camera coordinate points in the associated camera coordinate point group set to obtain the associated lamp post projection point group set.
In practice, the two-dimensional lamp post projection points in the two-dimensional lamp post projection point sets corresponding to the same group of associated camera coordinate points may be taken as one group. A set of associated lamp post projection point groups is thereby obtained.
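The association test in the steps above can be sketched as follows. This is an illustrative numpy sketch under stated assumptions: the fundamental matrix F is assumed already known from calibration, and the greedy nearest-residual pairing is a stand-in for the patent's equation-set solution, not its exact procedure.

```python
import numpy as np

def epipolar_residual(p1, p2, F):
    """Absolute epipolar constraint residual |x2^T F x1| for two image points."""
    x1 = np.array([p1[0], p1[1], 1.0])  # homogeneous coordinates
    x2 = np.array([p2[0], p2[1], 1.0])
    return abs(x2 @ F @ x1)

def associate_points(pts_a, pts_b, F, tol=1e-3):
    """Pair projection points across two cameras whose epipolar residual is below tol.

    Greedy one-to-one matching: each point in camera A claims the unused
    point in camera B with the smallest residual, if any falls under tol.
    """
    pairs, used = [], set()
    for i, pa in enumerate(pts_a):
        best, best_r = None, tol
        for j, pb in enumerate(pts_b):
            if j in used:
                continue
            r = epipolar_residual(pa, pb, F)
            if r < best_r:
                best, best_r = j, r
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

For a camera pair related by a pure translation along the x-axis (with identity intrinsics), F reduces to the essential matrix of that motion, and points on the same epipolar line (same y) associate with zero residual.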
And 104, performing spatial point cloud reconstruction processing on the associated light pole projection points in the associated light pole projection point group set to obtain a spatial three-dimensional point set.
In some embodiments, the executing body may perform spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set. In practice, the associated lamp post projection points in the associated lamp post projection point group set may be triangulated to obtain the spatial three-dimensional point set.
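The triangulation step can be sketched with the standard linear (DLT) method: each pixel observation contributes two rows to a homogeneous system A X = 0, solved by SVD. This is a generic two-view sketch, assuming the 3x4 projection matrices of the two vehicle-mounted cameras are known from calibration; it is not the patent's specific implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two image observations.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points of the
    same associated lamp post projection point. Solves A X = 0 by SVD and
    dehomogenizes the null-space vector.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * p3^T - p1^T
        x1[1] * P1[2] - P1[1],   # v1 * p3^T - p2^T
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null-space vector (homogeneous 3D point)
    return X[:3] / X[3]
```

Applying this to every associated lamp post projection point group yields the spatial three-dimensional point set.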
And 105, performing point cloud association processing on the target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence.
In some embodiments, the first three-dimensional point in the first three-dimensional point sequence and the second three-dimensional point in the second three-dimensional point sequence correspond to each other one to one.
The executing body may perform point cloud association processing on the target three-dimensional point set and the spatial three-dimensional point set to obtain the first three-dimensional point sequence and the second three-dimensional point sequence, which may include the following steps:
and step one, randomly sequencing the target three-dimensional point set and the space three-dimensional point set to obtain a target three-dimensional point sequence and a space three-dimensional point sequence.
And secondly, generating a correlation matrix by using the target three-dimensional point sequence and the space three-dimensional point sequence. The correlation matrix may be composed of correlation values, and the initial value of each correlation value in the correlation matrix may be zero. The correlation value at row i and column k of the correlation matrix may be the correlation value between the i-th spatial three-dimensional point in the spatial three-dimensional point sequence and the k-th target three-dimensional point in the target three-dimensional point sequence.
And thirdly, combining any two spatial three-dimensional points in the spatial three-dimensional point sequence to obtain a spatial three-dimensional point group set.
And fourthly, determining a distance value between two space three-dimensional points in each space three-dimensional point group in the space three-dimensional point group set to obtain a first distance value set. Wherein, the first distance value in the first distance value set may be a euclidean distance.
And fifthly, combining any two of the target three-dimensional points in the target three-dimensional point sequence to obtain a target three-dimensional point group set.
And sixthly, determining a distance value between two target three-dimensional points in each target three-dimensional point group in the target three-dimensional point group set to obtain a second distance value set. Wherein the second distance value in the second distance value set may be a euclidean distance.
And seventhly, updating the correlation matrix by using the first distance value set and the second distance value set.
Optionally, the updating, by the executing body, of the correlation matrix by using the first distance value set and the second distance value set may include:
first, a first distance value is selected from the first distance value set to serve as a first reference distance value.
A second step of performing the following updating step on the correlation matrix by using the first reference distance value and the second distance value set:
A first updating step of selecting a second distance value from the second distance value set as a second reference distance value, and performing the following sub-steps of updating the correlation matrix by using the first reference distance value and the second reference distance value:
a first updating sub-step of determining an absolute value of a difference between the first reference distance value and the second reference distance value, resulting in an absolute difference value.
And a second updating sub-step, in response to determining that the absolute difference is smaller than the preset threshold, of incrementally updating the correlation values in the correlation matrix at positions corresponding to the first reference distance value and the second reference distance value. In practice, the preset threshold may be set according to actual application requirements, and is not limited herein. The incremental update may be to increment the associated value by a preset value.
If the first reference distance value is the distance between the i-th and the j-th spatial three-dimensional points in the spatial three-dimensional point sequence, and the second reference distance value is the distance between the k-th and the l-th target three-dimensional points in the target three-dimensional point sequence, then the correlation values at the positions corresponding to the first reference distance value and the second reference distance value are those at row i column k, row i column l, row j column k, and row j column l of the correlation matrix.
A third updating sub-step, in response to determining that there is an unselected second distance value in the second set of distance values, of re-selecting the unselected second distance value from the second set of distance values as a second reference distance value, and continuing the updating sub-step using the re-selected second reference distance value.
A second updating step of, in response to determining that there is an unselected first distance value in the first distance value set, reselecting the unselected first distance value from the first distance value set as a first reference distance value, and continuing to perform the updating step using the reselected first reference distance value.
And a third updating step, in which the correlation matrix is determined as the updated correlation matrix in response to determining that no unselected first distance value exists in the first distance value set.
And thirdly, the spatial three-dimensional point and the target three-dimensional point corresponding to the maximum correlation value in each row of the updated correlation matrix are respectively taken as a first three-dimensional point and a second three-dimensional point, and added to the first three-dimensional point sequence and the second three-dimensional point sequence.
If the correlation value at row i and column k of the correlation matrix is the largest correlation value in row i, the i-th spatial three-dimensional point in the spatial three-dimensional point sequence may be added to the first three-dimensional point sequence as a first three-dimensional point, and the k-th target three-dimensional point in the target three-dimensional point sequence may be added to the second three-dimensional point sequence as a second three-dimensional point.
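The correlation-matrix update and row-maximum readout described above can be sketched as a distance-consistency voting scheme over pairwise Euclidean distances. This is a hedged reconstruction: the unit vote increment, the threshold value, and the plain row-wise argmax readout are illustrative choices, not the patent's exact procedure.

```python
import numpy as np
from itertools import combinations

def associate_point_clouds(space_pts, target_pts, threshold=0.1):
    """Vote-based association of two sparse 3D point sets via pairwise distances.

    Rows of the vote (correlation) matrix index spatial points, columns index
    target points. For every spatial pair (i, j) and target pair (k, l) whose
    inter-point distances nearly agree, the four touched entries (i,k), (i,l),
    (j,k), (j,l) are incremented; each row's maximum then gives the association.
    """
    S, T = np.asarray(space_pts, float), np.asarray(target_pts, float)
    votes = np.zeros((len(S), len(T)), dtype=int)
    s_pairs = [(i, j, np.linalg.norm(S[i] - S[j]))
               for i, j in combinations(range(len(S)), 2)]
    t_pairs = [(k, l, np.linalg.norm(T[k] - T[l]))
               for k, l in combinations(range(len(T)), 2)]
    for i, j, ds in s_pairs:
        for k, l, dt in t_pairs:
            if abs(ds - dt) < threshold:   # distances agree: cast votes
                votes[i, k] += 1
                votes[i, l] += 1
                votes[j, k] += 1
                votes[j, l] += 1
    # row-wise maximum: each spatial point's associated target point
    return [(i, int(np.argmax(votes[i]))) for i in range(len(S))]
```

Because only relative distances are compared, the association is invariant to the rigid transform between the reconstructed point cloud and the map points, which is exactly what makes it usable before the vehicle pose is known.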
The point cloud association step above is an inventive point of the embodiments of the present disclosure, and addresses the technical problem mentioned in the background: excessive reliance on feature point matching between multiple frames of road images causes large deviations in the positioning result when the matching is wrong, which affects the automatic driving safety of the vehicle. The underlying factor is that feature association between multiple frames shot by a monocular camera has low reliability and is ill-suited to associating sparse point clouds. Resolving this factor improves both association accuracy and vehicle positioning accuracy. To achieve this effect, the above sparse point cloud association method is introduced. First, a correlation matrix is generated from the target three-dimensional point sequence and the spatial three-dimensional point sequence, with each value in the matrix representing the degree of correlation between a target three-dimensional point and a spatial three-dimensional point: the larger the correlation value, the more likely the two points represent the same point in space. The correlation matrix is then updated using the point-to-point distance values within the target three-dimensional point sequence and within the spatial three-dimensional point sequence.
In this way, the association between target three-dimensional points and spatial three-dimensional points is determined by the updated correlation matrix, realizing matching between the sparse spatial three-dimensional point set and the actual target three-dimensional point set. This reduces the uncertainty of multi-frame image association, adapts better to sparse point clouds, and allows a single frame from each vehicle-mounted camera to be used for more accurate positioning.
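The distance-consistency update described above can be sketched as follows. This is a hypothetical minimal implementation, not the patent's actual code: the function name, the voting scheme (incrementing each supported pairing by 1), and the `dist_tol` threshold are illustrative assumptions.

```python
import numpy as np

def associate_point_clouds(spatial_pts, target_pts, dist_tol=0.5):
    """Hypothetical sketch of the correlation-matrix association step.

    spatial_pts: (m, 3) array of reconstructed spatial 3-D points
    target_pts:  (n, 3) array of map-derived target 3-D points
    Returns index pairs (i, k) matching spatial point i to target point k.
    """
    m, n = len(spatial_pts), len(target_pts)
    corr = np.zeros((m, n))
    # Pairwise distance values within each point set.
    d_spatial = np.linalg.norm(spatial_pts[:, None] - spatial_pts[None, :], axis=-1)
    d_target = np.linalg.norm(target_pts[:, None] - target_pts[None, :], axis=-1)
    # Vote: if the distance between spatial points i, j is close to the
    # distance between target points k, l, the pairings (i,k), (i,l),
    # (j,k), (j,l) all gain support.
    for i in range(m):
        for j in range(i + 1, m):
            for k in range(n):
                for l in range(k + 1, n):
                    if abs(d_spatial[i, j] - d_target[k, l]) < dist_tol:
                        corr[i, k] += 1
                        corr[i, l] += 1
                        corr[j, k] += 1
                        corr[j, l] += 1
    # The maximum correlation value in each row gives the most strongly
    # supported correspondence for that spatial point.
    return [(i, int(np.argmax(corr[i]))) for i in range(m) if corr[i].max() > 0]
```

Because only internal distances are compared, this voting is invariant to a rigid transform between the two point sets, which is what makes it usable before the vehicle pose is known.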
In some optional implementations of some embodiments, before performing the point cloud association processing on the target three-dimensional point set and the spatial three-dimensional point set to obtain the first three-dimensional point sequence and the second three-dimensional point sequence, the executing body may further perform the following steps:
First, lamp post information is obtained from an electronic map to obtain a lamp post information set. The electronic map may be a high-precision electronic map, whose precision may be within 1 meter.
Secondly, point cloud extraction processing is performed on the lamp post information set to obtain the target three-dimensional point set. The point cloud extraction may take the three-dimensional coordinates of the top point and the bottom point of each lamp post in the high-precision electronic map as the target three-dimensional point set.
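A minimal sketch of the endpoint extraction step, assuming each lamp post record exposes `top` and `bottom` three-dimensional coordinates (the field names are illustrative, not specified by the patent):

```python
def extract_target_points(lamp_post_infos):
    """Collect the top and bottom endpoint coordinates of each lamp post
    record into a flat list of target 3-D points.

    Assumes each record is a mapping with illustrative keys 'top' and
    'bottom', each holding an (x, y, z) coordinate triple.
    """
    points = []
    for info in lamp_post_infos:
        points.append(tuple(info["top"]))
        points.append(tuple(info["bottom"]))
    return points
```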
And 106, determining the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
In some embodiments, the determining, by the executing body, the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence may include:
First, the attitude information and position information of the target vehicle are determined using the first three-dimensional point sequence and the second three-dimensional point sequence. The attitude information and position information may be obtained by solving for the solution that minimizes a preset residual function over the two sequences. As an example, the preset residual function is as follows:
E(R, T) = Σ_{i=1}^{t} Dist(R·C_i + T, C′_i)
where E() represents the residual function; R represents the attitude information and may be expressed as a vector; T represents the position information and may be expressed as a vector; t represents the number of first three-dimensional points included in the first three-dimensional point sequence; i represents a serial number; C represents the first three-dimensional point sequence and C_i the coordinates of the i-th first three-dimensional point; C′ represents the second three-dimensional point sequence and C′_i the coordinates of the i-th second three-dimensional point; and Dist() represents taking a distance value.
And a second step of determining the attitude information and the position information as positioning information of the target vehicle.
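One standard way to minimize a residual of this form, given one-to-one correspondences, is SVD-based rigid alignment (the Kabsch algorithm). The patent does not name a solver, so the sketch below is an assumption, treating the attitude R as a rotation matrix and the position T as a translation vector:

```python
import numpy as np

def solve_pose(first_pts, second_pts):
    """Closed-form least-squares minimizer of
    E(R, T) = sum_i || R @ C_i + T - C'_i ||
    via the Kabsch algorithm. Requires at least three non-collinear
    corresponding point pairs.
    """
    C = np.asarray(first_pts, dtype=float)
    Cp = np.asarray(second_pts, dtype=float)
    mu_c, mu_cp = C.mean(axis=0), Cp.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (C - mu_c).T @ (Cp - mu_cp)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = mu_cp - R @ mu_c
    return R, T
```

With noiseless correspondences this recovers the exact rigid transform; with noisy reconstructions it returns the least-squares optimum over all rotations and translations.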
The above embodiments of the present disclosure have the following advantages: the vehicle positioning method of some embodiments of the disclosure can provide an efficient, robust and high-precision real-time positioning result for an autonomous vehicle, ensuring the safety of automatic driving. In particular, the reasons that related vehicle positioning results are less accurate are: existing methods have excessive computational complexity, depend strongly on illumination conditions, and have insufficient association accuracy. Based on this, the vehicle positioning method according to some embodiments of the present disclosure first acquires road images around a target vehicle using each vehicle-mounted camera mounted on the target vehicle, obtaining a road image set. Then, data preprocessing is performed on the road images in the road image set to obtain two-dimensional lamp post projection point sets, where the two-dimensional lamp post projection points in each set represent the end points of a lamp post. In this way, two-dimensional lamp post projection points are extracted from road images shot simultaneously by the vehicle-mounted cameras. Because the number of lamp posts captured in a road image is limited, the number of extracted two-dimensional lamp post projection points is also limited, which reduces the workload of subsequent matching and calculation. Next, image point association processing is performed on the two-dimensional lamp post projection points between the two-dimensional lamp post projection point sets, obtaining a set of associated lamp post projection point groups and thereby determining the correspondence of two-dimensional lamp post projection points across the road images.
Then, spatial point cloud reconstruction processing is performed on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set. Through this reconstruction, the relative positions of the two-dimensional lamp post projection points in three-dimensional space are recovered. Next, point cloud association processing is performed on the target three-dimensional point set and the spatial three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, where the first three-dimensional points correspond one-to-one to the second three-dimensional points. This matches the reconstructed spatial three-dimensional point set against the actual, accurate target three-dimensional point set, so that the positioning information of the target vehicle can be determined from the relative positions of corresponding points. Finally, the positioning information of the target vehicle is determined based on the first three-dimensional point sequence and the second three-dimensional point sequence, achieving accurate positioning of the target vehicle.
With further reference to fig. 2, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a vehicle localization apparatus, which correspond to those method embodiments illustrated in fig. 1, and which may be particularly applicable in various electronic devices.
As shown in fig. 2, a vehicle positioning apparatus 200 of some embodiments includes: an acquisition unit 201, a preprocessing unit 202, an image point association unit 203, a reconstruction unit 204, a point cloud association unit 205 and a determination unit 206. The acquisition unit 201 is configured to acquire road images around a target vehicle by using each vehicle-mounted camera mounted on the target vehicle to obtain a road image set; the preprocessing unit 202 is configured to perform data preprocessing on the road images in the road image set to obtain two-dimensional lamp post projection point sets, where the two-dimensional lamp post projection points in the two-dimensional lamp post projection point sets represent end points of lamp posts; the image point association unit 203 is configured to perform image point association processing on the two-dimensional lamp post projection points between the two-dimensional lamp post projection point sets to obtain an associated lamp post projection point group set; the reconstruction unit 204 is configured to perform spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set; the point cloud association unit 205 is configured to perform point cloud association processing on a target three-dimensional point set and the spatial three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, where a first three-dimensional point in the first three-dimensional point sequence and a second three-dimensional point in the second three-dimensional point sequence are in one-to-one correspondence; and the determination unit 206 is configured to determine the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
It is understood that the units recited in the vehicle localization apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and benefits of the method described above are also applicable to the vehicle positioning apparatus 200 and the units included therein, and are not described herein again.
Referring now to fig. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, electronic device 300 may include a processing device (e.g., central processing unit, graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring road images around a target vehicle by utilizing each vehicle-mounted camera mounted on the target vehicle to obtain a road image set; performing data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set, wherein the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set represent end points of a lamp post; performing image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain associated lamp post projection point group sets; carrying out spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set; performing point cloud association processing on a target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, wherein a first three-dimensional point in the first three-dimensional point sequence corresponds to a second three-dimensional point in the second three-dimensional point sequence one to one; and determining the positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, which may be described as: a processor includes an acquisition unit, a preprocessing unit, an image point association unit, a reconstruction unit, a point cloud association unit, and a determination unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires a road image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.

Claims (9)

1. A vehicle localization method, comprising:
acquiring road images around a target vehicle by utilizing each vehicle-mounted camera installed on the target vehicle to obtain a road image set;
performing data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set, wherein the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set represent end points of a lamp post;
performing image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain associated lamp post projection point group sets;
carrying out spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set;
performing point cloud association processing on a target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, wherein a first three-dimensional point in the first three-dimensional point sequence corresponds to a second three-dimensional point in the second three-dimensional point sequence one to one;
determining positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
2. The method of claim 1, wherein the pre-processing the road image in the road image set to obtain each two-dimensional light pole projection point set comprises:
carrying out distortion removal processing on each road image in the road image set to generate a corrected road image, so as to obtain a corrected road image set;
carrying out lamp post identification processing on each corrected road image in the corrected road image set by utilizing a neural network model to obtain each lamp post identification result set;
and determining the end points of the lamp poles identified by the lamp pole identification results in each lamp pole identification result set as two-dimensional lamp pole projection points to obtain each two-dimensional lamp pole projection point set.
3. The method of claim 1, wherein said performing image point association on the two-dimensional light pole proxels between the respective sets of two-dimensional light pole proxels to obtain a set of associated light pole proxel groups comprises:
converting the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set to be under a camera coordinate system of a corresponding vehicle-mounted camera to generate a camera coordinate point group to obtain a camera coordinate point group set;
generating an epipolar constraint equation set by using a basic matrix between vehicle-mounted cameras corresponding to every two associated lamp post projection point sets in the associated lamp post projection point set;
determining an incidence relation of the camera coordinate points in each camera coordinate point group in the camera coordinate point group set based on the epipolar constraint equation set to generate an incidence camera coordinate point group set;
and according to the associated camera coordinate points in the associated camera coordinate point group set, grouping the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set to obtain an associated lamp post projection point group set.
4. The method as claimed in claim 1, wherein the performing spatial point cloud reconstruction processing on the associated light pole projection points in the associated light pole projection point group set to obtain a spatial three-dimensional point set comprises:
and triangularizing the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set.
5. The method of claim 1, wherein prior to said point cloud associating said target three-dimensional point set and said spatial three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, said method further comprises:
obtaining lamp post information from an electronic map to obtain a lamp post information set;
and carrying out point cloud extraction processing on the lamp post information set to obtain a target three-dimensional point set.
6. The method of claim 1, wherein the determining the location information of the target vehicle based on the first and second sequences of three-dimensional points comprises:
determining attitude information and position information of the target vehicle by using the first three-dimensional point sequence and the second three-dimensional point sequence;
determining the attitude information and the position information as positioning information of the target vehicle.
7. A vehicle locating device comprising:
an acquisition unit configured to acquire road images around a target vehicle using respective vehicle-mounted cameras mounted on the target vehicle, resulting in a road image set;
the preprocessing unit is configured to perform data preprocessing on the road images in the road image set to obtain each two-dimensional lamp post projection point set, wherein the two-dimensional lamp post projection points in each two-dimensional lamp post projection point set represent end points of a lamp post;
the image point association unit is configured to perform image point association processing on the two-dimensional lamp post projection points among the two-dimensional lamp post projection point sets to obtain an associated lamp post projection point group set;
the reconstruction unit is configured to perform spatial point cloud reconstruction processing on the associated lamp post projection points in the associated lamp post projection point group set to obtain a spatial three-dimensional point set;
the point cloud association unit is configured to perform point cloud association processing on a target three-dimensional point set and the space three-dimensional point set to obtain a first three-dimensional point sequence and a second three-dimensional point sequence, wherein a first three-dimensional point in the first three-dimensional point sequence corresponds to a second three-dimensional point in the second three-dimensional point sequence one by one;
a determination unit configured to determine positioning information of the target vehicle based on the first three-dimensional point sequence and the second three-dimensional point sequence.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN202211534182.1A 2022-12-02 2022-12-02 Vehicle positioning method and device, electronic equipment and computer readable medium Active CN115620264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211534182.1A CN115620264B (en) 2022-12-02 2022-12-02 Vehicle positioning method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211534182.1A CN115620264B (en) 2022-12-02 2022-12-02 Vehicle positioning method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN115620264A true CN115620264A (en) 2023-01-17
CN115620264B CN115620264B (en) 2023-03-07

Family

ID=84879647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211534182.1A Active CN115620264B (en) 2022-12-02 2022-12-02 Vehicle positioning method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115620264B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116007637A (en) * 2023-03-27 2023-04-25 北京集度科技有限公司 Positioning device, method, in-vehicle apparatus, vehicle, and computer program product

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033489A (en) * 2018-01-12 2019-07-19 华为技术有限公司 A kind of appraisal procedure, device and the equipment of vehicle location accuracy
US20200193195A1 (en) * 2018-12-18 2020-06-18 Here Global B.V. Automatic positioning of 2d image sign sightings in 3d space
CN111563138A (en) * 2020-04-30 2020-08-21 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN111986261A (en) * 2020-08-13 2020-11-24 清华大学苏州汽车研究院(吴江) Vehicle positioning method and device, electronic equipment and storage medium
CN113240813A (en) * 2021-05-12 2021-08-10 北京三快在线科技有限公司 Three-dimensional point cloud information determination method and device
US11087494B1 (en) * 2019-05-09 2021-08-10 Zoox, Inc. Image-based depth data and localization
CN113721254A (en) * 2021-08-11 2021-11-30 武汉理工大学 Vehicle positioning method based on road fingerprint space incidence matrix
CN113763475A (en) * 2021-09-24 2021-12-07 北京百度网讯科技有限公司 Positioning method, device, equipment, system, medium and automatic driving vehicle
CN114399589A (en) * 2021-12-20 2022-04-26 禾多科技(北京)有限公司 Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114413881A (en) * 2022-01-07 2022-04-29 中国第一汽车股份有限公司 Method and device for constructing high-precision vector map and storage medium
CN114993328A (en) * 2022-05-18 2022-09-02 禾多科技(北京)有限公司 Vehicle positioning evaluation method, device, equipment and computer readable medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Lei: "Research on Vehicle Mapping and Localization Methods for Dynamic Scenes" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116007637A (en) * 2023-03-27 2023-04-25 Beijing Jidu Technology Co., Ltd. Positioning device, method, in-vehicle apparatus, vehicle, and computer program product

Also Published As

Publication number Publication date
CN115620264B (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN110427917B (en) Method and device for detecting key points
JP6745328B2 (en) Method and apparatus for recovering point cloud data
CN111174799B (en) Map construction method and device, computer readable medium and terminal equipment
CN109387186B (en) Surveying and mapping information acquisition method and device, electronic equipment and storage medium
CN111415409B (en) Modeling method, system, equipment and storage medium based on oblique photography
CN110826549A (en) Inspection robot instrument image identification method and system based on computer vision
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN116182878B (en) Road curved surface information generation method, device, equipment and computer readable medium
CN115620264B (en) Vehicle positioning method and device, electronic device and computer-readable medium
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN113781478B (en) Oil tank image detection method, oil tank image detection device, electronic equipment and computer readable medium
CN109034214B (en) Method and apparatus for generating a mark
CN112598731B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN109816791B (en) Method and apparatus for generating information
CN114119973A (en) Spatial distance prediction method and system based on image semantic segmentation network
CN112597174B (en) Map updating method and device, electronic equipment and computer readable medium
CN112597788B (en) Target measuring method, target measuring device, electronic apparatus, and computer-readable medium
CN115393423A (en) Target detection method and device
CN111383337B (en) Method and device for identifying objects
CN112766068A (en) Vehicle detection method and system based on gridding labeling
CN113744361A (en) Three-dimensional high-precision map construction method and device based on trinocular vision
CN116630436B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium
CN113542800B (en) Video picture scaling method, device and terminal equipment
CN115307652B (en) Vehicle pose determination method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant