CN113435392A - Vehicle positioning method and device applied to automatic parking and vehicle - Google Patents


Info

Publication number
CN113435392A
CN113435392A (Application CN202110780085.XA)
Authority
CN
China
Prior art keywords
information
image
frame
vehicle
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110780085.XA
Other languages
Chinese (zh)
Inventor
刘奇胜
常松涛
曾清喻
王玉斌
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202110780085.XA
Publication of CN113435392A
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/06Automatic manoeuvring for parking

Abstract

The present disclosure provides a vehicle positioning method and device applied to automatic parking, and a vehicle, relating to the fields of autonomous parking, automatic driving, and intelligent transportation within the technical field of artificial intelligence. The method includes: acquiring an image set, where the image set includes a first image captured by a first camera provided on a vehicle and a second image captured by a second camera provided on the vehicle, the first camera and the second camera being cameras with different purposes; performing two-dimensional point feature extraction on each frame of the first image to obtain two-dimensional point information of each frame of the first image, and detecting each frame of the second image to obtain perception information of each frame of the second image; and determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information. The method achieves wider applicability, improves the flexibility of vehicle positioning, and improves the accuracy and reliability of vehicle positioning.

Description

Vehicle positioning method and device applied to automatic parking and vehicle
Technical Field
The present disclosure relates to the technical field of autonomous parking, autonomous driving, and intelligent transportation in the technical field of artificial intelligence, and in particular, to a vehicle positioning method and apparatus applied to autonomous parking, and a vehicle.
Background
Automatic parking means that a vehicle parks itself without manual control. It spares the driver the trouble of complicated parking operations and complex parking environments, and has therefore become one of the important technologies in automatic driving.
When a vehicle parks automatically, its positioning pose (such as its displacement and attitude angle) needs to be determined so that the vehicle can be controlled to park in the corresponding parking space based on that pose. The method generally adopted in the prior art is as follows: the rough position of the vehicle is determined through a Global Positioning System (GPS), and the positioning pose of the vehicle is determined by combining the rough position with a preset lidar point cloud map.
Disclosure of Invention
The disclosure provides a vehicle positioning method and device applied to automatic parking for improving vehicle positioning accuracy and a vehicle.
According to a first aspect of the present disclosure, there is provided a vehicle positioning method applied to automatic parking, including:
acquiring an image set, the image set comprising: a first image captured by a first camera provided on a vehicle and a second image captured by a second camera provided on the vehicle, wherein the first camera and the second camera are cameras with different purposes;
performing two-dimensional point feature extraction on each frame of the first image to obtain two-dimensional point information of each frame of the first image, and detecting each frame of the second image to obtain perception information of each frame of the second image;
and determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information.
According to a second aspect of the present disclosure, there is provided a vehicle positioning device applied to automatic parking, including:
an acquisition unit for acquiring an image set, the image set comprising: a first image captured by a first camera provided on a vehicle and a second image captured by a second camera provided on the vehicle, wherein the first camera and the second camera are cameras with different purposes;
the feature extraction unit is used for performing feature extraction of two-dimensional points on each frame of the first image to obtain two-dimensional point information of each frame of the first image;
the detection unit is used for detecting each frame of the second image to obtain perception information of each frame of the second image;
and the determining unit is used for determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to a sixth aspect of the present disclosure, there is provided a vehicle including: an image acquisition apparatus and an apparatus according to the second aspect, wherein the image acquisition apparatus is configured to acquire a set of images.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a scene diagram of a vehicle positioning method applied to automatic parking, in which an embodiment of the present disclosure may be implemented;
FIG. 2 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a vehicle according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of inter-frame relative poses according to an embodiment of the present disclosure;
FIG. 6 is a schematic illustration of a degree of coincidence according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of the same temporal relationship according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 9 is a schematic diagram according to a fourth embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device for implementing the vehicle positioning method applied to automatic parking according to the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of artificial intelligence technology, automatic driving technology has new breakthroughs, automatic parking is one of important technologies in automatic driving technology, and determining the positioning of a vehicle (such as determining the positioning pose of the vehicle) is an important link in automatic parking.
For example, in the application scenario shown in fig. 1, vehicle 101 may be parked in parking space a by automatic parking.
In the related art, the vehicle is usually positioned by a global positioning system method, a point cloud positioning method, or a semantic map method.
The global positioning system method is as follows: the positioning pose of the vehicle is determined through the GPS real-time kinematic carrier-phase differential technique (GPS-RTK).
However, when the global positioning system method is used to determine the positioning pose of the vehicle, the accuracy of the positioning pose is at the centimeter level, which presents the technical problem of relatively low accuracy.
The point cloud positioning method is as follows: a point cloud is obtained by a lidar, the rough position of the vehicle is determined by a Global Positioning System (GPS), and within the range of that rough position, the positioning pose of the vehicle is determined from the lidar point cloud and a pre-built point cloud map using a point cloud alignment method such as iterative closest point (ICP).
However, determining the positioning pose of the vehicle by the point cloud positioning method requires a lidar, which is expensive, so it presents the technical problem of high cost.
The semantic map method is as follows: a global semantic map is constructed, a local semantic map is constructed from the vehicle position information acquired by the global positioning system, and particle filtering is performed on the global semantic map according to the local semantic map to obtain the positioning pose of the vehicle.
For example, for a parking lot application scene, when the vehicle is determined to have reached the vicinity of the parking lot entrance according to its position information, a local semantic map is built as the vehicle travels, and the positioning pose of the vehicle in the global semantic map is finally obtained through particle filtering and similar processing.
However, determining the positioning pose of the vehicle by the semantic map method imposes stricter requirements on the application scene, and for a parking lot with a complex structure, the positioning accuracy and reliability are low.
To avoid at least one of the above technical problems, the present disclosure proposes the following inventive concept: acquire images captured by cameras with different purposes on a vehicle, and determine the positioning pose of the vehicle using the two-dimensional point information of the image captured by a camera with one purpose and the perception information of the image captured by a camera with another purpose.
Based on this inventive concept, the present disclosure provides a vehicle positioning method and device applied to automatic parking, and a vehicle, applied to the fields of autonomous parking, automatic driving, and intelligent transportation within the technical field of artificial intelligence, so as to improve the accuracy and reliability of vehicle positioning.
Referring to fig. 2, fig. 2 is a schematic diagram according to a first embodiment of the disclosure.
As shown in fig. 2, a vehicle positioning method applied to automatic parking according to an embodiment of the present disclosure includes:
s201: a set of images is acquired.
Wherein the image set comprises: a first image captured by a first camera provided on the vehicle and a second image captured by a second camera provided on the vehicle, wherein the first camera and the second camera are cameras with different purposes.
For example, the execution subject of the embodiment of the present disclosure may be a vehicle positioning device applied to automatic parking (hereinafter simply referred to as the vehicle positioning device). The vehicle positioning device may be an on-board terminal provided in the vehicle, a computer provided in the vehicle, a processor provided in the vehicle, or a chip (such as an internet-of-vehicles chip) provided in the vehicle; this embodiment is not limited in this respect.
The vehicle may specifically be an autonomous vehicle, also called a driverless car. An autonomous vehicle mainly relies on the cooperation of artificial intelligence, computer vision, radar, monitoring devices, and a navigation and positioning system, combined with a monocular or binocular camera and machine vision technology, to recognize traffic lights, traffic signs, lane lines, short-range low-speed obstacles, and the like in real time. It can also communicate with road infrastructure and a cloud database, so that the vehicle travels along a planned route in accordance with traffic rules.
The purpose of a camera may be understood as capturing images in a particular visual dimension; that is, the visual dimensions of the images captured by the first camera and the second camera are not the same. In this embodiment, the number of first cameras and/or second cameras is not limited.
For example, the first camera is a forward-looking camera for capturing forward-view images in the vehicle's forward-view dimension, and the second camera is a top-view camera for capturing top-view images in the vehicle's overhead dimension.
The number of the front cameras can be one or more; the number of the overhead cameras may be one or more.
S202: and performing two-dimensional point feature extraction on each frame of first image to obtain two-dimensional point information of each frame of first image.
Illustratively, the pixel points in the first image are two-dimensional points based on a camera coordinate system of the first camera, and the two-dimensional point information of the first image may include coordinates of the two-dimensional points of the first image.
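The two-dimensional point feature extraction of S202 can be sketched as follows. The patent does not name a specific detector, so this minimal Harris-style corner extractor is only an illustrative assumption (the function name and the `k` and `thresh` parameters are hypothetical):

```python
import numpy as np

def extract_2d_points(image: np.ndarray, k: float = 0.04, thresh: float = 0.01):
    """Illustrative 2D point feature extraction: Harris-style corners.

    Returns an (N, 2) array of (u, v) pixel coordinates, i.e. the
    'two-dimensional point information' of one first-image frame.
    """
    # Image gradients via central differences (axis 0 = rows = y).
    Iy, Ix = np.gradient(image.astype(np.float64))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # 3x3 box filter to accumulate the structure tensor per pixel.
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Harris corner response: det(M) - k * trace(M)^2
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    ys, xs = np.where(R > thresh * R.max())
    return np.stack([xs, ys], axis=1)
```

In a real system a production detector (e.g. FAST or ORB keypoints with descriptors for matching) would be used; the sketch only shows where coordinates of 2D points come from.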
S203: and detecting each frame of second image to obtain the perception information of each frame of second image.
Illustratively, the perception information of the second image may include: the coordinates of each pixel in the second image, information related to the vehicle's environment, information related to the vehicle's speed, and so on; these are not exhaustively listed here.
S204: and determining the positioning pose of the vehicle according to the information of each two-dimensional point and each perception information.
The two-dimensional point information is determined based on the first image captured by the first camera, the perception information is determined based on the second image captured by the second camera, and the first camera and the second camera are cameras with different purposes. Thus, this embodiment can be understood as: images are captured by cameras with different purposes, and the positioning pose of the vehicle is determined by combining content of different dimensions from those images.
Based on the above analysis, the embodiment of the present disclosure provides a vehicle positioning method applied to automatic parking, comprising: acquiring an image set, the image set comprising a first image captured by a first camera provided on a vehicle and a second image captured by a second camera provided on the vehicle, where the first camera and the second camera are cameras with different purposes; performing two-dimensional point feature extraction on each frame of the first image to obtain two-dimensional point information of each frame of the first image; detecting each frame of the second image to obtain perception information of each frame of the second image; and determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information. This embodiment introduces the following technical features: determining the two-dimensional point information of the first image and the perception information of the second image, where the first image and the second image are captured by cameras with different purposes, and determining the positioning pose of the vehicle from both. On one hand, because the positioning pose is determined by combining the two-dimensional point information and the perception information, the method can be applied to various scenes, including parking lots with complex structures; this overcomes the scene-limitation problem of the related art, achieves wide applicability, and improves the flexibility of vehicle positioning. On the other hand, combining the two-dimensional point information and the perception information avoids the low-precision problem of the related art and improves the accuracy and reliability of vehicle positioning.
Referring to fig. 3, fig. 3 is a schematic diagram according to a second embodiment of the disclosure.
As shown in fig. 3, a vehicle positioning method applied to automatic parking according to an embodiment of the present disclosure includes:
s301: a set of images is acquired.
Wherein the image set comprises: a first image captured by a first camera provided on the vehicle and a second image captured by a second camera provided on the vehicle, wherein the first camera and the second camera are cameras with different purposes.
For an exemplary implementation principle of S301, reference may be made to the first embodiment, which is not described herein again.
In some embodiments, the first camera is a forward looking camera for acquiring forward looking images and the second camera is an overhead looking camera for acquiring overhead looking images.
Illustratively, as shown in fig. 4 (fig. 4 is a schematic view of a vehicle according to an embodiment of the present disclosure), a front-view camera 401 and a top-view camera 402 are provided on a vehicle 400.
As shown in fig. 4, the number of front cameras 401 may be one, and the number of top cameras 402 may be four.
The front-view camera 401 captures each frame of the first image.
Each top-view camera 402 captures its own corresponding top-view image. Taking the current frame as an example, the four current-frame top-view images may be stitched to obtain one second image.
It is worth noting that, as can be seen from fig. 4, in this embodiment the second image is obtained by stitching the images captured by the plurality of top-view cameras, so the second image covers a large area. More vehicle-related information can therefore be obtained, and when the vehicle is positioned based on the second image, the positioning achieves high reliability and accuracy.
It should be understood that the example shown in fig. 4 is only for exemplarily illustrating the number of possible settings of the first camera and the second camera, and the positions of the possible settings, and is not to be construed as a limitation on the settings of the first camera and the second camera.
S302: and performing two-dimensional point feature extraction on each frame of first image to obtain two-dimensional point information of each frame of first image.
Illustratively, the pixel points in the first image are two-dimensional points based on a camera coordinate system of the first camera, and the two-dimensional point information of the first image may include coordinates of the two-dimensional points of the first image.
S303: and detecting each frame of second image to obtain the perception information of each frame of second image.
For example, regarding the implementation principle of S302 to S303, reference may be made to the first embodiment, which is not described herein again.
S304: and determining three-dimensional point information of image points on the first image of each frame according to the map of the current position of the vehicle.
For example, when the method of the embodiment of the present disclosure is applied to the application scenario shown in fig. 1, the vehicle positioning device may retrieve a map of the current position of the vehicle. The map of the current position may be pre-stored in the vehicle (that is, it may be an offline map) or acquired by the vehicle from a cloud server (that is, it may be an online map); this embodiment is not limited in this respect.
S305: and constructing a point pair of the image points of each frame of the first image according to the three-dimensional point information and the two-dimensional information of the image points on each frame of the first image.
For example, if there are m frames of first images in total, a set of 3D-2D point pairs is constructed for each of the m frames; for the specific method of constructing 3D-2D point pairs, reference may be made to the related art, which is not repeated here. In some embodiments, if the running speed of the vehicle is less than a preset fifth threshold, the point pairs of each frame of the first image are merged.
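The 3D-2D point-pair construction of S305, and the speed-gated merging just described, can be sketched as below. The `matches` input and the concrete `speed_thresh` value standing in for the patent's unspecified "fifth threshold" are hypothetical, since the patent leaves both the matching step and the threshold value open:

```python
def build_3d2d_pairs(map_points_3d, image_points_2d, matches):
    """Pair each matched map 3D point with its 2D image observation.

    `matches` is a list of (map_index, image_index) tuples produced by some
    feature-matching step (not specified in the patent).
    """
    return [(map_points_3d[i], image_points_2d[j]) for i, j in matches]


def maybe_merge_pairs(pairs_per_frame, speed, speed_thresh=0.5):
    """If the vehicle speed is below the (hypothetical) fifth threshold,
    merge the per-frame point pairs into a single combined set."""
    if speed < speed_thresh:
        merged = [p for frame in pairs_per_frame for p in frame]
        return [merged]
    return pairs_per_frame
```

Merging at low speed makes sense because nearly stationary frames observe almost the same scene, so pooling their pairs yields a larger, more stable set for pose estimation.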
S306: and determining the positioning pose of the vehicle according to the point pairs of each frame of the first image and each perception information.
Combining the above analysis, a point pair of the first image represents both a feature of the map (the three-dimensional information) and the two-dimensional information of the first image. This is equivalent to determining the positioning pose of the vehicle by combining three dimensions: the features of the map, the two-dimensional information of the first image, and the perception information. The vehicle positioning dimensions are therefore relatively comprehensive, which can improve the accuracy and reliability of vehicle positioning.
In some embodiments, S306 may include the steps of:
the first step is as follows: and determining the inter-frame relative pose of the vehicle in each two adjacent frames of the first images according to the point pairs of each frame of the first image and each perception information.
In some embodiments, the sensory information includes wheel speed data that is used to characterize wheel speed of the vehicle.
The first step may include: recursively computing, from the wheel speed data and the point pairs of each first image, the inter-frame relative pose between every two adjacent frames of the first image.
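One way the recursion from wheel speed data to an inter-frame relative pose could look is a dead-reckoning step. The patent does not specify a motion model, so this differential-drive sketch (all names and parameters are hypothetical) is only illustrative:

```python
import math

def relative_pose_from_wheel_speed(v_left, v_right, track_width, dt):
    """Dead-reckon the inter-frame relative pose (dx, dy, dtheta) of the
    vehicle between two first-image frames, assuming a differential-drive
    model and constant wheel speeds over the frame interval dt."""
    v = 0.5 * (v_left + v_right)              # forward speed
    omega = (v_right - v_left) / track_width  # yaw rate
    dtheta = omega * dt
    if abs(omega) < 1e-9:                     # straight-line motion
        dx, dy = v * dt, 0.0
    else:                                     # circular-arc motion
        r = v / omega                         # turning radius
        dx = r * math.sin(dtheta)
        dy = r * (1.0 - math.cos(dtheta))
    return dx, dy, dtheta
```

In the patent's pipeline such an odometry estimate would be fused with the visual point pairs; here only the wheel-speed side of that recursion is shown.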
The second step: performing perspective-n-point (PnP) computation on the point pairs of each frame of the first image to obtain the camera pose of each frame of the first image.
The third step: taking the camera pose of the i-th frame of the first image as a base, determining the relative pose information of the j-th frame of the first image according to the inter-frame relative poses of every two adjacent first images between the i-th frame and the j-th frame, thereby obtaining a group of relative pose information.
Wherein i and j are positive integers which are more than or equal to 1 and less than or equal to Q, Q is the total frame number of each first image, and Q is a positive integer which is more than or equal to 2.
For example, as shown in fig. 5, the total number of frames of the first image is four frames, and is image 1, image 2, image 3, and image 4, respectively, as shown in fig. 5.
The inter-frame relative pose between the image 1 and the image 2 is an inter-frame relative pose a, the inter-frame relative pose between the image 2 and the image 3 is an inter-frame relative pose B, and the inter-frame relative pose between the image 3 and the image 4 is an inter-frame relative pose C.
Illustratively, the relative pose information of image 2 is determined from the camera pose of image 1 and inter-frame relative pose A; the relative pose information of image 3 is determined from the camera pose of image 1, inter-frame relative pose A, and inter-frame relative pose B; the relative pose information of image 4 is determined from the camera pose of image 1, inter-frame relative pose A, inter-frame relative pose B, and inter-frame relative pose C; and the relative pose information of image 1 is the camera pose of image 1 itself. A group of relative pose information is thus obtained, and each group of relative pose information is obtained by analogy.
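The chaining described in the third step — composing the base camera pose with successive inter-frame relative poses — can be sketched as follows. Planar SE(2) poses (x, y, theta) are an assumption here; the patent's poses may be full 6-DoF:

```python
import math

def compose(p, q):
    """Compose two SE(2) poses p ∘ q, each given as (x, y, theta):
    apply q in the local frame defined by p."""
    px, py, pt = p
    qx, qy, qt = q
    c, s = math.cos(pt), math.sin(pt)
    return (px + c * qx - s * qy, py + s * qx + c * qy, pt + qt)


def chain_poses(base_pose, inter_frame_rel_poses):
    """From the camera pose of frame i (base_pose) and the inter-frame
    relative poses A, B, C, ..., produce one 'group of relative pose
    information': the pose of every frame up to frame j."""
    poses = [base_pose]
    for rel in inter_frame_rel_poses:
        poses.append(compose(poses[-1], rel))
    return poses
```

With base image 1 and relative poses A, B, C this yields the poses of images 1 through 4, matching the fig. 5 example.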
The fourth step: and constructing a nonlinear optimization model according to each group of relative pose information, each perception information and the point pair of each frame of the first image, and determining the positioning pose of the vehicle based on the nonlinear optimization model.
It should be noted that, in this embodiment, the inter-frame relative pose of each pair of adjacent frames of the first image is determined, and each group of relative pose information is determined based on these inter-frame relative poses, so that the vehicle is positioned by combining each group of relative poses, thereby improving the accuracy and reliability of vehicle positioning.
In some embodiments, the fourth step may comprise the sub-steps of:
the first substep: and determining a camera coordinate system where the first camera is located, and projecting the coordinate system conversion information to a map coordinate system where the map is located.
This step may be understood as determining the transformation relationship between the camera coordinate system and the map coordinate system, i.e., taking the map coordinate system as the reference coordinate system and expressing the camera coordinate system in that reference frame.
The second substep: and determining the optimal seed pose information from each group of relative pose information according to the coordinate system conversion information.
And the optimal seed pose information is the relative pose information with the highest confidence level in each group of relative pose information.
In some embodiments, the second substep may comprise: and determining the number of point pairs in each group of relative pose information, determining the average reprojection error of the point pairs in each group of relative pose information according to the coordinate system conversion information, and determining the group of relative pose information with the largest number of point pairs and the smallest average reprojection error as the optimal seed pose information.
For example, for each point pair in a group of relative pose information, the reprojection error of that point pair is determined according to the coordinate system conversion information (for the specific calculation method, see the related art, which is not repeated here), and the reprojection errors are averaged to obtain the average reprojection error of the point pairs in that group.
If multiple groups of relative pose information tie for the maximum number of point pairs, the group among them with the smallest average reprojection error may be determined as the optimal seed pose information.
Similarly, if multiple groups of relative pose information tie for the minimum average reprojection error, the group among them with the largest number of point pairs may be determined as the optimal seed pose information.
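The selection rule of the second substep — most point pairs first, then smallest average reprojection error as tie-breaker — can be sketched in one line; the dict field names here are hypothetical, not from the patent:

```python
def select_optimal_seed(groups):
    """Pick the optimal seed pose information from a list of groups, each a
    dict with 'num_pairs' (int) and 'avg_reproj_err' (float).

    Sorting key: maximize num_pairs, then minimize avg_reproj_err, which
    also handles both tie-breaking cases described in the text."""
    return min(groups, key=lambda g: (-g["num_pairs"], g["avg_reproj_err"]))
```

Because the key is lexicographic, the same function covers both tie cases: equal pair counts fall through to the error comparison, and equal errors are dominated by the pair count.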
It should be noted that, in this embodiment, the optimal seed pose information is the relative pose information with a large number of point pairs, so the point-pair content it represents is relatively rich; that is, it has strong representation capability. It is also the relative pose information with the smallest average reprojection error, so its error is relatively small; that is, it has high accuracy.
The third substep: and constructing a nonlinear optimization model according to the optimal seed pose information and each perception information, and determining the positioning pose of the vehicle based on the nonlinear optimization model.
Based on the above analysis, the optimal seed pose information has strong representation capability and high accuracy, so when the positioning pose of the vehicle is determined by combining the optimal seed pose information, the positioning pose can achieve higher accuracy and reliability.
In some embodiments, the perception information includes: first coordinates, in the vehicle body coordinate system, of feature points of parking-space-related information in the second image; the third substep may include the following refining steps:
A first refining step: determining second coordinates of the feature points in each frame of the second image in the map coordinate system where the map is located.
A second refining step: for each identical feature point in each frame of the second image, a contact ratio (i.e., a degree of coincidence) between the first coordinate and the second coordinate is calculated (see fig. 6 for details), and observation point pair information is constructed for each frame of the second image from the feature points whose contact ratio is greater than a preset contact ratio threshold.
The observation point pair information includes: coordinates of elements such as parking space corner points, parking space lines, and arrows.
It should be noted that, in this embodiment, determining the observation point pair information in combination with the contact ratio implements a filtering process on the observation point pair information, so that the observation point pair information has higher reliability, which in turn improves the accuracy and reliability of the positioning pose of the vehicle subsequently determined based on the observation point pair information.
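As a rough sketch of the first and second refining steps, the following assumes a planar (x, y, yaw) vehicle pose and uses a Euclidean-distance check in place of the contact ratio metric, which the text does not specify; all names and thresholds are illustrative:

```python
import math

def body_to_map(pose, pt_body):
    # Transform a feature point from the vehicle body frame into the map
    # frame, given a planar vehicle pose (x, y, yaw) in map coordinates.
    x, y, yaw = pose
    px, py = pt_body
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * px - s * py, y + s * px + c * py)

def build_observation_pairs(pose, features, dist_threshold):
    # Keep only the feature points whose body-frame (first) coordinate, once
    # projected into the map frame, lies close to the mapped (second)
    # coordinate; the distance check stands in for the contact-ratio filter.
    pairs = []
    for first_body, second_map in features:
        pred = body_to_map(pose, first_body)
        dist = math.hypot(pred[0] - second_map[0], pred[1] - second_map[1])
        if dist < dist_threshold:
            pairs.append((first_body, second_map))
    return pairs
```

A point one meter ahead of a vehicle at (1, 2) facing +90° lands at map coordinate (1, 3); a point whose mapped coordinate disagrees with that prediction is filtered out.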
A third refining step: and determining distribution information of each observation point pair information, wherein the distribution information represents the distribution proportion of the characteristic points in each observation point pair information in the image.
For example, the image may be divided into three regions, namely a left region, a middle region, and a right region. If a feature points of the observation point pair information are distributed in the left region, b feature points are distributed in the middle region, and c feature points are distributed in the right region, the distribution information (i.e., the distribution ratio) may be (a + b)/(c + b).
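A minimal sketch of this distribution ratio, implementing the (a + b)/(c + b) formula exactly as given in the example; the equal-thirds region split and the (x, y) point representation are assumptions:

```python
def distribution_ratio(points, image_width):
    """Split the image into left / middle / right thirds, count the feature
    points of the observation point pair information in each region, and
    return (a + b) / (c + b) with a, b, c the left, middle, right counts."""
    third = image_width / 3.0
    a = sum(1 for x, _ in points if x < third)        # left region
    c = sum(1 for x, _ in points if x >= 2 * third)   # right region
    b = len(points) - a - c                           # middle region
    return (a + b) / (c + b) if (c + b) else float("inf")
```

With one point on the left, two in the middle, and one on the right of a 300-pixel-wide image, the ratio is (1 + 2)/(1 + 2) = 1.0.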
A fourth refining step: and judging whether the distribution ratio is larger than a first threshold value, if so, executing a fifth refining step, and if not, executing a ninth refining step.
The first threshold may be set by the vehicle positioning device according to needs, history, tests, and the like, which is not limited in this embodiment.
A fifth refining step: judging whether the number of feature points in the observation point pair information is greater than a second threshold; if so, executing the sixth refining step, and if not, repositioning the vehicle.
Similarly, the second threshold may be set by the vehicle positioning device according to the needs, history, and tests, which is not limited in this embodiment.
A sixth refining step: determining, according to each piece of perception information and the point pairs of each frame of the first image, the first images and the second images having the same time relationship.
It should be understood that, in the present embodiment, the same time relationship means being approximately the same in time. Specifically, it can be understood as follows: if, compared with the times of the other second images, the time of a certain first image is relatively close to the time of a certain second image, the two times are regarded as the same time, and the two images are a first image and a second image having the same time relationship.
For example, as shown in fig. 7, the first image includes an image a, an image B, and an image C, and the second image includes an image a, an image B, and an image C.
Based on T1, T2, T3, and T4 on the time axis T shown in fig. 7, it can be seen that: image a has the same temporal relationship as image a, image B has the same temporal relationship as image B, and image C has the same temporal relationship as image C.
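The "same time relationship" pairing illustrated by fig. 7 can be sketched as a nearest-timestamp match; the image names and timestamps below are illustrative:

```python
def pair_by_time(first_images, second_images):
    """Pair each first (e.g., front-view) image with the second (e.g.,
    overhead-view) image whose timestamp is closest, mirroring the 'same
    time relationship' in the text. Images are (name, timestamp) tuples."""
    pairs = []
    for name1, t1 in first_images:
        # Nearest second image on the time axis.
        name2, _ = min(second_images, key=lambda img: abs(img[1] - t1))
        pairs.append((name1, name2))
    return pairs

firsts = [("a", 0.00), ("b", 0.11), ("c", 0.19)]
seconds = [("A", 0.01), ("B", 0.10), ("C", 0.20)]
```

With these timestamps, image a pairs with image A, b with B, and c with C, matching the relationships read off the time axis in fig. 7.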
A seventh refining step: the relative pose of the vehicle between each of the first and second images having the same temporal relationship is determined.
In conjunction with the above embodiment, since the image a and the image A have the same time relationship, this refining step can be understood as follows: the vehicle pose of the vehicle in the image a (called a first vehicle pose for distinction) and the vehicle pose of the vehicle in the image A (called a second vehicle pose for distinction) are determined, and the vehicle relative pose is determined according to the first vehicle pose and the second vehicle pose; the other image pairs are processed in the same way, which is not repeated here.
An eighth refining step: constructing a nonlinear optimization model by taking the vehicle relative pose, the inter-frame relative pose, the observation point pair information, and the point pairs of each frame of the first image whose average reprojection error is smaller than a preset fourth threshold as residual constraint terms, taking the optimal seed pose information as the initial value of the state quantity to be optimized, and determining the positioning pose of the vehicle based on the nonlinear optimization model.
The nonlinear optimization model includes known parameters and unknown parameters. The known parameters include the residual constraint terms and the initial value of the state quantity to be optimized: in this embodiment, the residual constraint terms include the vehicle relative pose, the inter-frame relative pose, the observation point pair information, and the point pairs of each frame of the first image whose average reprojection error is smaller than the preset fourth threshold, and the initial value of the state quantity to be optimized is the optimal seed pose information. The unknown parameter is the positioning pose of the vehicle.
In some embodiments, a height coordinate may also be introduced into the residual constraint terms. For example, the teaching track point closest to the optimal seed pose information in a preset teaching track is determined, the height coordinate of that teaching track point is used as the height coordinate in the constructed nonlinear optimization model, and the positioning pose of the vehicle is determined according to the nonlinear optimization model including the height coordinate.
It is worth noting that, in this embodiment, introducing the height coordinate enriches the residual constraint terms used for determining the positioning pose of the vehicle, so that the accuracy and reliability of the determined positioning pose of the vehicle can be improved.
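The height-coordinate lookup from the teaching track might look as follows; the (x, y, z) track-point representation is an assumption for illustration:

```python
import math

def height_from_teaching_track(seed_xy, track):
    """Look up the height coordinate of the teaching-track point nearest to
    the best seed pose position; the track is assumed to be a list of
    (x, y, z) points recorded during the teaching drive."""
    nearest = min(track, key=lambda p: math.hypot(p[0] - seed_xy[0],
                                                  p[1] - seed_xy[1]))
    return nearest[2]  # the z (height) coordinate of the nearest track point
```

For a seed position of (4.6, 0.2) and track points at x = 0, 5, and 10, the point at x = 5 is nearest, so its height is used.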
It should be noted that, in this embodiment, the positioning pose of the vehicle is determined based on the residual constraint and the initial value of the state quantity to be optimized, so that the positioning pose of the vehicle is a relatively optimized result, and therefore, the accuracy and reliability of the positioning pose of the vehicle can be improved.
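To illustrate only the optimization pattern described above (a seed pose as the initial value of the state to be optimized, observations contributing residual terms), here is a toy translation-only least-squares refinement; a real implementation would optimize full poses with a nonlinear solver such as Ceres or g2o:

```python
def refine_pose(seed, observations, iters=50, step=0.1):
    """Gradient descent on the sum of squared residuals of a translation-only
    model: each observation pairs a body-frame point with a map-frame point,
    and the residual is (x + bx - mx, y + by - my). This only shows the
    structure (initial value + residual terms -> optimized state)."""
    x, y = seed
    for _ in range(iters):
        gx = gy = 0.0
        for (bx, by), (mx, my) in observations:
            gx += 2 * (x + bx - mx)  # d/dx of the squared residual
            gy += 2 * (y + by - my)  # d/dy of the squared residual
        x -= step * gx / len(observations)
        y -= step * gy / len(observations)
    return (x, y)
```

Starting from a seed at the origin with observations whose true offset is (2, 2), the refinement converges to that offset.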
A ninth refining step: judging whether the distribution ratio is greater than a preset third threshold; if so, executing the tenth refining step, and if not, repositioning the vehicle.
Similarly, the third threshold may be set by the vehicle positioning device according to the needs, history, and tests, which is not limited in this embodiment.
That is, if the distribution ratio is greater than the third threshold value and less than the first threshold value, the tenth refinement step is performed.
A tenth refining step: calculating the average contact ratio of the observation point pair information.
An eleventh refining step: and judging whether the average contact ratio is greater than a preset fourth threshold value, if so, executing a sixth refining step, and if not, repositioning the vehicle.
Similarly, the fourth threshold may be set by the vehicle positioning device according to the needs, history, and tests, which is not limited in this embodiment.
In some embodiments, after the positioning pose of the vehicle is determined, a driving strategy for parking the vehicle into the parking space can be determined according to the positioning pose of the vehicle, and the vehicle is controlled to park into the parking space based on the driving strategy.
For example, in connection with the application scenario shown in fig. 1, after the positioning pose of the vehicle is determined, it may be determined based on the positioning pose of the vehicle: and finishing the driving strategy that the vehicle is automatically parked to the parking space A, and controlling the vehicle to execute the driving strategy, thereby finishing the parking of the vehicle to the parking space A.
Based on the above analysis, the vehicle positioning method applied to automatic parking according to the embodiment of the present disclosure can improve the accuracy and reliability of vehicle positioning. Therefore, when the vehicle is controlled to automatically park into the parking space A based on the driving strategy determined from the positioning pose of the vehicle, collision events between the vehicle and other vehicles (for example, the vehicle 101 in fig. 1 colliding with the vehicle 102 and/or the vehicle 103 when automatically parking into the parking space A) can be avoided, thereby improving the accuracy, reliability, and safety of vehicle parking.
Referring to fig. 8, fig. 8 is a schematic diagram according to a third embodiment of the disclosure.
As shown in fig. 8, a vehicle positioning device 800 applied to automatic parking according to an embodiment of the present disclosure includes:
an acquisition unit 801 configured to acquire an image set, where the image set includes: a first image collected by a first camera provided on the vehicle and a second image collected by a second camera provided on the vehicle, the first camera and the second camera being cameras with different purposes.
The feature extraction unit 802 is configured to perform feature extraction on two-dimensional points of each frame of the first image to obtain two-dimensional point information of each frame of the first image.
The detecting unit 803 is configured to detect each frame of the second image, and obtain perceptual information of each frame of the second image.
And the determining unit 804 is configured to determine a positioning pose of the vehicle according to the two-dimensional point information and the perception information.
Referring to fig. 9, fig. 9 is a schematic diagram according to a fourth embodiment of the disclosure.
As shown in fig. 9, a vehicle positioning device 900 applied to automatic parking according to an embodiment of the present disclosure includes:
an acquiring unit 901 configured to acquire an image set, where the image set includes: a first image collected by a first camera provided on the vehicle and a second image collected by a second camera provided on the vehicle, the first camera and the second camera being cameras with different purposes.
In some embodiments, the first camera is a forward looking camera for acquiring forward looking images and the second camera is an overhead looking camera for acquiring overhead looking images.
In some embodiments, the number of overhead cameras is multiple; as can be seen from fig. 9, the acquiring unit 901 includes:
an acquiring subunit 9011, configured to acquire a current frame image acquired by each downward-looking camera.
And the splicing subunit 9012 is configured to splice the current frame images to obtain a second current frame image.
The feature extraction unit 902 is configured to perform feature extraction on two-dimensional points of each frame of the first image to obtain two-dimensional point information of each frame of the first image.
And a detecting unit 903, configured to detect each frame of the second image, so as to obtain perceptual information of each frame of the second image.
And a determining unit 904, configured to determine a positioning pose of the vehicle according to the two-dimensional point information and the perception information.
As can be seen in conjunction with fig. 9, in some embodiments, the determining unit 904 includes:
a first determining subunit 9041, configured to determine, according to the map of the current position of the vehicle, three-dimensional point information of an image point on each frame of the first image.
A constructing subunit 9042, configured to construct the point pairs of the image points of each frame of the first image according to the three-dimensional point information and the two-dimensional point information of the image points on each frame of the first image.
And the second determining subunit 9043 is configured to determine the positioning pose of the vehicle according to the point pair of each frame of the first image and each piece of sensing information.
In some embodiments, as shown in fig. 9, the determining unit 904 may further include:
a merging unit 9044, configured to merge the point pairs of the first image of each frame if the traveling speed of the vehicle is less than a preset fifth threshold.
Accordingly, the second determining subunit 9043 may be configured to determine the positioning pose of the vehicle according to the merged point pairs and the respective perception information.
In some embodiments, the second determining subunit 9043 includes:
and the first determining module is used for determining the inter-frame relative pose of the vehicle in each two adjacent frames of the first images according to the point pairs of each frame of the first images and each perception information.
And the resolving module is used for performing N-point perspective (perspective-n-point, PnP) solving on the point pairs of each frame of the first image to obtain the camera pose of each frame of the first image.
And the second determining module is used for determining the relative pose information of the jth frame first image according to the inter-frame relative pose of each two adjacent first images between the ith frame first image and the jth frame first image on the basis of the camera pose of the ith frame first image so as to obtain a group of relative pose information, wherein i and j are positive integers which are more than or equal to 1 and less than or equal to Q, Q is the total frame number of each first image, and Q is a positive integer which is more than or equal to 2.
The construction module is used for constructing a nonlinear optimization model according to each group of relative pose information, each perception information and the point pair of each frame of first image;
and the third determination module is used for determining the positioning pose of the vehicle based on the nonlinear optimization model.
In some embodiments, the construction module is configured to: determine coordinate system conversion information for projecting from the camera coordinate system where the first camera is located to the map coordinate system where the map is located; determine the optimal seed pose information from each group of relative pose information according to the coordinate system conversion information, where the optimal seed pose information is the relative pose information with the highest confidence level among the groups of relative pose information; construct a nonlinear optimization model according to the optimal seed pose information and each piece of perception information; and determine the positioning pose of the vehicle based on the nonlinear optimization model.
In some embodiments, the construction module is configured to determine a number of point pairs in each set of relative pose information, determine an average reprojection error of the point pairs in each set of relative pose information according to the coordinate system transformation information, and determine a set of relative pose information with the largest number of point pairs and the smallest average reprojection error as the best seed pose information.
In some embodiments, the perception information includes: first coordinates, in the vehicle body coordinate system, of feature points of parking-space-related information in the second image; the construction module is configured to: determine second coordinates of the feature points in each frame of the second image in the map coordinate system where the map is located; for each identical feature point in each frame of the second image, calculate the contact ratio between the first coordinate and the second coordinate; construct observation point pair information of each frame of the second image whose contact ratio is greater than a preset contact ratio threshold; and construct the nonlinear optimization model according to the observation point pair information, the optimal seed pose information, and each piece of perception information.
In some embodiments, the construction module is configured to determine distribution information of each observation point pair information, where the distribution information represents a distribution ratio of feature points in each observation point pair information in an image, and if the distribution ratio is greater than a preset first threshold and the number of feature points in each observation point pair information is greater than a preset second threshold, construct the nonlinear optimization model according to the optimal seed pose information, the observation point pair information, and each sensing information.
In some embodiments, the building module is configured to, if the distribution ratio is smaller than the first threshold and larger than a preset third threshold, calculate an average contact ratio of the observation point pair information, and if the average contact ratio is larger than a preset fourth threshold, build the nonlinear optimization model according to the optimal seed pose information, the observation point pair information, and each sensing information.
In some embodiments, the construction module is configured to: determine, according to each piece of perception information and the point pairs of each frame of the first image, the first images and the second images having the same time relationship; determine the vehicle relative pose of the vehicle between each first image and second image having the same time relationship; and construct the nonlinear optimization model by taking the vehicle relative pose, the inter-frame relative pose, the observation point pair information, and the point pairs of each frame of the first image whose average reprojection error is smaller than a preset fourth threshold as residual constraint terms, and taking the optimal seed pose information as the initial value of the state quantity to be optimized.
In some embodiments, the construction module is configured to construct a nonlinear optimization model according to the optimal seed pose information and each piece of perception information, determine a teaching track point closest to the optimal seed pose information in a preset teaching track, and determine a height coordinate of the teaching track point as a height coordinate in the nonlinear optimization model.
And the third determination module is used for determining the positioning pose of the vehicle according to the nonlinear optimization model comprising the height coordinate.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product, including: a computer program stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, and the at least one processor executes the computer program to cause the electronic device to perform the solution provided by any one of the embodiments described above.
FIG. 10 illustrates a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
A number of components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Computing unit 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 1001 executes the respective methods and processes described above, for example, a vehicle positioning method applied to automatic parking. For example, in some embodiments, the vehicle location method applied to automated parking may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1000 via ROM 1002 and/or communications unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the vehicle positioning method applied to automatic parking described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the vehicle positioning method applied to automatic parking by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or a cloud host), which is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
According to another aspect of the disclosed embodiments, there is provided a vehicle including: the image acquisition device and the vehicle positioning device applied to automatic parking as described in any one of the above embodiments, wherein the image acquisition device is used for acquiring an image set.
In some embodiments, the image capture device includes a front-view camera and a top-view camera.
In other embodiments, the image acquisition device may further include cameras for other purposes. If three or more cameras with different purposes are provided on the vehicle, the above method may be adopted to process the images acquired by each pair of cameras with two different purposes to obtain a corresponding positioning pose of the vehicle, and the average of the positioning poses may be calculated, thereby realizing the positioning of the vehicle.
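Averaging the positioning poses from multiple camera pairs could be sketched as follows; the planar (x, y, yaw) pose representation is an assumption, and yaw uses a circular mean so that angles near ±π average correctly:

```python
import math

def average_poses(poses):
    """Average several (x, y, yaw) positioning poses, one per camera pair.
    Positions are averaged arithmetically; the yaw angle is averaged as the
    angle of the summed unit vectors (circular mean)."""
    n = len(poses)
    ax = sum(p[0] for p in poses) / n
    ay = sum(p[1] for p in poses) / n
    s = sum(math.sin(p[2]) for p in poses)
    c = sum(math.cos(p[2]) for p in poses)
    return (ax, ay, math.atan2(s, c))
```

Averaging poses (0, 0, 0.0) and (2, 2, 0.2) gives position (1, 1) and yaw 0.1.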
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this application may be performed in parallel, sequentially, or in a different order, and are not limited herein as long as the desired results of the technical solutions provided by the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (30)

1. A vehicle positioning method applied to automatic parking, comprising:
acquiring an image set, the image set comprising: a first image collected by a first camera provided on a vehicle and a second image collected by a second camera provided on the vehicle, wherein the first camera and the second camera are cameras with different purposes;
performing two-dimensional point feature extraction on each frame of the first image to obtain two-dimensional point information of each frame of the first image, and detecting each frame of the second image to obtain perception information of each frame of the second image;
and determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information.
2. The method according to claim 1, wherein determining the positioning pose of the vehicle from each of the two-dimensional point information and each of the perception information includes:
determining three-dimensional point information of image points on each frame of the first image according to the map of the current position of the vehicle;
constructing point pairs of the image points of each frame of the first image according to the three-dimensional point information and the two-dimensional point information of the image points of each frame of the first image;
and determining the positioning pose of the vehicle according to the point pairs of each frame of the first image and each perception information.
3. The method of claim 2, wherein determining the location pose of the vehicle from the point pairs of the first image and the respective perception information per frame comprises:
determining the inter-frame relative pose of the vehicle in each two adjacent frames of first images according to the point pairs of each frame of the first images and each piece of perception information, and performing N-point perspective calculation on the point pairs of each frame of the first images to obtain the camera pose of each frame of the first images;
determining, on the basis of the camera pose of the ith frame of the first image, relative pose information of the jth frame of the first image according to the inter-frame relative poses of every two adjacent frames of the first image between the ith frame and the jth frame, to obtain a group of relative pose information, wherein i and j are positive integers greater than or equal to 1 and less than or equal to Q, Q is the total number of frames of the first image, and Q is a positive integer greater than or equal to 2;
and constructing a nonlinear optimization model according to each group of relative pose information, each perception information and the point pair of each frame of the first image, and determining the positioning pose of the vehicle based on the nonlinear optimization model.
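The pose-chaining step of claim 3 composes the camera pose of frame i with the inter-frame relative poses up to frame j. A minimal numpy sketch using 4x4 homogeneous SE(3) matrices (the right-multiplication convention is an assumption, not stated by the claim):

```python
import numpy as np

def chain_pose(pose_i, relative_poses):
    """Compose the absolute pose of frame j from the pose of frame i
    and the inter-frame relative poses T_{k->k+1} between them.
    All poses are 4x4 homogeneous transformation matrices."""
    pose = pose_i.copy()
    for rel in relative_poses:
        pose = pose @ rel  # accumulate each inter-frame motion
    return pose
```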
4. The method of claim 3, wherein constructing a non-linear optimization model from the sets of relative pose information, the perceptual information, and the point pairs of the first image for each frame, and determining the positioning pose of the vehicle based on the non-linear optimization model comprises:
determining coordinate system conversion information for projecting from the camera coordinate system in which the first camera is located to the map coordinate system in which the map is located;
determining optimal seed pose information from the groups of relative pose information according to the coordinate system conversion information, wherein the optimal seed pose information is the relative pose information with the highest confidence among the groups of relative pose information;
and constructing a nonlinear optimization model according to the optimal seed pose information and each perception information, and determining the positioning pose of the vehicle based on the nonlinear optimization model.
5. The method of claim 4, wherein determining optimal seed pose information from the sets of relative pose information based on the coordinate system transformation information comprises:
determining the number of point pairs in each group of relative pose information, and determining the average reprojection error of the point pairs in each group of relative pose information according to the coordinate system conversion information;
and determining the group of relative pose information with the largest number of point pairs and the smallest average reprojection error as the optimal seed pose information.
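The seed selection of claim 5 combines a point-pair count with an average reprojection error. A hedged numpy sketch, assuming a pinhole intrinsic matrix K and a 4x4 map-to-camera pose; the lexicographic tie-breaking in `pick_best_seed` is a guess at how "largest count and smallest error" are reconciled:

```python
import numpy as np

def mean_reprojection_error(K, T_cam_from_map, pairs):
    """Average pixel distance between observed 2D points and their 3D map
    points projected through intrinsics K and pose T (4x4, map->camera)."""
    errs = []
    for (u, v), xyz in pairs:
        p_cam = T_cam_from_map @ np.array([*xyz, 1.0])  # map -> camera frame
        proj = K @ p_cam[:3]
        proj = proj[:2] / proj[2]                        # perspective divide
        errs.append(np.linalg.norm(proj - np.array([u, v])))
    return float(np.mean(errs))

def pick_best_seed(candidates):
    """candidates: list of (num_pairs, avg_error, pose). Prefer the most
    point pairs, then the smallest average error (assumed tie-break)."""
    return max(candidates, key=lambda c: (c[0], -c[1]))[2]
```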
6. The method of claim 4, wherein the perception information comprises: a first coordinate, in a vehicle body coordinate system of the vehicle, of a feature point of the parking-space-related information in the second image; and wherein constructing a nonlinear optimization model according to the optimal seed pose information and each piece of perception information comprises:
determining a second coordinate of the feature point in each frame of second image under a map coordinate system where the map is located;
for each identical feature point in each frame of the second image, calculating the degree of overlap between the first coordinate and the second coordinate, and constructing observation point pair information of each frame of the second image from the feature points whose degree of overlap is greater than a preset overlap threshold;
and constructing a nonlinear optimization model according to the observation point pair information, the optimal seed pose information and the perception information.
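Claim 6's "degree of overlap" between the body-frame and map-frame coordinates of a parking-slot feature point is not defined further; the distance-decaying score below is purely illustrative, and the threshold and scale are invented parameters:

```python
import math

def overlap_degree(p_pred, p_map, scale=1.0):
    """Hypothetical overlap score in (0, 1]: 1 when the parking-slot
    feature point predicted into the map frame coincides with its map
    coordinate, decaying with Euclidean distance."""
    d = math.dist(p_pred, p_map)
    return 1.0 / (1.0 + d / scale)

def build_observation_pairs(points_pred, points_map, threshold=0.5):
    """Keep only feature points whose overlap exceeds the threshold.
    Both inputs: dict feature_id -> (x, y) in the map frame."""
    pairs = []
    for fid, p in points_pred.items():
        if fid in points_map and overlap_degree(p, points_map[fid]) > threshold:
            pairs.append((fid, p, points_map[fid]))
    return pairs
```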
7. The method of claim 6, wherein constructing a nonlinear optimization model from the observation point pair information, the optimal seed pose information, and each of the perception information comprises:
determining distribution information of each piece of observation point pair information, wherein the distribution information characterizes the distribution proportion, within the image, of the feature points in each piece of observation point pair information;
and if the distribution ratio is greater than a preset first threshold value and the number of the characteristic points in the observation point pair information is greater than a preset second threshold value, constructing a nonlinear optimization model according to the optimal seed pose information, the observation point pair information and each perception information.
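One plausible reading of claim 7's "distribution proportion" is the fraction of image grid cells covered by the observation feature points; the grid size below is an invented parameter:

```python
def distribution_ratio(points, width, height, grid=4):
    """Fraction of grid cells containing at least one feature point.
    points: iterable of (u, v) pixel coordinates inside a width x height image."""
    occupied = set()
    for u, v in points:
        cx = min(int(u * grid / width), grid - 1)   # clamp edge pixels
        cy = min(int(v * grid / height), grid - 1)
        occupied.add((cx, cy))
    return len(occupied) / (grid * grid)
```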
8. The method of claim 7, further comprising:
if the distribution proportion is smaller than the first threshold and larger than a preset third threshold, calculating the average degree of overlap of the observation point pair information;
and if the average degree of overlap is greater than a preset fourth threshold, constructing a nonlinear optimization model according to the optimal seed pose information, the observation point pair information and each piece of perception information.
9. The method of claim 7, wherein constructing a nonlinear optimization model from the optimal seed pose information, the observation point pair information, and each of the perceptual information comprises:
determining a first image and a second image having the same temporal relationship according to each piece of perception information and the point pairs of each frame of the first image;
determining a vehicle relative pose of the vehicle between each pair of first and second images having the same temporal relationship;
and constructing the nonlinear optimization model by taking the vehicle relative pose, the inter-frame relative poses, the observation point pair information, and the point pairs of each frame of the first image whose average reprojection error is smaller than a preset fourth threshold as residual constraint terms, and taking the optimal seed pose information as the initial value of the state quantity to be optimized.
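The nonlinear optimization of claim 9 stacks several residual blocks and iterates from the optimal seed pose. The Gauss-Newton toy below optimizes only a planar pose (x, y, theta) against a single landmark-observation residual block, as a sketch of the general mechanism rather than the patent's actual model:

```python
import numpy as np

def optimize_pose_2d(points_map, points_obs, x0, iters=20):
    """Gauss-Newton on a planar pose: find (tx, ty, theta) such that
    R(theta) @ p_obs + t matches the mapped landmark p_map for every pair.
    A real model would stack additional residual blocks (inter-frame,
    vehicle relative pose, observation pairs)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        tx, ty, th = x
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        r, J = [], []                         # stacked residuals / Jacobians
        for p_obs, p_map in zip(points_obs, points_map):
            pred = R @ p_obs + np.array([tx, ty])
            r.append(pred - p_map)
            dth = np.array([[-s, -c], [c, -s]]) @ p_obs  # d pred / d theta
            J.append(np.column_stack([np.eye(2), dth]))
        r, J = np.concatenate(r), np.vstack(J)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]       # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```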
10. The method of any of claims 4 to 9, wherein constructing a non-linear optimization model from the optimal seed pose information and each of the perception information, and determining a positioning pose of the vehicle based on the non-linear optimization model comprises:
constructing a nonlinear optimization model according to the optimal seed pose information and each perception information;
determining a teaching track point closest to the optimal seed pose information in a preset teaching track, and determining the height coordinate of the teaching track point as the height coordinate in the nonlinear optimization model;
and determining the positioning pose of the vehicle according to a nonlinear optimization model comprising a height coordinate.
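Claim 10's height fixing reduces to a nearest-neighbour query on the taught trajectory; a minimal sketch, assuming the track is a plain list of (x, y, z) points:

```python
import math

def height_from_teaching_track(seed_xy, track):
    """Return the height (z) of the taught-trajectory point closest in the
    ground plane to the optimal seed pose; this z then fixes the height
    coordinate in the optimization. track: list of (x, y, z) points."""
    nearest = min(track, key=lambda p: math.dist(seed_xy, p[:2]))
    return nearest[2]
```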
11. The method of any one of claims 1 to 10, wherein the first camera is a forward looking camera for acquiring forward looking images and the second camera is an overhead looking camera for acquiring overhead looking images.
12. The method of claim 11, wherein there are a plurality of the overhead cameras; and acquiring an image set comprises:
acquiring a current frame image acquired by each overhead camera;
and stitching the current frame images to obtain the second image of the current frame.
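The stitching of claim 12 can be sketched as pasting each camera's ground-plane tile into a shared bird's-eye canvas; real systems warp by calibrated extrinsics and blend overlaps, whereas this hypothetical version just overwrites:

```python
import numpy as np

def stitch_overhead(tiles):
    """Naive bird's-eye stitching: `tiles` maps a canvas offset (row, col)
    to a HxW image array already warped to the ground plane. Later tiles
    simply overwrite earlier ones where they overlap."""
    h = max(r + t.shape[0] for (r, c), t in tiles.items())
    w = max(c + t.shape[1] for (r, c), t in tiles.items())
    canvas = np.zeros((h, w), dtype=float)
    for (r, c), t in tiles.items():
        canvas[r:r + t.shape[0], c:c + t.shape[1]] = t
    return canvas
```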
13. The method of any of claims 2 to 12, further comprising:
and if the running speed of the vehicle is less than a preset fifth threshold value, merging the point pairs of each frame of the first image.
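A minimal sketch of the low-speed point-pair merging in claim 13, assuming point pairs can be deduplicated by 3D landmark identity (the actual merge rule is not specified by the claim):

```python
def merge_point_pairs(frames, speed, speed_threshold=0.5):
    """When the vehicle is slower than the threshold, merge point pairs
    from consecutive frames, keeping the latest 2D observation of each
    3D landmark. frames: list of lists of ((u, v), (x, y, z)) pairs."""
    if speed >= speed_threshold:
        return frames                       # fast enough: keep frames apart
    merged = {}
    for pairs in frames:
        for uv, xyz in pairs:
            merged[xyz] = (uv, xyz)         # latest observation per landmark
    return [list(merged.values())]
```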
14. A vehicle positioning apparatus applied to automatic parking, comprising:
an acquisition unit for acquiring an image set, the image set comprising: a first image acquired by a first camera disposed on a vehicle and a second image acquired by a second camera disposed on the vehicle, wherein the first camera and the second camera serve different purposes;
the feature extraction unit is used for performing feature extraction of two-dimensional points on each frame of the first image to obtain two-dimensional point information of each frame of the first image;
the detection unit is used for detecting each frame of the second image to obtain perception information of each frame of the second image;
and the determining unit is used for determining the positioning pose of the vehicle according to the two-dimensional point information and the perception information.
15. The apparatus of claim 14, wherein the determining unit comprises:
the first determining subunit is used for determining three-dimensional point information of image points on each frame of the first image according to the map of the current position of the vehicle;
a construction subunit, configured to construct a point pair of the image points of each frame of the first image according to the three-dimensional point information and the two-dimensional point information of the image points of each frame of the first image;
and the second determining subunit is used for determining the positioning pose of the vehicle according to the point pairs of the first image of each frame and the perception information.
16. The apparatus of claim 15, wherein the second determining subunit comprises:
the first determining module is used for determining the inter-frame relative pose of the vehicle between every two adjacent frames of the first image according to the point pairs of each frame of the first image and the perception information;
the resolving module is used for carrying out N-point perspective resolving on the point pairs of each frame of the first image to obtain the camera pose of each frame of the first image;
the second determining module is used for determining, on the basis of the camera pose of the ith frame of the first image, relative pose information of the jth frame of the first image according to the inter-frame relative poses of every two adjacent frames of the first image between the ith frame and the jth frame, to obtain a group of relative pose information, wherein i and j are positive integers greater than or equal to 1 and less than or equal to Q, Q is the total number of frames of the first image, and Q is a positive integer greater than or equal to 2;
the construction module is used for constructing a nonlinear optimization model according to each group of relative pose information, each perception information and the point pairs of each frame of the first image;
a third determination module for determining a positioning pose of the vehicle based on the nonlinear optimization model.
17. The apparatus of claim 16, wherein the construction module is configured to determine coordinate system conversion information for projecting from the camera coordinate system in which the first camera is located to the map coordinate system in which the map is located, determine optimal seed pose information from the groups of relative pose information according to the coordinate system conversion information, the optimal seed pose information being the relative pose information with the highest confidence among the groups of relative pose information, construct a nonlinear optimization model according to the optimal seed pose information and each piece of perception information, and determine the positioning pose of the vehicle based on the nonlinear optimization model.
18. The apparatus of claim 17, wherein the construction module is configured to determine a number of point pairs in each set of relative pose information, determine an average reprojection error for the point pairs in each set of relative pose information based on the coordinate system transformation information, and determine a set of relative pose information with the largest number of point pairs and the smallest average reprojection error as the optimal seed pose information.
19. The apparatus of claim 17, wherein the perception information comprises: a first coordinate, in a vehicle body coordinate system of the vehicle, of a feature point of the parking-space-related information in the second image; and the construction module is configured to determine a second coordinate of the feature point in each frame of the second image in the map coordinate system in which the map is located, calculate, for each identical feature point in each frame of the second image, the degree of overlap between the first coordinate and the second coordinate, construct observation point pair information of each frame of the second image from the feature points whose degree of overlap is greater than a preset overlap threshold, and construct a nonlinear optimization model according to the observation point pair information, the optimal seed pose information and each piece of perception information.
20. The apparatus according to claim 19, wherein the constructing module is configured to determine distribution information of each observation point pair information, the distribution information characterizing a distribution ratio of feature points in each observation point pair information in an image, and if the distribution ratio is greater than a preset first threshold and the number of feature points in the observation point pair information is greater than a preset second threshold, construct a non-linear optimization model according to the optimal seed pose information, the observation point pair information, and each perception information.
21. The apparatus according to claim 20, wherein the construction module is configured to calculate an average degree of overlap of the observation point pair information if the distribution proportion is smaller than the first threshold and larger than a preset third threshold, and to construct a nonlinear optimization model according to the optimal seed pose information, the observation point pair information and each piece of perception information if the average degree of overlap is larger than a preset fourth threshold.
22. The apparatus according to claim 20, wherein the construction module is configured to determine a first image and a second image having the same temporal relationship according to each piece of perception information and the point pairs of each frame of the first image, determine a vehicle relative pose of the vehicle between each pair of first and second images having the same temporal relationship, and construct the nonlinear optimization model by taking the vehicle relative pose, the inter-frame relative poses, the observation point pair information, and the point pairs of each frame of the first image whose average reprojection error is smaller than a preset fourth threshold as residual constraint terms, and taking the optimal seed pose information as the initial value of the state quantity to be optimized.
23. The device according to any one of claims 17 to 22, wherein the constructing module is configured to construct a nonlinear optimization model according to the optimal seed pose information and each piece of perception information, determine a teaching track point closest to the optimal seed pose information in a preset teaching track, and determine a height coordinate of the teaching track point as a height coordinate in the nonlinear optimization model;
the third determination module is used for determining the positioning pose of the vehicle according to a nonlinear optimization model comprising a height coordinate.
24. The apparatus of any one of claims 14 to 23, wherein the first camera is a forward looking camera for acquiring forward looking images and the second camera is an overhead looking camera for acquiring overhead looking images.
25. The apparatus of claim 24, wherein there are a plurality of the overhead cameras; and the acquisition unit comprises:
the acquisition subunit is used for acquiring the current frame image acquired by each overhead camera;
and the stitching subunit is used for stitching the current frame images to obtain the second image of the current frame.
26. The apparatus of any of claims 15 to 25, wherein the determining unit further comprises:
and the merging subunit is configured to merge the point pairs of each frame of the first image if the driving speed of the vehicle is less than a preset fifth threshold.
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 13.
28. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1 to 13.
29. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 13.
30. A vehicle, comprising: an image acquisition apparatus and an apparatus as claimed in any one of claims 14 to 26, wherein the image acquisition apparatus is for acquiring a set of images.
CN202110780085.XA 2021-07-09 2021-07-09 Vehicle positioning method and device applied to automatic parking and vehicle Pending CN113435392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110780085.XA CN113435392A (en) 2021-07-09 2021-07-09 Vehicle positioning method and device applied to automatic parking and vehicle

Publications (1)

Publication Number Publication Date
CN113435392A true CN113435392A (en) 2021-09-24

Family

ID=77759867


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115116019A (en) * 2022-07-13 2022-09-27 阿波罗智能技术(北京)有限公司 Lane line processing method, lane line processing device, lane line processing apparatus, and storage medium
CN115959122A (en) * 2023-03-10 2023-04-14 杭州枕石智能科技有限公司 Vehicle positioning method and device in parking scene, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110082779A (en) * 2019-03-19 2019-08-02 同济大学 A kind of vehicle pose localization method and system based on 3D laser radar
CN111559314A (en) * 2020-04-27 2020-08-21 长沙立中汽车设计开发股份有限公司 Depth and image information fused 3D enhanced panoramic looking-around system and implementation method
WO2020168668A1 (en) * 2019-02-22 2020-08-27 广州小鹏汽车科技有限公司 Slam mapping method and system for vehicle
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
CN112819893A (en) * 2021-02-08 2021-05-18 北京航空航天大学 Method and device for constructing three-dimensional semantic map
CN112907659A (en) * 2019-11-19 2021-06-04 阿里巴巴集团控股有限公司 Mobile device positioning system, method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张善彬; 袁金钊; 陈辉; 王玉荣; 王杰; 屠长河: "Vehicle Self-Localization Based on Standard Road Signs" (基于标准路牌的车辆自定位), Computer Science (计算机科学), no. 07 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination