CN114663529B - External parameter determining method and device, electronic equipment and storage medium - Google Patents

External parameter determining method and device, electronic equipment and storage medium

Info

Publication number
CN114663529B
CN114663529B CN202210288815.9A
Authority
CN
China
Prior art keywords
coordinate
determining
target
coordinate system
external parameter
Prior art date
Legal status
Active
Application number
CN202210288815.9A
Other languages
Chinese (zh)
Other versions
CN114663529A (en)
Inventor
王丕阁
何叶
曾清喻
李友浩
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202210288815.9A priority Critical patent/CN114663529B/en
Publication of CN114663529A publication Critical patent/CN114663529A/en
Application granted granted Critical
Publication of CN114663529B publication Critical patent/CN114663529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an external parameter determining method, an external parameter determining device, electronic equipment, a readable storage medium and a computer program product, and relates to artificial intelligence fields such as automatic driving and autonomous parking. The specific implementation scheme is as follows: under the condition that target image acquisition equipment needs to perform external parameter calibration, a first coordinate corresponding to a target vanishing point in a camera coordinate system is determined, wherein the target vanishing point is the vanishing point corresponding to at least two specified road lines in a road image acquired by the target image acquisition equipment; and an external parameter corresponding to the target image acquisition equipment is determined by using the first coordinate and a second coordinate, wherein the second coordinate is the coordinate, in the bird's-eye-view coordinate system, of an infinity point in a bird's-eye view generated based on the road image. This scheme reduces the complexity, and improves the efficiency, of determining the external parameters corresponding to the target image acquisition equipment.

Description

External parameter determining method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of data processing, in particular to artificial intelligence fields such as automatic driving and autonomous parking, and can be applied in particular to automatic driving and autonomous parking scenarios.
Background
While an automatically driven vehicle is running, obstacle information around the vehicle needs to be perceived by means of the image acquisition equipment arranged around the vehicle, so that corresponding control decisions can be planned. For example, an autonomous vehicle needs to detect the distance between itself and an obstacle through vehicle-mounted cameras mounted around the vehicle in order to plan a corresponding obstacle avoidance strategy. The perception of obstacle information around the vehicle is therefore an important factor in the safe driving of an automatically driven vehicle.
Accurate perception of obstacle information around the vehicle often depends on the calibration of the external parameters of the image acquisition equipment. In the process of calibrating these external parameters, how to determine them has become a technical problem to be solved urgently.
Disclosure of Invention
The present disclosure provides an external parameter determining method, apparatus, electronic device, readable storage medium and computer program product to improve the determination efficiency of external parameters of a target image acquisition device.
According to an aspect of the present disclosure, there is provided an external parameter determination method, which may include the following steps:
under the condition that the target image acquisition equipment needs to perform external parameter calibration, determining a first coordinate corresponding to a target vanishing point in a camera coordinate system, wherein the target vanishing point is a vanishing point corresponding to at least two specified road routes in the road image acquired by the target image acquisition equipment;
And determining an external parameter corresponding to the target image acquisition equipment by using the first coordinate and the second coordinate, wherein the second coordinate is the coordinate of an infinity point in the aerial view coordinate system, and the aerial view is generated based on the road image.
According to a second aspect of the present disclosure, there is provided an external parameter determination apparatus, which may include:
the first coordinate determining unit is used for determining a first coordinate corresponding to a target vanishing point in a camera coordinate system under the condition that the target image acquisition equipment needs to perform external parameter calibration, wherein the target vanishing point is a vanishing point corresponding to at least two specified road routes in the road image acquired by the target image acquisition equipment;
and the external parameter determining unit is used for determining the external parameter corresponding to the target image acquisition equipment by using the first coordinate and the second coordinate, wherein the second coordinate is the coordinate of an infinity point in the aerial view in the aerial view coordinate system, and the aerial view is generated based on the road image.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program/instruction, characterized in that the computer program/instruction, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
According to this technology, a first coordinate corresponding to the target vanishing point in the camera coordinate system is determined, a second coordinate corresponding to an infinity point in the bird's-eye view is determined in the bird's-eye-view coordinate system, and then the first coordinate and the second coordinate are used to determine the external parameter corresponding to the target image acquisition equipment. Because the external parameters corresponding to the target image acquisition equipment can be determined automatically from the first and second coordinates, no calibration plate needs to be arranged manually, and no high-precision motion track corresponding to the target image acquisition equipment needs to be provided. Therefore, the complexity of determining the external parameters corresponding to the target image acquisition equipment is reduced, and the efficiency of determining them is improved; likewise, the calibration complexity of these external parameters is reduced and their calibration efficiency is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method for determining external parameters according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of a bird's eye view provided in an embodiment of the present disclosure;
FIG. 3 is a flow chart of a first coordinate determination method provided in an embodiment of the present disclosure;
FIG. 4 is a flow chart of a second coordinate determination method provided in an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method of determining extrinsic parameters provided in an embodiment of the present disclosure;
FIG. 6 is a flow chart of a method of determining pitch and yaw angles provided in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a coordinate system provided in an embodiment of the present disclosure;
FIG. 8 is a flow chart of a method of calibrating an external parameter provided in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an external parameter determination apparatus according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Referring to fig. 1, a flowchart of an external parameter determining method is provided in an embodiment of the disclosure. The method shown in fig. 1 may comprise the steps of:
step S101: under the condition that the target image acquisition equipment needs to perform external parameter calibration, determining a first coordinate corresponding to a target vanishing point in a camera coordinate system, wherein the target vanishing point is a vanishing point corresponding to at least two specified road routes in the road image acquired by the target image acquisition equipment.
Step S102: and determining an external parameter corresponding to the target image acquisition equipment by using the first coordinate and the second coordinate, wherein the second coordinate is the coordinate of an infinity point in the aerial view coordinate system, and the aerial view is generated based on the road image.
According to the external parameter determining method provided by the embodiment of the disclosure, first, a first coordinate corresponding to a target vanishing point in a camera coordinate system is determined, a second coordinate corresponding to an infinity point in a bird's-eye view coordinate system is determined, and then the first coordinate and the second coordinate are utilized to determine the external parameter corresponding to the target image acquisition equipment. The external parameters corresponding to the target image acquisition equipment can be automatically determined by utilizing the first coordinates and the second coordinates, so that a calibration plate is not required to be manually arranged, and a high-precision motion track corresponding to the target image acquisition equipment and the like is not required to be provided. Therefore, the complexity of determining the external parameters corresponding to the target image acquisition equipment is reduced, and the efficiency of determining the external parameters corresponding to the target image acquisition equipment is improved. And the calibration complexity of the external parameters corresponding to the target image acquisition equipment can be reduced, and the calibration efficiency of the external parameters of the target image acquisition equipment can be improved.
In embodiments of the present disclosure, the target image capturing apparatus is generally an image acquisition device mounted on a target vehicle, where the image acquisition device includes, but is not limited to, a camera or a video camera. In the case that the target image capturing device is a vehicle-mounted camera mounted on the target vehicle, the external parameter corresponding to the target image capturing device is the external parameter of the vehicle-mounted camera, referred to for short as the camera external parameter.
The target vehicle may be a motor vehicle or a non-motor vehicle. In practical applications, a common target vehicle is typically an autonomous vehicle, and correspondingly, the target image capturing device is typically an onboard camera mounted on the autonomous vehicle.
In the embodiments of the present disclosure, the so-called specified road line is a road line preselected for the target image capturing apparatus. For example, in the case where the target image capturing apparatus is an in-vehicle camera mounted on an autonomous vehicle, the specified road line is a lane line. As another example, in the case where the target image capturing apparatus is a camera mounted on a non-motor vehicle, the specified road line is a road route planned for the non-motor vehicle.
In the embodiment of the present disclosure, the specific type of the target vehicle, the specific type of the target image capturing apparatus, and the type of the specified lane line are not particularly limited. The following describes in detail the external parameter determination method provided in the embodiment of the present disclosure, taking a case where the target vehicle is specifically an autonomous vehicle, the target image capturing apparatus is an in-vehicle camera mounted on the autonomous vehicle, and the specified road line is a lane line as an example. For the case where the target vehicle is not an autonomous vehicle, the target image capturing apparatus is not an in-vehicle camera mounted on the autonomous vehicle, or the specified road line is not a lane line, detailed description thereof will not be given here.
A vanishing point is the imaging point, in a planar image, of the infinity point at which a set of parallel lines in three-dimensional space intersects. That is, the intersection point corresponding to a set of parallel lines in three-dimensional space is referred to as an infinity point, and the corresponding intersection point in the planar image is referred to as a vanishing point. Therefore, in order to obtain the target vanishing point from the road image, it is necessary to ensure that the acquired road image contains at least two lane lines.
In the case where the target image capturing apparatus is an in-vehicle camera mounted on an autonomous vehicle, the target image capturing apparatus may include at least one of a front-view in-vehicle camera, a rear-view in-vehicle camera, or a side-view in-vehicle camera.
In particular, in the case where the target image capturing device includes a forward-looking vehicle-mounted camera, in order to ensure that the captured road image includes at least two lane lines, it may be provided that the road image includes at least two lane lines of the current driving lane of the autonomous vehicle. In the case of a target image acquisition device comprising a side looking vehicle-mounted camera, it may be provided that the road image needs to contain at least two lane lines on the respective sides of the autonomous vehicle in order to ensure that the acquired road image contains at least two lane lines.
In the embodiment of the disclosure, the bird's eye view coordinate system is a virtual three-dimensional coordinate system preset for external parameter calibration, and the bird's eye view is an imaging result of the road image in the virtual three-dimensional coordinate system.
Specifically, a specific implementation manner of generating the bird's eye view based on the road image may include the following steps: first, a preset coordinate conversion relationship is determined. Then, a third coordinate of the pixel point in the road image in the image coordinate system is obtained, and a fourth coordinate of the pixel point in the aerial view coordinate system is determined based on the third coordinate and the coordinate conversion relation. And finally, correspondingly endowing texture information corresponding to the pixel points with fourth coordinates to generate a bird's eye view.
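The steps above can be sketched as follows, under the assumption (not stated in the patent) that the preset coordinate conversion relationship takes the form of a 3x3 homography matrix `H`; the function names and the nested-list image representation are illustrative stand-ins:

```python
# Hypothetical sketch: map each road-image pixel (third coordinate) into the
# bird's-eye-view coordinate system (fourth coordinate) via an assumed 3x3
# homography H, then transfer the pixel's texture information.

def apply_homography(H, u, v):
    """Map image pixel (u, v) to bird's-eye-view coordinates (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # homogeneous normalization

def generate_bev(road_image, H, bev_size):
    """Assign each pixel's texture to its fourth coordinate in the BEV grid."""
    h_bev, w_bev = bev_size
    bev = [[None] * w_bev for _ in range(h_bev)]
    for v, row in enumerate(road_image):          # (u, v): third coordinate
        for u, texture in enumerate(row):
            x, y = apply_homography(H, u, v)      # (x, y): fourth coordinate
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < h_bev and 0 <= xi < w_bev:
                bev[yi][xi] = texture             # texture transfer
    return bev
```

A production implementation would instead warp in the inverse direction (sampling the source image for every BEV pixel) to avoid holes; this forward form mirrors the per-pixel description in the text.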
In addition, the bird's-eye view may also be generated based on the road image in other manners, for example by means of projection.
In an embodiment of the present disclosure, the step of determining whether the target image acquisition equipment needs to perform external parameter calibration includes: firstly, determining whether the deviation between the current external parameters of the target image acquisition equipment and the standard external parameters exceeds the corresponding deviation threshold, where the standard external parameters and the deviation threshold are generally determined using prior values; and then, in the case that the deviation reaches the deviation threshold, determining that the target image acquisition equipment needs external parameter calibration, and otherwise determining that it does not.
Determining that external parameter calibration is required only when the deviation between the current external parameters and the standard external parameters reaches the deviation threshold can reduce the workload of determining the external parameters corresponding to the target image acquisition equipment.
In embodiments of the present disclosure, a specific implementation of determining whether a deviation exceeds a corresponding deviation threshold is generally as follows:
first, it is determined whether the body of the autonomous vehicle is parallel to the lane line of the traveling direction. Secondly, under the condition that the body of the automatic driving vehicle is parallel to the lane lines in the driving direction, obtaining the road image to be detected, which is acquired by the target image acquisition equipment and comprises at least two lane lines. And thirdly, generating a corresponding bird's-eye view image to be detected based on the road image to be detected, and extracting a lane line parallel to the vehicle body in the bird's-eye view image to be detected as a lane line to be detected through a trained lane line extraction model. And finally, determining whether the deviation exceeds a corresponding deviation threshold value based on whether the lane line to be detected is abnormal or not.
Specifically, a specific implementation manner of determining whether the body of the autonomous vehicle is parallel to the lane lines in the traveling direction includes: first, the current angular velocity of the autonomous vehicle is calculated through an on-board gyroscope or a wheel speed meter; then, whether the autonomous vehicle is traveling straight is determined based on the current angular velocity; finally, in the case that the autonomous vehicle is traveling straight, the body of the autonomous vehicle is determined to be parallel to the lane lines in the traveling direction, and otherwise it is determined not to be parallel to them.
The trained lane line extraction model is a neural network model trained in advance using bird's-eye-view samples and their corresponding lane line extraction results. In particular, the trained lane line extraction model may be a model based on the Edge Drawing Lines (EDLines) algorithm, or a model based on the Line Segment Detector (LSD) algorithm. In the embodiments of the present disclosure, the specific type of the lane line extraction model is not particularly limited.
In the embodiment of the present disclosure, the manner of generating the bird's-eye view to be detected based on the road image to be detected is the same as the manner of generating the bird's-eye view based on the road image, and its specific implementation is not described in detail here. For the bird's-eye view to be detected generated from the road image to be detected, reference may be made to fig. 2, a schematic diagram of a bird's-eye view provided in an embodiment of the disclosure. The bird's-eye view shown there is generated based on a road image acquired by the left-view image acquisition device, that is, by a vehicle-mounted camera mounted on the left side of the autonomous vehicle. Correspondingly, the two lane lines outlined by the rectangular frame in the figure are the lane lines located on the left side of the autonomous vehicle.
In the embodiment of the disclosure, determining whether the deviation exceeds the corresponding deviation threshold based on whether an abnormality occurs in the lane lines to be detected works as follows: if an abnormality occurs in the lane lines to be detected, it is determined that the deviation exceeds the corresponding deviation threshold; otherwise, it is determined that the deviation does not exceed it. Specifically, the manner of determining whether the lane lines to be detected are abnormal includes the following steps:
First, it is determined whether the first proportion of the edge lines, corresponding to the lane lines to be detected, that are parallel to the vehicle body is larger than the second proportion of those that are not parallel to the vehicle body. Then, in the case that the first proportion is larger than the second proportion, it is determined whether the width difference between at least two different lane lines is smaller than a preset width difference; if the width difference is not smaller than the preset width difference, it is determined that the lane lines to be detected are abnormal.
It should be noted that, when the first ratio is not greater than the second ratio, or when the first ratio is greater than the second ratio but the width difference is smaller than the preset width difference, it is determined that no abnormality occurs in the lane line to be detected.
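The two-stage abnormality check described above can be sketched as follows; the function name, the plain-number inputs and any concrete threshold values are hypothetical stand-ins for quantities that would in practice be measured from the extracted lane lines in the bird's-eye view:

```python
# Hedged sketch of the two-stage abnormality check: first compare the
# proportion of body-parallel edge lines against the non-parallel proportion,
# then compare lane-line widths against a preset width difference.

def lane_lines_abnormal(parallel_ratio, nonparallel_ratio,
                        widths, preset_width_diff):
    """Return True if the lane lines to be detected are abnormal."""
    # Stage 1: the first proportion must exceed the second proportion,
    # otherwise no abnormality is reported.
    if parallel_ratio <= nonparallel_ratio:
        return False
    # Stage 2: the width difference between different lane lines must be
    # smaller than the preset width difference; otherwise it is abnormal.
    width_diff = max(widths) - min(widths)
    return width_diff >= preset_width_diff
```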
In an embodiment of the present disclosure, a manner of determining a first coordinate corresponding to a target vanishing point in a camera coordinate system may be as shown in fig. 3, and fig. 3 is a flowchart of a first coordinate determining method provided in an embodiment of the present disclosure. The method shown in fig. 3 may comprise the steps of:
Step S301: at least two target edge lines meeting preset conditions are screened out from edge lines of the designated road line.
Step S302: and determining a linear equation corresponding to the target edge line in a camera coordinate system.
Step S303: the first coordinate is determined using a linear equation.
Determining the first coordinate using linear equations makes the first coordinate convenient to obtain. Therefore, determining the first coordinate in this manner can further improve the efficiency of determining the external parameters corresponding to the target image acquisition equipment.
In practice, the target edge lines may be screened based on the Random Sample Consensus (RANSAC) algorithm; that is, at least two target edge lines meeting the preset conditions are screened out from the edge lines of the specified road line based on RANSAC, which can quickly screen out more reasonable target edge lines.
In an embodiment of the present disclosure, based on the RANSAC algorithm, the steps of screening out at least two target edge lines meeting the preset conditions from the edge lines of the specified road line are as follows:
Firstly, before the preset iteration stop condition is met, each round of iteration of the RANSAC algorithm proceeds according to the following steps:
First, two straight lines (denoted as line 1 and line 2) are randomly determined in a camera coordinate system, and linear equations (denoted as linear equation 1 and linear equation 2) corresponding to the two straight lines are determined, respectively.
Second, in the camera coordinate system, normal vectors (denoted as normal vector 1 and normal vector 2) corresponding to the linear equation 1 and the linear equation 2 are determined, and vector cross-multiplication calculation is performed on the normal vector 1 and the normal vector 2 to obtain corresponding direction vectors (denoted as direction vector 1).
Thirdly, the coordinates corresponding to the direction vector 1 in the camera coordinate system are determined as target coordinates, and the straight line distances from coordinate points corresponding to the target coordinates to different edge lines are calculated.
Fourth, based on the straight line distance, determining candidate edge lines meeting preset conditions in different edge lines, and generating a candidate edge line set. The preset condition is generally a distance condition preset for a distance from a point to a straight line.
And then stopping iteration after the preset iteration stopping condition is met, obtaining a candidate edge line set generated by each round of iteration, and determining the number of candidate edge lines contained in the candidate edge line set.
Finally, among the candidate edge line sets containing at least two candidate edge lines, the candidate edge line set with the largest number of candidate edge lines is determined as the target edge line set, and the candidate edge lines in the target edge line set are determined as the target edge lines.
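The iteration loop above can be sketched in simplified two-dimensional form, where each edge line is represented by its equation coefficients (A, B, C), the candidate intersection point is obtained as the cross product of two coefficient vectors, and the candidate set with the most members (at least two) wins; the distance threshold, iteration count and fixed seed are illustrative assumptions, not values from the patent:

```python
import random

# Hedged RANSAC-style sketch: sample two lines, intersect them via the
# cross product of their homogeneous (A, B, C) vectors, then collect the
# edge lines whose point-to-line distance falls under a threshold.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_line_distance(p, line):
    A, B, C = line
    x, y = p
    return abs(A * x + B * y + C) / (A * A + B * B) ** 0.5

def select_target_edge_lines(edge_lines, dist_thresh=1.0, iters=100, seed=0):
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        l1, l2 = rng.sample(edge_lines, 2)
        x, y, w = cross(l1, l2)              # candidate point, homogeneous
        if abs(w) < 1e-9:
            continue                         # sampled lines were parallel
        p = (x / w, y / w)
        candidates = [l for l in edge_lines
                      if point_line_distance(p, l) < dist_thresh]
        if len(candidates) >= 2 and len(candidates) > len(best):
            best = candidates                # keep the largest candidate set
    return best
```

Three lines through (1, 1) plus one outlier would, for example, yield the three concurrent lines as the target edge line set.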
In the embodiment of the present disclosure, in the case where the target edge line is two, the step of determining the first coordinates using the linear equation is shown in fig. 4, and fig. 4 is a flowchart of a second coordinate determining method provided in the embodiment of the present disclosure. The method shown in fig. 4 may comprise the steps of:
step S401: in the camera coordinate system, a first normal vector is determined for a linear equation corresponding to the first target edge line.
Step S402: in the camera coordinate system, a second normal vector is determined for a linear equation corresponding to the second target edge line.
Step S403: and performing vector cross multiplication calculation on the first normal vector and the second normal vector to obtain a target vector.
Step S404: the coordinates of the target vector corresponding in the camera coordinate system are determined as first coordinates.
Under the condition that two target edge lines are provided, vector cross multiplication is carried out on the first normal vector and the second normal vector to obtain a target vector, and the first coordinate is determined based on the corresponding coordinate of the target vector in the camera coordinate system, so that the complexity of determining the first coordinate can be reduced, and the efficiency of determining the first coordinate can be improved. And further, the complexity of determining the external parameters can be reduced, and the efficiency of determining the external parameters can be improved.
In the embodiment of the disclosure, the process of determining the first normal vector and the second normal vector is the same: the equation coefficients of the linear equation are determined first, and then the corresponding normal vector is determined based on those equation coefficients. Specifically, in the case where the linear equation corresponding to the first target edge line is "Ax+By+C=0", the first normal vector is (A, B, C).
In addition, a specific implementation manner of determining the coordinates corresponding to the target vector in the camera coordinate system as the first coordinates may be: if the target vector is (M, N, O), the first coordinate is (M, N, O).
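Steps S401 to S404 amount to a single vector cross product; a minimal illustration (the coefficient values in the test are arbitrary):

```python
# Minimal illustration of steps S401-S404: the first coordinate is the
# cross product of the two target edge lines' normal vectors (A, B, C).

def vanishing_point_from_two_lines(n1, n2):
    """Vector cross product of the first and second normal vectors."""
    return (n1[1] * n2[2] - n1[2] * n2[1],
            n1[2] * n2[0] - n1[0] * n2[2],
            n1[0] * n2[1] - n1[1] * n2[0])
```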
In an embodiment of the present disclosure, in a case where the target edge line is a plurality of, the step of determining the first coordinate using a linear equation includes: the first coordinate is determined using the following formula:
wherein l i May be used to represent a linear equation corresponding to the i-th item target edge line,equation coefficients that may be used to represent the ith linear equation, p may be used to represent the first coordinate, and i may be an integer greater than or equal to 3.
When there are multiple target edge lines, determining the first coordinate in this manner yields a relatively accurate result.
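The formula itself is not reproduced in this text. A common least-squares formulation, assumed here, finds the unit vector p minimizing the sum of squared residuals n·p over the coefficient vectors n of the linear equations; that p is the eigenvector of M = Σ n nᵀ with the smallest eigenvalue. A pure-Python sketch using power iteration on a shifted matrix:

```python
def vanishing_point(normals, iters=1000):
    """Least-squares vanishing point: the unit vector p minimizing
    sum((n . p)**2) over the line-coefficient vectors n.

    p is the eigenvector of M = sum(n n^T) with the smallest
    eigenvalue, found by power iteration on S = t*I - M, where
    t = trace(M) bounds the largest eigenvalue of M (M is PSD)."""
    # M = sum of outer products n n^T (3x3, symmetric).
    M = [[sum(n[r] * n[c] for n in normals) for c in range(3)]
         for r in range(3)]
    t = M[0][0] + M[1][1] + M[2][2]
    # Dominant eigenvector of S is the smallest-eigenvalue
    # eigenvector of M.
    S = [[(t if r == c else 0.0) - M[r][c] for c in range(3)]
         for r in range(3)]
    p = [1.0, 1.0, 1.0]
    for _ in range(iters):
        q = [sum(S[r][c] * p[c] for c in range(3)) for r in range(3)]
        norm = sum(x * x for x in q) ** 0.5
        p = [x / norm for x in q]
    return p

# Three example edge lines through the common point (2, 3):
# x - 2 = 0, y - 3 = 0, x + y - 5 = 0.
p = vanishing_point([(1.0, 0.0, -2.0), (0.0, 1.0, -3.0), (1.0, 1.0, -5.0)])
```

With noisy coefficients the same routine returns the best-fit intersection instead of an exact one, which is why the text recommends it when there are three or more target edge lines.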
In an embodiment of the present disclosure, the step of determining the external parameter corresponding to the target image capturing device by using the first coordinate and the second coordinate may be as shown in fig. 5, and fig. 5 is a flowchart of another external parameter determining method provided in the embodiment of the present disclosure. The method shown in fig. 5 may include the steps of:
Step S501: and determining the pitch angle and the yaw angle in the external parameters by using the first coordinates and the second coordinates.
Step S502: based on the pitch angle and the yaw angle, the roll angle in the external parameters is determined using the pre-constructed equal-width constraint on the specified road lines.
In the embodiment of the disclosure, the pitch angle and the yaw angle in the external parameters are determined first, and the roll angle is then determined from the pre-constructed equal-width constraint on the specified road lines, based on the pitch angle and the yaw angle. The three rotational degrees of freedom in the external parameters can thus be determined.
After the pitch angle, yaw angle, and roll angle in the external parameters are determined, other external parameters besides these three angles may be further determined based on pre-constructed constraints.
In the embodiment of the disclosure, a specific implementation of determining the roll angle in the external parameters based on the pitch angle and the yaw angle, using the pre-constructed equal-width constraint on the specified road lines, comprises the following steps:
First, a first roll angle is constructed.
Second, the target image acquisition device is calibrated using the pitch angle, the yaw angle, and the first roll angle, and a road image is acquired with the calibrated device.
Third, it is determined whether the lane-line width difference between any two lane lines in the current road image satisfies the pre-constructed equal-width constraint on the specified road lines.
Fourth, if the lane-line width difference satisfies the constraint, the first roll angle is determined as the roll angle.
Fifth, if the lane-line width difference does not satisfy the constraint, the first roll angle is adjusted to obtain a second roll angle, the target image acquisition device is calibrated using the pitch angle, the yaw angle, and the second roll angle, and a road image is acquired with the recalibrated device.
The above steps are executed in sequence until the lane-line width difference satisfies the pre-constructed equal-width constraint, at which point the roll angle is determined.
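The iterative roll-angle search described above can be sketched as the following loop. Here `measure_width_diff` is a hypothetical stand-in for the hardware steps (calibrate the device with the candidate angles, acquire a road image, measure the lane-width difference); all names and default values are illustrative:

```python
def search_roll(pitch, yaw, measure_width_diff,
                tol=1e-3, step=0.002, max_iter=10000):
    """Iterative roll-angle search: starting from a first candidate
    roll angle, measure the lane-width difference under
    (pitch, yaw, roll) and adjust the roll until the equal-width
    constraint is met (|difference| < tol).

    measure_width_diff(pitch, yaw, roll) stands in for "calibrate the
    target image acquisition device with these angles, acquire a road
    image, and measure the width difference between lane lines"."""
    roll = 0.0                       # first candidate roll angle
    for _ in range(max_iter):
        diff = measure_width_diff(pitch, yaw, roll)
        if abs(diff) < tol:          # equal-width constraint met
            return roll
        # Adjust the candidate roll against the sign of the error.
        roll -= step if diff > 0 else -step
    raise RuntimeError("roll-angle search did not converge")
```

A fixed-step sign search is only one possible adjustment rule; a bisection or gradient step would converge faster but follows the same calibrate-measure-adjust structure.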
In an embodiment of the present disclosure, a specific implementation manner of determining a pitch angle and a yaw angle in an external parameter by using a first coordinate and a second coordinate may be shown in fig. 6, and fig. 6 is a flowchart of a method for determining a pitch angle and a yaw angle provided in an embodiment of the present disclosure. The method shown in fig. 6 may include the steps of:
Step S601: a first rotation matrix between the camera coordinate system and the bird's-eye-view coordinate system is determined using the first coordinate and the second coordinate.
Step S602: a second rotation matrix between the bird's-eye-view coordinate system and the world coordinate system is obtained.
Step S603: the pitch angle and the yaw angle in the external parameters are determined based on the first rotation matrix and the second rotation matrix.
In the embodiment of the disclosure, based on the first rotation matrix and the second rotation matrix, the pitch angle and the yaw angle in the external parameters can be simply and directly determined.
In the embodiment of the disclosure, the bird's-eye-view coordinate system is a virtual three-dimensional coordinate system defined relative to the world coordinate system, with the following constraints between the two: the XY planes of the bird's-eye-view coordinate system and the world coordinate system are parallel, the X axis of the bird's-eye-view coordinate system is parallel to the Y axis of the world coordinate system, the Z axis of the world coordinate system points vertically upward, and the Z axis of the bird's-eye-view coordinate system points vertically downward.
Based on this constraint relationship between the bird's-eye-view coordinate system and the world coordinate system, the second rotation matrix can be determined once the bird's-eye-view coordinate system is constructed.
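Under the stated axis constraints, one consistent choice of the second rotation matrix is the constant matrix below. The sign of the X axis is an assumption (the constraint only requires the axes to be parallel, so X_bev is taken here as the positive Y direction of the world frame):

```python
# Second rotation matrix: maps bird's-eye-view coordinates to world
# coordinates.  Each row gives one world component in terms of the
# bird's-eye-view components.
R_world_from_bev = [
    [0.0, 1.0,  0.0],   # X_world = Y_bev
    [1.0, 0.0,  0.0],   # Y_world = X_bev  (X_bev parallel to Y_world)
    [0.0, 0.0, -1.0],   # Z_world = -Z_bev (world Z up, BEV Z down)
]

def apply(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[r][c] * v[c] for c in range(3)) for r in range(3)]
```

The determinant of this matrix is +1, so it is a proper rotation, and it is fixed once and for all by the coordinate-system construction, which is why the text treats the second rotation matrix as something simply "obtained".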
In the embodiment of the present disclosure, the origins of the bird's-eye-view coordinate system and the camera coordinate system may further be set to coincide, in which case the three translational degrees of freedom in the external parameters are 0. Only the three rotational degrees of freedom then need to be determined.
In addition, when the target vehicle is an autonomous vehicle or other automobile, the vehicle coordinate system may be used as the world coordinate system. Referring to fig. 7, fig. 7 is a schematic diagram of the coordinate systems provided in an embodiment of the disclosure. The coordinate systems shown in fig. 7 include: a vehicle coordinate system, and a camera coordinate system and a bird's-eye-view coordinate system constructed for a forward-facing vehicle-mounted camera. In the vehicle coordinate system, the X axis is parallel to the ground and points toward the front of the vehicle, the Z axis passes upward through the center of the rear wheels, and the Y axis points to the driver's left.
Wherein the X, Y, and Z axes of the vehicle coordinate system are denoted X1, Y1, and Z1 in fig. 7, respectively; the X, Y, and Z axes of the camera coordinate system are denoted X2, Y2, and Z2, respectively; and the X, Y, and Z axes of the bird's-eye-view coordinate system are denoted X3, Y3, and Z3, respectively.
In an embodiment of the present disclosure, the formula for determining the first rotation matrix between the camera coordinate system and the bird's-eye-view coordinate system using the first coordinate and the second coordinate is as follows:
p_r = R_rc · p_c
wherein p_r represents the first coordinate, R_rc represents the first rotation matrix, p_c represents the second coordinate, and p_c = (0, 1, 0).
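A single vector correspondence p_r = R_rc p_c determines the rotation only up to a spin about p_r, which matches the text: this step fixes the pitch and yaw, while the roll is recovered separately from the equal-width constraint. One way to construct such a rotation, sketched here under the assumption that the minimal (Rodrigues) rotation between the two unit vectors is used:

```python
def rotation_between(a, b):
    """Minimal (Rodrigues) rotation R such that R @ a = b for unit
    3-vectors a and b: R = I + K + K^2 * (1 - cos) / sin^2, where K
    is the skew-symmetric matrix of v = a x b.  (The antiparallel
    case a = -b is omitted for brevity.)"""
    v = (a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0])
    cos = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    sin2 = v[0] * v[0] + v[1] * v[1] + v[2] * v[2]
    if sin2 < 1e-12:                 # vectors already aligned
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    K = [[0.0, -v[2], v[1]],
         [v[2], 0.0, -v[0]],
         [-v[1], v[0], 0.0]]
    f = (1.0 - cos) / sin2
    return [[(1.0 if r == c else 0.0) + K[r][c]
             + f * sum(K[r][k] * K[k][c] for k in range(3))
             for c in range(3)] for r in range(3)]

# p_c = (0, 1, 0); p_r is an example unit vanishing-point direction,
# here (2, 3, 1) normalized (illustrative values only).
m = 14.0 ** 0.5
p_r = (2.0 / m, 3.0 / m, 1.0 / m)
R_rc = rotation_between((0.0, 1.0, 0.0), p_r)
```

Any rotation of this R_rc about p_r also satisfies the formula, so only two of its three Euler angles are meaningful here.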
In the embodiment of the disclosure, after the external parameters are determined, various optimization factors may be further constructed to iteratively optimize them; when the optimization converges, the converged external parameters are determined as the final external parameters.
The optimization factors include, but are not limited to: the straight lines corresponding to the lane lines in the vehicle-body coordinate system have no longitudinal inclination, have equal widths, or are parallel to each other. The iterative method used for this optimization is nonlinear least squares, e.g., the Levenberg-Marquardt method.
In addition, since the motion state of the target vehicle is uncertain while the external parameters are being determined, the determined external parameters often vary. Therefore, in the embodiment of the present disclosure, a plurality of external parameters may be determined as candidate external parameters; assuming that the candidates follow a normal distribution, the candidate closest to the mean of that distribution is selected as the target external parameter and determined as the final external parameter of the target image acquisition device, to be used for external parameter calibration.
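The candidate-selection step can be sketched as follows; representing each candidate external parameter as a tuple of angles is an illustrative assumption:

```python
def select_final_extrinsic(candidates):
    """Assume the candidate external parameters (here, tuples of
    angles) follow a normal distribution; pick the candidate closest
    to the sample mean as the final external parameter."""
    n = len(candidates)
    dim = len(candidates[0])
    mean = [sum(c[d] for c in candidates) / n for d in range(dim)]

    def dist2(c):
        # Squared Euclidean distance from the sample mean.
        return sum((c[d] - mean[d]) ** 2 for d in range(dim))

    return min(candidates, key=dist2)
```

Taking the candidate nearest the mean, rather than the mean itself, guarantees that the final external parameter is one that was actually estimated from a real image.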
In the embodiment of the disclosure, the complete process of determining the external parameters and performing external parameter calibration is shown in fig. 8; fig. 8 is a flowchart of an external parameter calibration method provided in an embodiment of the disclosure. The method shown in fig. 8 may include the following steps:
Step S801: it is determined whether the vehicle body is parallel to the lane lines. Specifically, the current angular velocity of the autonomous vehicle is first obtained from an on-board gyroscope or wheel-speed sensor; whether the autonomous vehicle is traveling straight is then determined based on that angular velocity.
Step S802: and obtaining a road image to be detected under the condition that the vehicle body is parallel to the lane line. Specifically, under the condition that the body of the automatic driving vehicle is parallel to the lane lines in the driving direction, the road image to be detected, which is acquired by the target image acquisition device and contains at least two lane lines, is obtained.
Step S803: and generating a bird's eye view to be detected. Specifically, based on the road image to be detected, a corresponding bird's eye view to be detected is generated.
Step S804: and extracting the lane line to be detected. Specifically, a lane line parallel to a vehicle body in the bird's eye view to be detected is extracted as a lane line to be detected through a trained lane line extraction model.
Step S805: it is determined whether external parameter calibration is needed. Specifically, based on whether the lane line to be detected is abnormal, it is determined whether the deviation between the current external parameters of the target image acquisition device and the preset standard external parameters exceeds the corresponding deviation threshold, and thus whether external parameter calibration is needed.
Step S806: and determining the external parameters under the condition that the external parameters are required to be calibrated. Specifically, a first coordinate corresponding to the target vanishing point in the camera coordinate system is determined, and an external parameter corresponding to the target image acquisition device is determined by using the first coordinate and the second coordinate.
Step S807: and carrying out iterative optimization on the external parameters. Specifically, after determining the external parameters, a plurality of optimization factors are constructed to iteratively optimize the external parameters, and in the case of convergence of the external parameters, the converged external parameters are determined as final external parameters.
Step S808: and (5) performing external parameter calibration. Specifically, the external parameter calibration is performed on the target image acquisition equipment based on the final external parameter.
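The overall flow of steps S801 through S808 can be sketched as a skeleton in which every stage is a placeholder callable; all names here are illustrative, not from the disclosure:

```python
def calibrate_extrinsics(capture_image, is_body_parallel_to_lanes,
                         to_birds_eye_view, extract_lane_lines,
                         needs_calibration, determine_extrinsics,
                         refine_extrinsics, apply_calibration):
    """Skeleton of steps S801-S808; every argument is a placeholder
    callable standing in for the corresponding stage of the text."""
    if not is_body_parallel_to_lanes():                  # S801
        return None
    road_image = capture_image()                         # S802
    bev = to_birds_eye_view(road_image)                  # S803
    lanes = extract_lane_lines(bev)                      # S804
    if not needs_calibration(lanes):                     # S805
        return None
    extrinsics = determine_extrinsics(road_image, lanes) # S806
    extrinsics = refine_extrinsics(extrinsics)           # S807
    apply_calibration(extrinsics)                        # S808
    return extrinsics
```

The early returns reflect the flowchart: calibration is only attempted while the vehicle drives parallel to the lane lines, and only when the deviation check of step S805 says it is needed.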
As shown in fig. 9, an embodiment of the present disclosure provides an external parameter determining apparatus, comprising:
the first coordinate determining unit 901 is configured to determine, when the target image capturing device needs to perform external parameter calibration, a first coordinate corresponding to a target vanishing point in a camera coordinate system, where the target vanishing point is a vanishing point corresponding to at least two specified road lines in the road image captured by the target image capturing device;
and an external parameter determining unit 902, configured to determine an external parameter corresponding to the target image capturing device using a first coordinate and a second coordinate, where the second coordinate is a coordinate of an infinity point in a bird's-eye view image in the bird's-eye view coordinate system, and the bird's-eye view image is generated based on the road image.
In one embodiment, the first coordinate determination unit 901 may include:
the target edge line determining subunit is used for screening out at least two target edge lines meeting preset conditions from the edge lines of the specified road line based on a random sample consensus (RANSAC) algorithm;
A linear equation determining subunit, configured to determine a linear equation corresponding to the target edge line in the camera coordinate system;
the coordinate determination first subunit is configured to determine the first coordinate using a linear equation.
In one embodiment, the coordinate determination first subunit may include:
the first normal vector determining subunit is used for determining a first normal vector according to a linear equation corresponding to the first target edge line in a camera coordinate system under the condition that the target edge lines are two;
a second normal vector determination subunit, configured to determine a second normal vector for a linear equation corresponding to a second target edge line in a camera coordinate system;
the target vector calculation subunit is used for carrying out vector cross multiplication calculation on the first normal vector and the second normal vector to obtain a target vector;
and the coordinate determination second subunit is used for determining the corresponding coordinate of the target vector in the camera coordinate system as the first coordinate.
In one embodiment, the first coordinate determination unit 901 may include:
the coordinate determining third subunit is configured to determine, when the target edge lines are multiple, the first coordinate using the following formula:
wherein l_i represents the linear equation corresponding to the i-th target edge line, the associated coefficients represent the equation coefficients of the i-th linear equation, p represents the first coordinate, and i is an integer greater than or equal to 3.
In one embodiment, the external parameter determining unit 902 may include:
the external parameter determining first subunit is used for determining a pitch angle and a yaw angle in the external parameter by using the first coordinate and the second coordinate;
the external parameter determining second subunit is used for determining the turning angle in the external parameter by utilizing the pre-constructed equal-width constraint of the specified road line based on the pitch angle and the yaw angle.
In one embodiment, the external parameter determining first subunit may include:
a first rotation matrix determining subunit configured to determine a first rotation matrix between the camera coordinate system and the bird's-eye coordinate system using the first coordinate and the second coordinate;
a second rotation matrix obtaining subunit configured to obtain a second rotation matrix between the bird's eye coordinate system and the world coordinate system;
the external parameter determining third subunit is configured to determine a pitch angle and a yaw angle in the external parameter based on the first rotation matrix and the second rotation matrix.
In the technical solutions of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 10 shows a schematic block diagram of an example electronic device 1000 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the respective methods and processes described above, such as the external parameter determination method. For example, in some embodiments, the external parameter determination method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the above-described external parameter determination method may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the external parameter determination method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable external parameter determination apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A method of determining an external parameter, comprising:
obtaining a road image to be detected, which is acquired by a target image acquisition device and comprises at least two lane lines, generating a corresponding bird's-eye view image to be detected based on the road image to be detected, extracting lane lines parallel to a vehicle body in the bird's-eye view image to be detected as lane lines to be detected through a trained lane line extraction model, and determining first coordinates corresponding to target vanishing points in a camera coordinate system under the condition that the deviation between the current external parameters of the target image acquisition device and the preset standard external parameters reaches a deviation threshold value based on the lane lines to be detected, wherein the target vanishing points are vanishing points corresponding to at least two specified road lines in the road image acquired by the target image acquisition device;
Determining an external parameter corresponding to the target image acquisition device by using the first coordinate and a second coordinate, wherein the second coordinate is a coordinate of an infinity point in a bird's-eye view in the bird's-eye view coordinate system, and the bird's-eye view is generated based on the road image;
the aerial view coordinate system is a preset virtual three-dimensional coordinate system, and the aerial view is an imaging result of the road image in the virtual three-dimensional coordinate system;
wherein the method further comprises: determining whether a width difference between at least two different lane lines is smaller than a preset width difference when a first proportion of the edge lines corresponding to the lane line to be detected that are parallel to the vehicle body is greater than a second proportion of the edge lines that are not parallel to the vehicle body, and determining that the lane line to be detected is abnormal if the width difference is not smaller than the preset width difference; and if the lane line to be detected is abnormal, determining that the deviation between the current external parameters of the target image acquisition device and the preset standard external parameters reaches the deviation threshold value.
2. The method of claim 1, wherein the determining the corresponding first coordinates of the target vanishing point in the camera coordinate system comprises:
Screening at least two target edge lines meeting preset conditions from the edge lines of the specified road line;
determining a linear equation corresponding to the target edge line in a camera coordinate system;
the first coordinate is determined using the linear equation.
3. The method of claim 2, wherein, in the case where the target edge line is two, the determining the first coordinate using the linear equation comprises:
in the camera coordinate system, determining a first normal vector according to a linear equation corresponding to a first target edge line;
determining a second normal vector according to a linear equation corresponding to a second target edge line in the camera coordinate system;
performing vector cross multiplication calculation on the first normal vector and the second normal vector to obtain a target vector;
and determining the corresponding coordinate of the target vector in the camera coordinate system as the first coordinate.
4. A method according to claim 2 or 3, wherein, in the case where the target edge line is a plurality, the determining the first coordinate using the linear equation comprises:
the first coordinate is determined using the following formula:
wherein l_i represents the linear equation corresponding to the i-th target edge line, the associated coefficients represent the equation coefficients of the i-th linear equation, p represents the first coordinate, and i is an integer greater than or equal to 3.
5. A method according to any one of claims 1 to 3, wherein said determining the external parameters corresponding to the target image capturing device using the first coordinates and the second coordinates comprises:
determining a pitch angle and a yaw angle in the external parameter by using the first coordinate and the second coordinate;
and determining the turning angle in the external parameter by utilizing the pre-constructed equal width constraint of the specified road line based on the pitch angle and the yaw angle.
6. The method of claim 5, wherein the determining pitch and yaw angles in the external parameter using the first and second coordinates comprises:
determining a first rotation matrix between the camera coordinate system and the aerial view coordinate system by using the first coordinate and the second coordinate;
obtaining a second rotation matrix between the aerial view coordinate system and a world coordinate system;
and determining a pitch angle and a yaw angle in the external parameters based on the first rotation matrix and the second rotation matrix.
7. An external parameter determination apparatus comprising:
the first coordinate determining unit is used for obtaining a road image to be detected, which is acquired by the target image acquisition equipment and comprises at least two lane lines, generating a corresponding bird's-eye view image to be detected based on the road image to be detected, extracting lane lines parallel to a vehicle body in the bird's-eye view image to be detected as lane lines to be detected through a trained lane line extraction model, and determining a first coordinate corresponding to a target vanishing point in a camera coordinate system under the condition that the deviation between the current external parameters of the target image acquisition equipment and the preset standard external parameters reaches a deviation threshold value based on the lane lines to be detected, wherein the target vanishing point is a vanishing point corresponding to at least two specified lane lines in the road image acquired by the target image acquisition equipment;
an external parameter determining unit, configured to determine an external parameter corresponding to the target image capturing device using the first coordinate and a second coordinate, where the second coordinate is a coordinate of an infinity point in a bird's-eye view image in a bird's-eye view coordinate system, and the bird's-eye view image is generated based on the road image;
the aerial view coordinate system is a preset virtual three-dimensional coordinate system, and the aerial view is an imaging result of the road image in the virtual three-dimensional coordinate system;
The first coordinate determining unit is configured to determine whether a width difference between at least two different lane lines is smaller than a preset width difference when a first ratio of the edge lines corresponding to the lane line to be detected that are parallel to the vehicle body is greater than a second ratio of the edge lines that are not parallel to the vehicle body, and determine that the lane line to be detected is abnormal if the width difference is not smaller than the preset width difference; and if the lane line to be detected is abnormal, determine that the deviation between the current external parameters of the target image acquisition device and the preset standard external parameters reaches the deviation threshold value.
8. The apparatus of claim 7, wherein the first coordinate determination unit comprises:
a target edge line determining subunit, configured to screen at least two target edge lines meeting preset conditions from edge lines of the specified road line;
a linear equation determining subunit, configured to determine a linear equation corresponding to the target edge line in a camera coordinate system;
and a coordinate determination first subunit configured to determine the first coordinate using the linear equation.
9. The apparatus of claim 8, wherein the coordinate determination first subunit comprises:
a first normal vector determining subunit, configured to determine, in the case that there are two target edge lines, a first normal vector for the linear equation corresponding to the first target edge line in the camera coordinate system;
a second normal vector determining subunit, configured to determine a second normal vector for a linear equation corresponding to a second target edge line in the camera coordinate system;
a target vector calculating subunit, configured to perform a vector cross product on the first normal vector and the second normal vector to obtain a target vector;
and the coordinate determination second subunit is used for determining the coordinate corresponding to the target vector in the camera coordinate system as the first coordinate.
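The cross-product construction in claims 8–9 is the standard homogeneous-coordinates intersection of two image lines: each line a·x + b·y + c = 0 is represented by its coefficient (normal) vector, and the cross product of two such vectors is their common point. A minimal sketch with illustrative line coefficients (the helper name and the example lines are not from the patent):

```python
import numpy as np

def vanishing_point_from_two_lines(l1, l2):
    # Each image line a*x + b*y + c = 0 is stored as its normal/coefficient
    # vector (a, b, c); the cross product of two such vectors is the
    # homogeneous intersection point of the two lines.
    p = np.cross(np.asarray(l1, dtype=float), np.asarray(l2, dtype=float))
    if abs(p[2]) < 1e-12:
        return p                # intersection at infinity in the image
    return p / p[2]             # normalize to (x, y, 1)

# Two converging lane edges: y = 0.5*x and y = -0.5*x + 100 meet at (100, 50).
vp = vanishing_point_from_two_lines((0.5, -1.0, 0.0), (-0.5, -1.0, 100.0))
```

For physically parallel lane edges, this intersection is exactly the vanishing point the claim's target vector encodes.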
10. The apparatus according to claim 8 or 9, wherein the first coordinate determination unit includes:
a coordinate determining third subunit, configured to determine, in the case that there are a plurality of target edge lines, the first coordinate using the following formula:
p = argmin_p Σ_i (l_i^T p)^2, subject to ||p|| = 1,
wherein l_i is used for representing the linear equation corresponding to the i-th target edge line, the components of l_i are the equation coefficients of the i-th linear equation, p is used for representing the first coordinate, and i is an integer greater than or equal to 3.
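With three or more target edge lines, a common way to realize the described least-squares determination of the first coordinate is via the singular value decomposition of the stacked line-coefficient matrix. The sketch below assumes that formulation; it is one standard choice, not necessarily the exact formula of the claim, and the example lines are illustrative:

```python
import numpy as np

def vanishing_point_least_squares(lines):
    # Stack the line coefficient vectors l_i as rows of L; the vanishing
    # point p should satisfy l_i . p = 0 for every i, so take the right
    # singular vector of L with the smallest singular value (the
    # minimizer of ||L @ p|| under ||p|| = 1).
    L = np.asarray(lines, dtype=float)
    _, _, vt = np.linalg.svd(L)
    p = vt[-1]
    return p / p[2] if abs(p[2]) > 1e-12 else p

# Three edge lines all passing through (100, 50):
lines = [(0.5, -1.0, 0.0), (-0.5, -1.0, 100.0), (0.25, -1.0, 25.0)]
vp = vanishing_point_least_squares(lines)
```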
11. The apparatus according to any one of claims 7 to 9, wherein the external parameter determination unit comprises:
an external parameter determining first subunit, configured to determine a pitch angle and a yaw angle in the external parameter using the first coordinate and the second coordinate;
and an external parameter determining second subunit, configured to determine a roll angle in the external parameters by using a pre-constructed equal-width constraint of the designated road line, based on the pitch angle and the yaw angle.
12. The apparatus of claim 11, wherein the external parameter determining first subunit comprises:
a first rotation matrix determining subunit configured to determine a first rotation matrix between the camera coordinate system and the bird's-eye view coordinate system using the first coordinate and the second coordinate;
a second rotation matrix obtaining subunit, configured to obtain a second rotation matrix between the bird's-eye view coordinate system and the world coordinate system;
and the external parameter determining third subunit is used for determining a pitch angle and a yaw angle in the external parameter based on the first rotation matrix and the second rotation matrix.
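Once the vanishing-point ray is known in camera coordinates, pitch and yaw can be read off from its deviation from the optical axis. The sketch below assumes a camera axis convention of x right, y down, z forward, which the claims do not fix, so the function name and the signs are illustrative:

```python
import math

def pitch_yaw_from_vanishing_direction(dx, dy, dz):
    # (dx, dy, dz) is the back-projected vanishing-point ray, e.g.
    # K^-1 @ (u, v, 1). Axis convention assumed: x right, y down,
    # z forward; the ray's deviation from +z encodes the two angles.
    yaw = math.atan2(dx, dz)                     # left/right deviation
    pitch = math.atan2(-dy, math.hypot(dx, dz))  # up/down deviation
    return pitch, yaw

# A ray along the optical axis means zero pitch and zero yaw.
pitch, yaw = pitch_yaw_from_vanishing_direction(0.0, 0.0, 1.0)
```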
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
CN202210288815.9A 2022-03-22 2022-03-22 External parameter determining method and device, electronic equipment and storage medium Active CN114663529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210288815.9A CN114663529B (en) 2022-03-22 2022-03-22 External parameter determining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210288815.9A CN114663529B (en) 2022-03-22 2022-03-22 External parameter determining method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114663529A CN114663529A (en) 2022-06-24
CN114663529B true CN114663529B (en) 2023-08-01

Family

ID=82030468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210288815.9A Active CN114663529B (en) 2022-03-22 2022-03-22 External parameter determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114663529B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294234B (en) * 2022-10-09 2023-03-24 小米汽车科技有限公司 Image generation method and device, electronic equipment and storage medium
CN116630436B (en) * 2023-05-17 2024-01-12 禾多科技(北京)有限公司 Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2541498B1 (en) * 2011-06-30 2017-09-06 Harman Becker Automotive Systems GmbH Method of determining extrinsic parameters of a vehicle vision system and vehicle vision system
JP6326624B2 (en) * 2014-04-02 2018-05-23 パナソニックIpマネジメント株式会社 CALIBRATION DEVICE, CALIBRATION METHOD, CAMERA HAVING CALIBRATION FUNCTION, AND PROGRAM
CN109086650B (en) * 2017-06-14 2022-04-12 现代摩比斯株式会社 Calibration method and calibration apparatus
KR101969030B1 (en) * 2017-09-13 2019-04-16 (주)베이다스 Method and apparatus for providing camera calibration for vehicles
CN108875603B (en) * 2018-05-31 2021-06-04 上海商汤智能科技有限公司 Intelligent driving control method and device based on lane line and electronic equipment
CN108898638A (en) * 2018-06-27 2018-11-27 江苏大学 A kind of on-line automatic scaling method of vehicle-mounted camera
CN111220143B (en) * 2018-11-26 2021-12-17 北京图森智途科技有限公司 Method and device for determining position and posture of imaging equipment
CN111627066B (en) * 2019-02-27 2023-07-18 南京地平线机器人技术有限公司 External parameter adjusting method and device for camera
CN111401150B (en) * 2020-02-27 2023-06-13 江苏大学 Multi-lane line detection method based on example segmentation and self-adaptive transformation algorithm
CN112509054B (en) * 2020-07-20 2024-05-17 重庆兰德适普信息科技有限公司 Camera external parameter dynamic calibration method
CN112258582B (en) * 2020-10-12 2022-11-08 武汉中海庭数据技术有限公司 Camera attitude calibration method and device based on road scene recognition
CN113850867A (en) * 2021-08-20 2021-12-28 上海商汤临港智能科技有限公司 Camera parameter calibration method, camera parameter calibration device control method, camera parameter calibration device control device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Vision-Based Vehicle Environment Perception Systems; He Wending; China Master's Theses Full-text Database, Engineering Science and Technology II; full text *

Also Published As

Publication number Publication date
CN114663529A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN114663529B (en) External parameter determining method and device, electronic equipment and storage medium
US20190164310A1 (en) Camera registration in a multi-camera system
CN111274343A (en) Vehicle positioning method and device, electronic equipment and storage medium
EP3968266B1 (en) Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
CN112560680A (en) Lane line processing method and device, electronic device and storage medium
CN111753739A (en) Object detection method, device, equipment and storage medium
CN112184914A (en) Method and device for determining three-dimensional position of target object and road side equipment
CN113706704A (en) Method and equipment for planning route based on high-precision map and automatic driving vehicle
CN114140759A (en) High-precision map lane line position determining method and device and automatic driving vehicle
CN113435392A (en) Vehicle positioning method and device applied to automatic parking and vehicle
CN112668505A (en) Three-dimensional perception information acquisition method of external parameters based on road side camera and road side equipment
CN112560769A (en) Method for detecting obstacle, electronic device, road side device and cloud control platform
CN114724116B (en) Vehicle traffic information generation method, device, equipment and computer readable medium
CN113033456B (en) Method and device for determining grounding point of vehicle wheel, road side equipment and cloud control platform
CN116215517A (en) Collision detection method, device, apparatus, storage medium, and autonomous vehicle
CN114266876A (en) Positioning method, visual map generation method and device
CN113516013A (en) Target detection method and device, electronic equipment, road side equipment and cloud control platform
CN113252061A (en) Method and device for determining relationship information between vehicles, electronic equipment and storage medium
JP7425169B2 (en) Image processing method, device, electronic device, storage medium and computer program
CN115294234B (en) Image generation method and device, electronic equipment and storage medium
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN111784659A (en) Image detection method and device, electronic equipment and storage medium
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN114565681B (en) Camera calibration method, device, equipment, medium and product
CN115588185B (en) Driving route generation method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant