CN116485906B - Parameter processing method, device, equipment and storage medium - Google Patents

Parameter processing method, device, equipment and storage medium

Info

Publication number
CN116485906B
CN116485906B (application CN202310342111.XA)
Authority
CN
China
Prior art keywords
target
acquisition
acquisition component
preset
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310342111.XA
Other languages
Chinese (zh)
Other versions
CN116485906A (en)
Inventor
王佳龙
王丕阁
邵晓东
刘奇胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202310342111.XA priority Critical patent/CN116485906B/en
Publication of CN116485906A publication Critical patent/CN116485906A/en
Application granted granted Critical
Publication of CN116485906B publication Critical patent/CN116485906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The disclosure provides a parameter processing method, device, equipment and storage medium, relating to the field of data processing and in particular to artificial intelligence, automatic driving and autonomous parking. The specific implementation scheme is as follows: determining a target acquisition component to be processed, wherein the target acquisition component is one of a plurality of preset acquisition components; determining, from the plurality of preset acquisition components, at least two first acquisition components required for calibrating the external parameters of the target acquisition component; obtaining, based at least on a first external parameter of a first acquisition component among the at least two first acquisition components and a current external parameter of the target acquisition component, photometric error information corresponding to that first acquisition component and the target acquisition component; and obtaining the target external parameters of the target acquisition component based on the photometric error information corresponding to the first acquisition components and the target acquisition component. In this way, a high-precision external-parameter calibration result can be obtained quickly, improving calibration efficiency.

Description

Parameter processing method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of data processing technology, and in particular to artificial intelligence, automatic driving, and autonomous parking.
Background
The safe and stable operation of a panoramic-image-based parking assistance system or automatic driving system depends on the external parameters of the cameras in the panoramic image system. If the camera external parameters are calibrated inaccurately, the panoramic image may be mis-stitched or misaligned, which degrades the user experience and may even cause safety problems in the parking assistance and automatic driving systems.
Disclosure of Invention
The disclosure provides a parameter processing method, device, equipment and storage medium.
According to an aspect of the present disclosure, there is provided a parameter processing method, including:
determining a target acquisition component to be processed, wherein the target acquisition component is one of a plurality of preset acquisition components;
determining at least two first acquisition assemblies required to be used for calibrating external parameters of the target acquisition assembly from the plurality of preset acquisition assemblies;
obtaining photometric error information corresponding to a first acquisition component and the target acquisition component based at least on a first external parameter of that first acquisition component among the at least two first acquisition components and a current external parameter of the target acquisition component, wherein the photometric error information is obtained based on a difference in illumination intensity over a common-view area of the target acquisition component and the first acquisition component;
and obtaining the target external parameters of the target acquisition component based on the photometric error information corresponding to the first acquisition component and the target acquisition component.
According to another aspect of the present disclosure, there is provided a parameter processing apparatus including:
the acquisition unit is configured to determine a target acquisition component to be processed, wherein the target acquisition component is one of a plurality of preset acquisition components;
the processing unit is configured to: determine, from the plurality of preset acquisition components, at least two first acquisition components required for calibrating the external parameters of the target acquisition component; obtain photometric error information corresponding to a first acquisition component and the target acquisition component based at least on a first external parameter of that first acquisition component among the at least two first acquisition components and a current external parameter of the target acquisition component, wherein the photometric error information is obtained based on a difference in illumination intensity over a common-view area of the target acquisition component and the first acquisition component; and obtain the target external parameters of the target acquisition component based on the photometric error information corresponding to the first acquisition component and the target acquisition component.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
In this way, the scheme of the present disclosure obtains the target external parameters of the target acquisition component by locating at least two first acquisition components required for calibrating its external parameters and using the photometric error information corresponding to those first acquisition components and the target acquisition component. A high-precision external-parameter calibration result can thus be obtained quickly, improving calibration efficiency. Moreover, the scheme of the present disclosure places no restriction on the calibration site, which improves convenience and the user experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart diagram of a parameter processing method according to an embodiment of the present application;
FIG. 2 (a) is a schematic illustration of a common view region according to an embodiment of the present application;
FIGS. 2 (b) and 2 (c) are schematic diagrams of images obtained after a camera acquires a common-view region according to an embodiment of the present application;
FIGS. 3 (a) to 3 (c) are schematic diagrams of calibration scenarios of a target vehicle with 4 cameras according to an embodiment of the present application;
FIGS. 4 (a) to 4 (c) are schematic diagrams of calibration scenarios of a target vehicle with 6 cameras according to an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram II of a parameter processing method according to an embodiment of the present application;
FIGS. 6 (a) to 6 (c) are schematic diagrams illustrating pixel sampling effects according to an embodiment of the present application;
FIG. 7 (a) is a specific example diagram of the image A shown in FIG. 2 (b);
FIG. 7 (B) is a specific example diagram of the image B shown in FIG. 2 (c);
FIG. 8 is a flow chart of a parameter processing method according to an embodiment of the present application;
FIGS. 9 (a) and 9 (b) are graphs comparing effects of external parameters of a camera before and after calibration in an example according to an embodiment of the present application;
FIGS. 10 (a) and 10 (b) are graphs comparing effects of external parameters of a camera before and after calibration in another example according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a parameter processing apparatus according to an embodiment of the present application;
FIG. 12 is a block diagram of an electronic device for implementing a parameter processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the set consisting of A, B and C. The terms "first" and "second" distinguish between multiple similar technical terms and imply neither an order nor a limit of two; for example, a first feature and a second feature denote two types/classes of features, where there may be one or more first features and one or more second features.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be appreciated by one skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
A panoramic image is obtained by stitching images captured by cameras arranged around the vehicle body, allowing the driver to view the full 360-degree environment surrounding the vehicle. The panoramic image reflects 360-degree environmental information around the vehicle body, offers an ultra-wide viewing angle without stitching seams, makes it easy for the driver to observe blind spots around the vehicle, helps the driver drive more intuitively and safely, and can provide support for a parking assistance system.
Panoramic images are also a foundation of automatic driving: in an automatic driving system, a perception component can output surround-view semantic information from the panoramic image, such as key information strongly related to automatic driving, including lane lines, ground arrows, parking spaces, zebra-crossing edges, and obstacles.
The safe and stable operation of panoramic-image-based parking assistance and automatic driving systems depends on the calibration accuracy of the camera external parameters in the panoramic image system. If the camera external parameters are calibrated inaccurately, the panoramic image may be mis-stitched or misaligned, which degrades the user experience and may even cause safety problems in the parking assistance and automatic driving systems.
Camera calibration for existing panoramic image systems is carried out on the vehicle production line. Existing schemes impose strict requirements on the site: a calibration room usually has to be built, high-precision calibration plates are arranged in it, and the coordinates of the calibration plates in the calibration-room coordinate system are measured with a total station. During calibration, a centering device must move the vehicle to the origin of the calibration-room coordinate system so that calibration can be performed against the calibration plates. Clearly, existing schemes have many external dependencies; the centering device, total station, and high-precision calibration plates are expensive and costly to maintain, and returning a vehicle to the production line for recalibration is also costly.
In view of the above, the scheme of the present disclosure provides a parameter processing method for calibrating camera external parameters. The scheme effectively reduces the site requirements of camera external-parameter calibration, allows the camera external parameters to be calibrated conveniently in after-sales scenarios without returning the vehicle to the production line, and yields calibration results whose panoramic-image stitching is accurate and free of misalignment.
Specifically, FIG. 1 is a schematic flowchart of a parameter processing method according to an embodiment of the present application. The method is optionally applied in a computing device with general computing capability, such as a personal computer, a server, or a server cluster. The method includes at least some of the following steps, as shown in FIG. 1:
Step S101: a target acquisition component to be processed is determined.
Here, the target acquisition component is one of a plurality of preset acquisition components.
Further, in practical application, the preset collection component may be a device with an image collection function, such as a camera, which is not limited in the scheme of the disclosure.
Step S102: and determining at least two first acquisition assemblies required to be used for calibrating the external parameters of the target acquisition assembly from the plurality of preset acquisition assemblies.
Step S103: and obtaining luminosity error information corresponding to the first acquisition component and the target acquisition component at least based on the first external parameter of the first acquisition component in the at least two first acquisition components and the current external parameter of the target acquisition component.
Here, the photometric error information is obtained based on a difference in illumination intensity of a common viewing region of the target acquisition component and the first acquisition component.
Step S104: and obtaining the external target parameters of the target acquisition assembly based on the photometric error information corresponding to the first acquisition assembly and the target acquisition assembly.
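Read as an algorithm, steps S101 to S104 amount to: fix the target acquisition component, gather the first acquisition components, and then adjust the target's external parameters until the photometric error is minimized. The patent does not specify a solver; the following sketch uses a naive coordinate descent purely for illustration, and every name in it (`calibrate_target`, `error_fn`, the step size) is a hypothetical assumption, not taken from the patent:

```python
def calibrate_target(current_ext, first_exts, error_fn, step=0.01, iters=100):
    """Search for the target extrinsic vector minimising total photometric error.

    current_ext : initial extrinsic parameters of the target component (list of floats)
    first_exts  : extrinsics of the first acquisition components (held fixed here)
    error_fn    : callable(candidate_ext, first_exts) -> scalar photometric error
    """
    ext = list(current_ext)
    for _ in range(iters):
        for i in range(len(ext)):
            # Try nudging one parameter in each direction; keep any improvement.
            for delta in (-step, step):
                cand = list(ext)
                cand[i] += delta
                if error_fn(cand, first_exts) < error_fn(ext, first_exts):
                    ext = cand
    return ext
```

A real system would use a non-linear least-squares solver (e.g. Gauss-Newton) over the photometric residuals; this loop only illustrates the "minimise photometric error to obtain the target external parameters" idea of step S104.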
In this way, the scheme of the disclosure obtains the external parameters of the target acquisition assembly by positioning at least two first acquisition assemblies required for calibrating the external parameters of the target acquisition assembly and utilizing the photometric error information corresponding to the first acquisition assemblies and the target acquisition assembly; therefore, an external parameter calibration result with higher precision can be obtained rapidly, and the calibration efficiency is improved; moreover, the scheme of the present disclosure does not limit the use field, thereby improving convenience and improving user experience.
It should be noted that the illumination-intensity difference may specifically be the difference between the illumination intensity of a target pixel in the target image and the illumination intensity of a first pixel in the first image. Further, the target image is an image of the common-view area acquired by the target acquisition component; correspondingly, the first image is an image of the common-view area acquired by the first acquisition component.
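As a rough illustration of how such photometric error information might be computed, the sketch below projects common-view-area points into both cameras and sums the squared intensity differences. The patent gives no code; the pinhole projection model, the nearest-pixel sampling, and all function names here are assumptions made for illustration only:

```python
import numpy as np

def project(points_world, extrinsic, intrinsic):
    """Project 3D world points to pixel coordinates (simple pinhole sketch)."""
    R, t = extrinsic[:3, :3], extrinsic[:3, 3]
    cam = points_world @ R.T + t             # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]            # perspective divide
    fx, fy, cx, cy = intrinsic
    return np.stack([fx * uv[:, 0] + cx, fy * uv[:, 1] + cy], axis=1)

def photometric_error(img_target, img_first, pts_world,
                      ext_target, ext_first, intr_target, intr_first):
    """Sum of squared intensity differences over points in the common-view area."""
    uv_t = np.round(project(pts_world, ext_target, intr_target)).astype(int)
    uv_f = np.round(project(pts_world, ext_first, intr_first)).astype(int)
    i_t = img_target[uv_t[:, 1], uv_t[:, 0]].astype(float)
    i_f = img_first[uv_f[:, 1], uv_f[:, 0]].astype(float)
    return float(np.sum((i_t - i_f) ** 2))
```

When the current extrinsic of the target component is accurate, corresponding points land on pixels of matching intensity and the error is small; an inaccurate extrinsic shifts the projections and inflates the error, which is what the optimisation in step S104 exploits.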
In addition, the scheme of the present disclosure can be used in external-parameter calibration scenarios for vehicle-mounted cameras. In such scenarios it imposes no special restriction on the calibration site and requires no centering device, total station, or special calibration device, so the calibration cost can be effectively reduced.
It should be noted that if the scheme of the present disclosure is applied to an external-parameter calibration scenario for vehicle-mounted cameras, the plurality of preset acquisition components are all disposed on a target vehicle, and the images they acquire can be used to obtain a 360-degree panoramic image around the body of the target vehicle. Further, in this scenario, the method described in the present disclosure may be performed on the target vehicle itself, or on another device independent of the target vehicle, such as a cloud-side server or other electronic device; the present disclosure is not limited in this respect.
It should be noted that the common-view area referred to in the present disclosure is the overlapping portion of the acquisition areas of two acquisition components. In other words, the acquisition areas of two acquisition components that have a common-view area partially overlap, so both components can simultaneously capture a target object placed in that area.
For example, in a vehicle-mounted camera scenario, 4 cameras are arranged on the body of a target vehicle, namely a front-view camera, a left-view camera, a rear-view camera, and a right-view camera. As shown in FIG. 2 (a), the front-view camera is adjacent to the left-view camera, and the overlapping portion of their acquisition areas is their common-view area. If a target object is placed in this common-view area, then, as shown in FIGS. 2 (b) and 2 (c), the first original image acquired by the front-view camera includes an image A corresponding to the target object, and the second original image acquired by the left-view camera includes an image B corresponding to the target object.
Further, in a specific example, to help improve processing precision, a target object is placed in the common-view area of the first acquisition component and the target acquisition component; the first image is then an image of the target object acquired by the first acquisition component, and the target image is an image of the target object acquired by the target acquisition component. This makes it convenient to determine the target external parameters of the target acquisition component using the photometric error information between the first image and the target image, both of which correspond to the target object.
In a specific example of the present disclosure, at least two first acquisition components that are required to be used may be determined in the following manner:
mode one: specifically, the determining, from the plurality of preset collection assemblies, at least two first collection assemblies that are required to calibrate the external parameters of the target collection assembly includes:
and taking a preset acquisition component in two preset acquisition components with common view areas with the target acquisition component as the first acquisition component under the condition that the following conditions are met.
Specific conditions (which may be referred to as condition 1) include:
the external parameters of at least one preset acquisition assembly of the two preset acquisition assemblies with the common view area with the target acquisition assembly meet the first preset external parameter condition.
Here, in a specific example, the first preset external-parameter condition is that the difference between the external parameters of a preset acquisition component and its preset parameter values is less than or equal to a threshold; in that case the external parameters of the preset acquisition component may be considered accurate. It should be noted that the threshold is an empirical value and may be set according to actual requirements, which the scheme of the present disclosure does not limit.
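The first preset external-parameter condition could be expressed, hypothetically, as a simple deviation check. The patent leaves the deviation metric and threshold open; the max absolute elementwise difference and the default value used here are illustrative assumptions:

```python
import numpy as np

def meets_preset_condition(extrinsic, preset_extrinsic, threshold=1e-2):
    """Return True if the extrinsic deviates from its preset by at most `threshold`.

    Deviation is measured as the max absolute elementwise difference between
    the current extrinsic matrix and the preset (factory) extrinsic; components
    passing this check are treated as having accurate external parameters.
    """
    diff = np.max(np.abs(np.asarray(extrinsic) - np.asarray(preset_extrinsic)))
    return diff <= threshold
```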
For example, taking a scene of 4 vehicle cameras as an example, as shown in fig. 3 (a), the target vehicle includes 4 cameras, namely a front view camera 1, a left view camera 2, a rear view camera 3 and a right view camera 4; at this time, two adjacent cameras have a common view area, that is, the front view camera 1 and the left view camera 2 have a first common view area, the left view camera 2 and the rear view camera 3 have a second common view area, the rear view camera 3 and the right view camera 4 have a third common view area, and the right view camera 4 and the front view camera 1 have a fourth common view area.
Further, as shown in FIG. 3 (b), suppose the front-view camera 1 is the target camera whose external parameters need to be calibrated; the cameras having a common-view area with the front-view camera 1 are then the left-view camera 2 and the right-view camera 4. If the external parameters of the left-view camera 2 satisfy the first preset external-parameter condition while those of the right-view camera 4 do not, or vice versa, condition 1 is met in both cases, and the left-view camera 2 and the right-view camera 4 can jointly serve as the cameras used to calibrate the external parameters of the target camera.
Further, if the external parameters of the left-view camera 2 and the external parameters of the right-view camera 4 both meet the first preset external parameter condition, the above condition 1 is also met, and at this time, the left-view camera 2 and the right-view camera 4 may be used together as a camera required for calibrating the external parameters of the target camera.
Similarly, if the left-view camera 2 is a target camera for external parameter calibration, at this time, the cameras having a common view area with the left-view camera 2 are the front-view camera 1 and the rear-view camera 3; if the external parameters of at least one of the front-view camera 1 and the rear-view camera 3 meet the first preset external parameter condition, the condition 1 is met, and at this time, the front-view camera 1 and the rear-view camera 3 can be jointly used as cameras for calibrating the external parameters of the target camera.
Or if the rear-view camera 3 is a target camera needing external parameter calibration, at this time, the cameras with the common view area with the rear-view camera 3 are a left-view camera 2 and a right-view camera 4; if the external parameters of at least one of the left-view camera 2 and the right-view camera 4 meet the first preset external parameter condition, the condition 1 is met, and at this time, the left-view camera 2 and the right-view camera 4 can be jointly used as cameras for calibrating the external parameters of the target camera.
Or if the right-view camera 4 is a target camera needing external parameter calibration, at this time, the cameras with the common view area with the right-view camera 4 are the front-view camera 1 and the rear-view camera 3; if the external parameters of at least one of the front-view camera 1 and the rear-view camera 3 meet the first preset external parameter condition, the condition 1 is met, and at this time, the front-view camera 1 and the rear-view camera 3 can be jointly used as cameras for calibrating the external parameters of the target camera.
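For the four-camera ring described above, condition 1 and the neighbor lookup can be sketched as follows. This is a hypothetical illustration: the camera ids, the `NEIGHBORS` table, and the function name are not from the patent:

```python
# Ring layout: front(1) - left(2) - rear(3) - right(4); adjacent cameras
# share a common-view area, so each camera has exactly two candidates.
NEIGHBORS = {
    1: (2, 4),  # front-view: left-view and right-view
    2: (1, 3),  # left-view: front-view and rear-view
    3: (2, 4),  # rear-view: left-view and right-view
    4: (1, 3),  # right-view: front-view and rear-view
}

def select_mode_one(target_cam, accurate):
    """Return the neighbor pair as first components if condition 1 holds, else None.

    `accurate` is the set of camera ids whose external parameters satisfy
    the first preset external-parameter condition.
    """
    a, b = NEIGHBORS[target_cam]
    if a in accurate or b in accurate:   # condition 1: at least one accurate neighbor
        return (a, b)
    return None
```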
In this way, the scheme of the present disclosure provides a specific scheme for selecting the first acquisition components. The scheme is simple, convenient, and efficient; it can improve calibration efficiency while effectively improving the precision of the external-parameter calibration result, thereby improving the user experience. Moreover, the calibration cost is low.
Mode two: specifically, determining, from the plurality of preset acquisition components, at least two first acquisition components required for calibrating the external parameters of the target acquisition component includes:
when the external parameters of neither of the two preset acquisition components having a common-view area with the target acquisition component satisfy the first preset external-parameter condition (which may be referred to as condition 2), selecting from the plurality of preset acquisition components a first acquisition component whose external parameters satisfy the first preset external-parameter condition;
and selecting, from the remaining preset acquisition components other than the selected first acquisition component, at least one first acquisition component having a common-view area with the target acquisition component.
That is, when condition 1 above is not satisfied, a first acquisition component satisfying the first preset external-parameter condition must first be selected from the plurality of preset acquisition components; in other words, a first acquisition component with accurate external parameters is selected first. A first acquisition component having a common-view area with the target acquisition component is then selected from the remaining preset acquisition components, yielding at least two first acquisition components, which are then used to calibrate the external parameters of the target acquisition component.
For example, again taking the case where 4 cameras are provided on the body of the target vehicle and the front-view camera 1 is the target camera whose external parameters need to be calibrated: the cameras having a common-view area with the front-view camera 1 are the left-view camera 2 and the right-view camera 4. If the external parameters of neither the left-view camera 2 nor the right-view camera 4 satisfy the first preset external-parameter condition, condition 1 is not met but condition 2 is; the rear-view camera 3, whose external parameters satisfy the first preset external-parameter condition, is then selected from the preset cameras. Next, from the remaining cameras other than the rear-view camera 3, either the left-view camera 2 or the right-view camera 4 (each having a common-view area with the front-view camera 1) is selected, or, as shown in FIG. 3 (c), all cameras having a common-view area with the front-view camera 1 (namely, the left-view camera 2 and the right-view camera 4) are selected; the selected cameras then serve as the cameras used to calibrate the external parameters of the front-view camera 1. In this example, the cameras used to calibrate the external parameters of the front-view camera 1 include the following three combinations:
First combination: a left-view camera 2 and a rear-view camera 3; that is, the left-view camera 2 and the rear-view camera 3 are adopted to jointly calibrate the external parameters of the front-view camera 1.
Second combination: the right-view camera 4 and the rear-view camera 3; that is, the external parameters of the front-view camera 1 are calibrated jointly by the right-view camera 4 and the rear-view camera 3.
Third combination: a left-view camera 2, a rear-view camera 3 and a right-view camera 4; that is, the left-view camera 2, the rear-view camera 3, and the right-view camera 4 are used to jointly calibrate the external parameters of the front-view camera 1.
It should be noted that, in this example, the acquisition components used to calibrate the external parameters of the target acquisition component are not limited to components with accurate external parameters; components with inaccurate external parameters (i.e., preset acquisition components that do not satisfy the first preset external-parameter condition) may also be used. That is, in the above example, the cameras used to calibrate the external parameters of the front-view camera 1 are not limited to cameras with accurate external parameters; cameras with inaccurate external parameters (i.e., cameras that do not satisfy the first preset external-parameter condition) may also be used. With the scheme of the present disclosure, all cameras with inaccurate external parameters can then be calibrated simultaneously in a single calibration pass.
For example, for the first combination, using the scheme of the present disclosure yields not only the target external parameters of the front-view camera 1 but also external parameters of the left-view camera 2 that satisfy the first preset external-parameter condition. Similarly, for the second combination, the scheme yields the target external parameters of the front-view camera 1 and external parameters of the right-view camera 4 that satisfy the first preset external-parameter condition. For the third combination, the scheme yields the target external parameters of the front-view camera 1 together with external parameters of both the left-view camera 2 and the right-view camera 4 that satisfy the first preset external-parameter condition.
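The mode-two selection just described — anchor on an accurately calibrated camera, then bridge to the target through its common-view neighbors — can be sketched hypothetically as follows. The camera ids, function name, and combination enumeration are illustrative assumptions, not from the patent:

```python
def select_mode_two(target_cam, accurate, neighbors):
    """Enumerate first-component combinations when condition 2 holds.

    target_cam : camera to calibrate (neither of its neighbors is accurate)
    accurate   : set of camera ids satisfying the preset extrinsic condition
    neighbors  : dict mapping each camera to its two common-view neighbors
    """
    # First pick an anchor camera with accurate external parameters.
    anchor = next((c for c in accurate if c != target_cam), None)
    if anchor is None:
        return []                            # no accurate camera to anchor on
    a, b = neighbors[target_cam]
    combos = []
    # Bridge via the left neighbor, the right neighbor, or both.
    for bridge in ((a,), (b,), (a, b)):
        if anchor not in bridge:
            combos.append(tuple(sorted((anchor,) + bridge)))
    return combos
```

For the four-camera example with target camera 1 and only camera 3 accurate, this produces the three combinations listed above: (2, 3), (3, 4), and (2, 3, 4).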
In this way, the scheme of the present disclosure provides another feasible way of selecting the first acquisition components. This approach applies to a wider range of scenarios and is simple and efficient: on the basis of effectively improving the accuracy of the external parameter calibration result, it enriches the calibration scenarios and improves calibration efficiency, thereby further improving the user experience. Moreover, this approach keeps the calibration cost low.
Further, in a specific example, to further improve the accuracy of the calibration result, selecting, from the remaining preset acquisition components other than the already selected first acquisition component, at least one first acquisition component having a common view area with the target acquisition component includes:
Selecting at least one first acquisition component from the remaining preset acquisition components other than the already selected first acquisition component, where the following conditions are met:
at least one selected component has a common view area with the already selected first acquisition component; and at least one selected component has a common view area with the target acquisition component.
That is, to further improve the accuracy of the calibration result, if condition 1 of the first mode is not satisfied but condition 2 of the second mode is satisfied, a first acquisition component whose external parameters satisfy the first preset external parameter condition is first selected from the plurality of preset acquisition components; in other words, a first acquisition component with accurate external parameters is selected first. Next, acquisition components having a common view area with the selected first acquisition component, and acquisition components having a common view area with the target acquisition component, are selected. In other words, among the selected at least two first acquisition components: the external parameters of at least one first acquisition component satisfy the first preset external parameter condition; at least one first acquisition component has a common view area with the target acquisition component; and, for any selected first acquisition component, at least one of the acquisition components sharing a common view area with it is also a first acquisition component. The selected components thus form a chain of common view areas linking the target acquisition component to a component with accurate external parameters.
For example, taking a scene with 6 vehicle cameras as an example, as shown in fig. 4 (a), the target vehicle includes 6 cameras: camera 1 and camera 6 serving as front-view cameras, camera 2 serving as a left-view camera, camera 3 and camera 4 serving as rear-view cameras, and camera 5 serving as a right-view camera. Each pair of adjacent cameras has a common view area: camera 1 and camera 2 have a first common view area, camera 2 and camera 3 have a second common view area, camera 3 and camera 4 have a third common view area, camera 4 and camera 5 have a fourth common view area, camera 5 and camera 6 have a fifth common view area, and camera 6 and camera 1 have a sixth common view area.
Further, as shown in fig. 4 (b), in scene 1, camera 1 is the target camera that currently requires external parameter calibration, and camera 2 and camera 6, which have common view areas with camera 1, do not meet condition 1 described above but meet condition 2 described above. In this case, camera 3, whose external parameters meet the first preset external parameter condition, is first selected from all the cameras; next, from the remaining cameras other than the selected camera 3, camera 2 is selected because it has a common view area with camera 1 as well as with camera 3. Camera 2 and camera 3 can then be used together as the cameras required for calibrating the external parameters of camera 1. It should be noted that the cameras used for calibrating the external parameters of camera 1 can be screened according to the principle of using as few cameras as possible, which further improves calibration efficiency.
Alternatively, still taking scene 1 as an example, camera 3, which meets the first preset external parameter condition, is first selected from all the cameras; next, from the remaining cameras other than the selected camera 3, camera 2 and camera 4 are selected because they have common view areas with camera 3, and camera 2 and camera 6 are selected because they have common view areas with camera 1. Camera 2, camera 3, camera 4 and camera 6 can then be used together as the cameras required for calibrating the external parameters of camera 1. It should be noted that the cameras used for calibrating the external parameters of camera 1 can also be screened according to the principle of using as many cameras as possible, which further improves the accuracy of the calibration result.
For another example, as shown in fig. 4 (c), camera 1 is the target camera that requires external parameter calibration, and camera 2 and camera 6, which have common view areas with camera 1, do not meet condition 1 but meet condition 2. In this case, camera 4, whose external parameters meet the first preset external parameter condition, is first selected from all the cameras; next, from the remaining cameras other than the selected camera 4, camera 2 is selected because it has a common view area with camera 1, and camera 3 is selected because it has a common view area with camera 4. Camera 2, camera 3 and camera 4 can then be used together as the cameras required for calibrating the external parameters of camera 1.
Alternatively, taking scene 2 as an example, camera 4, which meets the first preset external parameter condition, is first selected from all the cameras; next, from the remaining cameras other than the selected camera 4, camera 2 and camera 6 are selected because they have common view areas with camera 1, and camera 3 and camera 5 are selected because they have common view areas with camera 4. Camera 2, camera 3, camera 4, camera 5 and camera 6 can then be used together as the cameras required for calibrating the external parameters of camera 1.
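As an illustration of the selection logic in scenes 1 and 2, the choice of calibration cameras can be viewed as a search over a graph whose edges are common view areas: starting from a camera whose external parameters are accurate, neighboring cameras are added until the target camera is reached. The following Python sketch is illustrative only; the function name, data layout and breadth-first strategy (which follows the "fewest cameras" principle) are assumptions, not the patent's prescribed procedure.

```python
from collections import deque

def select_calibration_cameras(common_view, accurate, target):
    """Select cameras linking an accurately calibrated camera to the target
    camera through shared (common-view) areas.

    common_view: dict mapping camera id -> set of camera ids it shares a
                 view area with; accurate: set of ids whose external
                 parameters meet the first preset condition.
    """
    for seed in accurate:                     # any accurate camera may serve
        parent = {seed: None}
        queue = deque([seed])
        while queue:                          # breadth-first search keeps the
            cam = queue.popleft()             # selected chain as short as possible
            if cam == target:                 # chain found: walk back to the seed
                chain = []
                while cam is not None:
                    chain.append(cam)
                    cam = parent[cam]
                return chain                  # target first, accurate seed last
            for nxt in common_view[cam]:
                if nxt not in parent:
                    parent[nxt] = cam
                    queue.append(nxt)
    return None                               # no accurate camera reaches the target


# Ring of 6 cameras as in fig. 4: camera i shares a view with its two neighbours.
ring = {i: {(i - 2) % 6 + 1, i % 6 + 1} for i in range(1, 7)}
print(select_calibration_cameras(ring, accurate={3}, target=1))  # → [1, 2, 3]
```

With camera 3 as the accurate seed and camera 1 as the target, the returned chain [1, 2, 3] matches scene 1 above: camera 2 and camera 3 together calibrate camera 1.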
Further, it should be noted that, when a plurality of acquisition components satisfy the first preset external parameter condition, any one of them may be selected as an acquisition component used for calibration; the present disclosure places no limitation on this choice, so long as calibration can be achieved and the accuracy requirements of the calibration result are satisfied.
In this way, the scheme of the present disclosure provides another feasible way of selecting the first acquisition components. This approach applies to a wider range of scenarios and is simple and efficient: on the basis of effectively improving the accuracy of the external parameter calibration result, it enriches the calibration scenarios and improves calibration efficiency, thereby further improving the user experience. Moreover, this approach keeps the calibration cost low.
Mode three: specifically, the above-described determining, from the plurality of preset acquisition components, the at least two first acquisition components required for calibrating the external parameters of the target acquisition component includes:
in the case that at least one preset acquisition component whose external parameters meet the first preset external parameter condition exists among the plurality of preset acquisition components (also called condition 3), taking every preset acquisition component other than the target acquisition component as a first acquisition component required for calibrating the target acquisition component.
That is, when at least one preset acquisition component whose external parameters satisfy the first preset external parameter condition exists among the plurality of preset acquisition components (that is, the external parameters of at least one preset acquisition component are accurate), all preset acquisition components other than the target acquisition component to be calibrated may be used as first acquisition components, so as to cooperatively complete the calibration of the external parameters of the target acquisition component.
For example, as shown in fig. 3 (a), the front-view camera 1 is the target camera that requires external parameter calibration. If the external parameters of the rear-view camera 3 satisfy the first preset external parameter condition, condition 3 is met, and the left-view camera 2, the right-view camera 4 and the rear-view camera 3 may be used together as the cameras required for calibrating the external parameters of the target camera. It can be understood that, in this scenario, the external parameters of the left-view camera 2 and the right-view camera 4 may not meet the first preset external parameter condition; by adopting the scheme of the present disclosure, the calibration tasks for the external parameters of the left-view camera 2, the right-view camera 4 and the front-view camera 1 can then all be completed in a single calibration procedure.
Similarly, the front-view camera 1 is the target camera that requires external parameter calibration. If the external parameters of the left-view camera 2 satisfy the first preset external parameter condition, condition 3 is met, and the left-view camera 2, the right-view camera 4 and the rear-view camera 3 may be used together as the cameras required for calibrating the external parameters of the target camera. It can be understood that, in this scenario, the external parameters of the rear-view camera 3 and the right-view camera 4 may not meet the first preset external parameter condition; by adopting the scheme of the present disclosure, the calibration tasks for the external parameters of the rear-view camera 3, the right-view camera 4 and the front-view camera 1 can then all be completed in a single calibration procedure.
Alternatively, the front-view camera 1 is the target camera that requires external parameter calibration. If the external parameters of the right-view camera 4 satisfy the first preset external parameter condition, condition 3 is met, and the left-view camera 2, the right-view camera 4 and the rear-view camera 3 may be used together as the cameras required for calibrating the external parameters of the target camera. It can be understood that, in this scenario, the external parameters of the left-view camera 2 and the rear-view camera 3 may not meet the first preset external parameter condition; by adopting the scheme of the present disclosure, the calibration tasks for the external parameters of the left-view camera 2, the rear-view camera 3 and the front-view camera 1 can then all be completed in a single calibration procedure.
It can be understood that, in practical applications, there may be a plurality of target acquisition components to be processed. In that case, based on the scheme of the present disclosure, as long as a preset acquisition component whose external parameters meet the first preset external parameter condition exists, the calibration of all target acquisition components to be processed can be completed in a single calibration procedure; the procedure is simple and efficient and further improves calibration efficiency.
In this way, the scheme of the present disclosure provides another feasible way of selecting the first acquisition components. This approach applies to a wider range of scenarios and is simple and efficient: on the basis of effectively improving the accuracy of the external parameter calibration result, it enriches the calibration scenarios and improves calibration efficiency, thereby further improving the user experience. Moreover, this approach keeps the calibration cost low.
In a specific example of the scheme of the present disclosure, fig. 5 is a second schematic flow chart of a parameter processing method according to an embodiment of the present application. The method is optionally applied to a computing device with computing capability, such as a personal computer or a server cluster. It will be appreciated that the relevant content of the method shown in fig. 1 above also applies to this example and is not repeated here.
Further, the method includes at least some of the following steps. As shown in fig. 5, the method includes:
step S501: a target acquisition component to be processed is determined.
Here, the target acquisition component is one of a plurality of preset acquisition components.
Step S502: and taking a preset acquisition component in two preset acquisition components with common view areas with the target acquisition component as the first acquisition component under the condition that the following conditions are met.
The specific conditions include: the external parameters of at least one preset acquisition assembly of the two preset acquisition assemblies with the common view area with the target acquisition assembly meet the first preset external parameter condition.
It will be appreciated that the first acquisition component may also be obtained in the manner described above in this example, and will not be described in detail herein.
Step S503: obtaining photometric error information corresponding to the first acquisition component and the target acquisition component based at least on the first external parameter of each first acquisition component among the at least two first acquisition components and the current external parameter of the target acquisition component.
Here, the photometric error information is derived from the illumination intensity difference over the common view area of the target acquisition component and the first acquisition component; the illumination intensity difference is the difference between the illumination intensity of a target pixel point in the target image and the illumination intensity of a first pixel point in the first image.
Further, the target image is an image which is acquired by the target acquisition component and corresponds to the common view area; the first image is an image which is acquired by the first acquisition component and corresponds to the common view area.
Step S504: with the first external parameters of the first acquisition components meeting the first preset external parameter condition held fixed, minimizing the photometric error information corresponding to the first acquisition components and the target acquisition component by at least adjusting the current external parameters of the target acquisition component, so as to obtain the target external parameters of the target acquisition component.
That is, in this example, the accurate external parameters are held fixed and the external parameters to be calibrated are adjusted so as to minimize the photometric error information, thereby obtaining the target external parameters of the target acquisition component. Alternatively, the accurate external parameters may be held fixed while both the external parameters to be calibrated and the inaccurate external parameters of other components are adjusted so as to minimize the photometric error information, thereby obtaining the target external parameters of the target acquisition component. It can be appreciated that the parameters to adjust can be chosen based on the actual accuracy requirements, scenario requirements, and the like; the scheme of the present disclosure does not limit the specific parameter adjustment manner.
Further, in an example, minimizing the photometric error information corresponding to the first acquisition components and the target acquisition component to obtain the target external parameters of the target acquisition component may specifically be: minimizing each of the plurality of pieces of photometric error information to obtain the target external parameters of the target acquisition component. It will be appreciated that, in this example, one piece of photometric error information is obtained for each first acquisition component; each piece of photometric error information may then be minimized to obtain the target external parameters of the target acquisition component.
For example, as shown in fig. 3 (b), in the case where the front-view camera 1 is the camera that requires external parameter calibration and condition 1 described above is satisfied, the left-view camera 2 and the right-view camera 4 are used as the cameras required for calibrating the external parameters of the front-view camera 1. Photometric error information corresponding to the front-view camera 1 and the left-view camera 2 is obtained, and photometric error information corresponding to the front-view camera 1 and the right-view camera 4 is obtained, giving 2 pieces of photometric error information in total; each piece of photometric error information is then minimized to obtain the target external parameters of the front-view camera 1.
Further, in another example, minimizing the photometric error information corresponding to the first acquisition components and the target acquisition component to obtain the target external parameters of the target acquisition component may also specifically be: adding the plurality of pieces of photometric error information to obtain total photometric error information, and then minimizing the total photometric error information to obtain the target external parameters of the target acquisition component.
Continuing with fig. 3 (b) as an example, after the photometric error information corresponding to the front-view camera 1 and the left-view camera 2 and the photometric error information corresponding to the front-view camera 1 and the right-view camera 4 are obtained, the two pieces of photometric error information are added to obtain total photometric error information, and the total photometric error information is minimized to obtain the target external parameters of the front-view camera 1.
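The summing-and-minimizing strategy above can be sketched numerically. In the toy Python example below, the "world" is a 1-D intensity signal, each camera images a 100-pixel window of it, and the window start position stands in for a camera's external parameter; every first camera contributes one photometric error term, the terms are summed, and the total is minimized by exhaustive search over candidate positions. All names, the 1-D parameterization and the search strategy are illustrative assumptions, not the patent's prescribed optimizer.

```python
import numpy as np

rng = np.random.default_rng(42)
scene = rng.random(200)                      # shared 1-D "world" intensities

# Two first cameras with accurate, fixed external parameters (window starts).
offsets = {"first_a": 20, "first_b": 60}
img = {k: scene[v:v + 100] for k, v in offsets.items()}
target_img = scene[40:140]                   # target camera image; true offset is 40

def pair_error(candidate, first):
    """Photometric error e^T e over the common view area of the target camera
    (placed at the candidate offset) and one first camera (fixed offset)."""
    lo = max(candidate, offsets[first])
    hi = min(candidate + 100, offsets[first] + 100)
    e = (target_img[lo - candidate:hi - candidate]
         - img[first][lo - offsets[first]:hi - offsets[first]])
    return float(e @ e)                      # sum of squared per-pixel errors

def total_error(candidate):
    # As in the scheme, the per-pair photometric errors are added together.
    return sum(pair_error(candidate, f) for f in img)

# Minimize the total photometric error over candidate external parameters.
best = min(range(25, 56), key=total_error)
print(best)                                   # → 40, the true offset
```

At the true offset the target camera's pixels coincide with both first cameras' pixels over the common view areas, so the total photometric error drops to zero, which is why the minimization recovers it.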
In this way, the scheme of the present disclosure provides a concrete scheme for obtaining the external parameters of the target acquisition component through parameter optimization, which effectively reduces the computation time and yields a faster calculation speed. In addition, the scheme places no restriction on the site of use, which improves convenience as well as the user experience.
In a specific example of the disclosed scheme, the photometric error information can be obtained in the following manner. Specifically, the obtaining of the photometric error information corresponding to the first acquisition component and the target acquisition component based at least on the first external parameter of the first acquisition component and the current external parameter of the target acquisition component (i.e., step S103 or step S503 above) specifically includes:
Step S601: and obtaining the illumination intensity of the first pixel point in the first pixel set under the looking-around coordinate system based on at least the first external parameter of the first acquisition component.
The first pixel set is obtained by sampling pixel points in the first image and comprises a plurality of first pixel points; the first image is an image which is acquired by the first acquisition component and corresponds to the common view area; the look-around coordinate system is different from an original coordinate system of the first image.
Specifically, in an example, the first pixel set is obtained by sampling a pixel point corresponding to the target object in the first image.
Further, in an example, a semi-dense pixel sampling method, a sparse pixel sampling method, a dense pixel sampling method, or the like may be used to sample the pixel points corresponding to the target object in the first image, so as to obtain the first pixel set.
Step S602: and obtaining the illumination intensity of the target pixel point in the target pixel set under the looking-around coordinate system at least based on the current external parameters of the target acquisition assembly.
The target pixel set is obtained by sampling pixel points in a target image and comprises a plurality of target pixel points; the target image is an image which is acquired by the target acquisition component and corresponds to the common view area; the look-around coordinate system is different from an original coordinate system of the target image.
It can be understood that the original coordinate system of the first image is the original coordinate system of the first acquisition component; similarly, the original coordinate system of the target image is the original coordinate system of the target acquisition component. The looking-around coordinate system may specifically be a 360-degree image coordinate system, so that a panoramic image can conveniently be synthesized from the images acquired by the plurality of preset acquisition components.
Specifically, in an example, the target pixel set is obtained by sampling the pixel points corresponding to the target object in the target image. Further, in an example, a semi-dense pixel sampling method, a sparse pixel sampling method, a dense pixel sampling method, or the like may be used to sample the pixel points corresponding to the target object in the target image, so as to obtain the target pixel set.
For example, a semi-dense pixel sampling method may be used to sample the pixel points corresponding to the target object in the first image (or the target image). The sampling steps may specifically include: for the pixel points corresponding to the target object in the first image (or the target image), using an image pixel-gradient detection method to sample the pixel points with obvious gradient change; and, in regions where no pixel point with obvious gradient change is detected, sampling at fixed intervals instead. In this way, the first pixel set (or the target pixel set) is obtained.
Here, fig. 6 (a) is a sampling effect diagram of the semi-dense pixel sampling method. Compared with the sparse pixel sampling method shown in fig. 6 (b) and the dense pixel sampling method shown in fig. 6 (c), the semi-dense pixel sampling method effectively avoids the uneven pixel distribution of sparse sampling while also avoiding the reduced calculation speed caused by the excessive pixel observations of dense sampling, thereby ensuring calibration accuracy while balancing calibration speed.
In addition, it should be noted that the semi-dense pixel sampling method is superior to the sparse pixel sampling method in accuracy, is comparable to the dense pixel sampling method, and has a calculation speed between those of the dense and sparse methods, so that calibration speed can be balanced while calibration accuracy is ensured. Moreover, compared with the dense pixel sampling method, the semi-dense pixel sampling method can reduce the amount of computation by up to 80%, improving the calculation speed of the photometric error by 80%.
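The semi-dense sampling steps described above can be sketched in a few lines. The sketch below works on a single image row (1-D) for brevity; the function name, the gradient threshold and the fallback stride are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def semi_dense_sample(row, grad_thresh=0.1, stride=4):
    """Semi-dense sampling of one image row: keep pixels with an obvious
    gradient change; where none are found, fall back to sampling at fixed
    intervals, as in the described fallback step."""
    grad = np.abs(np.gradient(row.astype(float)))
    idx = np.flatnonzero(grad > grad_thresh)   # pixels with obvious gradient change
    if idx.size == 0:                          # flat region: interval sampling
        idx = np.arange(0, row.size, stride)
    return idx

flat = np.full(32, 0.5)                        # textureless region
edges = np.array([0.0] * 16 + [1.0] * 16)      # one strong intensity edge
print(semi_dense_sample(flat).size, semi_dense_sample(edges).size)  # → 8 2
```

On the flat row no gradient survives the threshold, so the interval fallback produces 8 evenly spaced samples; on the edge row only the two pixels straddling the step are kept, illustrating how the method concentrates observations where intensity actually changes.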
Further, in the case that the first pixel set is obtained by sampling the pixel points corresponding to the target object in the first image and the target pixel set is obtained by sampling the pixel points corresponding to the target object in the target image, the accuracy of the calibration result can be effectively improved.
Step S603: obtaining the photometric error information corresponding to the first acquisition component and the target acquisition component based on the illumination intensity of the first pixel points in the looking-around coordinate system and the illumination intensity of the target pixel points in the looking-around coordinate system.
Specifically, the photometric error information corresponding to the first acquisition component and the target acquisition component may be obtained in the following manner. The coordinate information of the target pixel point i of the target image in the looking-around coordinate system is recorded as p_{1,i}, and the illumination intensity of the target pixel point i is recorded as I_1(p_{1,i}); similarly, the coordinate information of the first pixel point i of the first image in the looking-around coordinate system is recorded as p_{2,i}, and the illumination intensity of the first pixel point i is recorded as I_2(p_{2,i}). The difference between the illumination intensity I_1(p_{1,i}) of the target pixel point i and the illumination intensity I_2(p_{2,i}) of the first pixel point i in the looking-around coordinate system can then be recorded as e_i, with the specific expression:
e_i = I_1(p_{1,i}) - I_2(p_{2,i});
further, the photometric error information E corresponding to the first acquisition component and the target acquisition component may specifically be:
E = Σ_{i=1}^{N} e_i^T e_i;
here, i takes positive integer values from 1 to N, where N represents the number of sampled target pixel points (or first pixel points), that is, the number of pixel points contained in the target pixel set (or the first pixel set), and T represents the transpose.
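The per-pixel error construction described here can be computed directly in vector form from the two sampled intensity arrays. The array contents below are made up purely for illustration.

```python
import numpy as np

# Illumination intensities of the N sampled pixels in the looking-around frame.
I1 = np.array([0.8, 0.5, 0.9, 0.4])    # target image intensities, I_1(p_1,i)
I2 = np.array([0.7, 0.5, 0.6, 0.4])    # first image intensities,  I_2(p_2,i)

e = I1 - I2                             # per-pixel errors e_i
E = float(e @ e)                        # photometric error: sum over i of e_i^T e_i
print(round(E, 2))                      # → 0.1
```

For scalar intensities the transpose is trivial and the sum reduces to a squared Euclidean norm, which the dot product `e @ e` computes in one step.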
Therefore, the scheme is simple and efficient, effectively reduces the computation time, achieves a faster calculation speed, and lays a foundation for subsequently obtaining the external parameter result of the target acquisition component efficiently.
In a specific example of the present disclosure, the illumination intensity of the first pixel point in the first pixel set under the looking-around coordinate system may be obtained in the following manner; specifically, the obtaining, based at least on the first external parameter of the first acquisition component, the illumination intensity of the first pixel point in the first pixel set in the looking-around coordinate system specifically includes:
acquiring coordinate information of a first pixel point in the first pixel set under a looking-around coordinate system based on a first external parameter, a first internal parameter and a first coordinate conversion parameter of a first acquisition component; the first coordinate conversion parameter represents an association relationship between an original coordinate system of the first image and the looking-around coordinate system; and obtaining the illumination intensity of the first pixel point based on the coordinate information of the first pixel point under the looking-around coordinate system.
For example, the coordinate information p_{2,i} of the first pixel point i in the looking-around coordinate system can be obtained by the following formula:
p_{2,i} = G_2 · K_2 · T_2 · P_{2,i};
here, P_{2,i} represents the coordinate information of the first pixel point i in the original coordinate system corresponding to the first acquisition component; T_2 represents the first external parameter of the first acquisition component; K_2 represents the first internal parameter of the first acquisition component; and G_2 represents the transformation matrix from the original coordinate system of the first acquisition component to the looking-around coordinate system.
Further, based on the obtained coordinate information p_{2,i} of the first pixel point in the looking-around coordinate system, the illumination intensity I_2(p_{2,i}) of the first pixel point can be obtained.
In this way, a concrete scheme for obtaining the illumination intensity corresponding to the first pixel points of the first acquisition component is provided, supporting the subsequent efficient acquisition of the external parameter result of the target acquisition component.
In a specific example of the scheme of the disclosure, the illumination intensity of the target pixel point in the target pixel set under the looking-around coordinate system may be obtained in the following manner; specifically, the obtaining the illumination intensity of the target pixel point in the target pixel set under the looking-around coordinate system based at least on the current external parameter of the target acquisition component specifically includes:
obtaining coordinate information of a target pixel point in the target pixel set under a looking-around coordinate system based on the current external parameter, the target internal parameter and the second coordinate conversion parameter of the target acquisition component; the second coordinate conversion parameter represents the association relation between the original coordinate system of the target image and the looking-around coordinate system; and obtaining the illumination intensity of the target pixel point based on the coordinate information of the target pixel point under the looking-around coordinate system.
For example, the coordinate information p_{1,i} of the target pixel point i in the looking-around coordinate system can be obtained by the following formula:
p_{1,i} = G_1 · K_1 · T_1 · P_{1,i};
here, P_{1,i} represents the coordinate information of the target pixel point i in the original coordinate system of the target acquisition component; T_1 represents the current external parameter of the target acquisition component; K_1 represents the internal parameter of the target acquisition component; and G_1 represents the transformation matrix from the original coordinate system of the target acquisition component to the looking-around coordinate system.
Further, based on the obtained coordinate information p_{1,i} of the target pixel point in the looking-around coordinate system, the illumination intensity I_1(p_{1,i}) of the target pixel point is obtained.
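The chain of transforms just described, original pixel coordinates mapped through the external parameter, the internal parameter and the coordinate conversion parameter into the looking-around coordinate system, can be sketched with homogeneous coordinates. The sketch below uses a simplified 2-D "camera", and every matrix value as well as the names `T_ext`, `K` and `G` are illustrative assumptions.

```python
import numpy as np

def to_look_around(P, T_ext, K, G):
    """Map a point P from a camera's original coordinate system into the
    looking-around coordinate system: p = G · K · T · P in homogeneous form."""
    P_h = np.append(P, 1.0)                  # homogeneous form of the 2-D point
    p_h = G @ K @ T_ext @ P_h                # apply the chain of transforms
    return p_h[:2] / p_h[2]                  # back to inhomogeneous coordinates

# Toy 2-D example: the external parameter T translates by (1, 2), the
# internal parameter K scales by 10, and G shifts into the looking-around frame.
T_ext = np.array([[1, 0, 1], [0, 1, 2], [0, 0, 1]], float)
K = np.diag([10.0, 10.0, 1.0])
G = np.array([[1, 0, 5], [0, 1, 5], [0, 0, 1]], float)
print(to_look_around(np.array([2.0, 3.0]), T_ext, K, G))   # → [35. 55.]
```

Because the composition is a single matrix product, adjusting the external parameter during optimization only changes `T_ext` while `K` and `G` stay fixed, which mirrors how the current external parameter is the quantity being refined.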
In this way, a concrete scheme for obtaining the illumination intensity corresponding to the target pixel points of the target acquisition component is provided, supporting the subsequent efficient acquisition of the external parameter result of the target acquisition component.
In a specific example of the solution of the present disclosure, in order to further improve the accuracy of the calibration result, after obtaining the external target parameter of the target acquisition component, the external target parameter of the target acquisition component may be further checked. Specifically, after obtaining the off-target parameters of the target acquisition component, the method further comprises:
and verifying the target external parameters of the target acquisition assembly, and outputting the target external parameters of the target acquisition assembly under the condition that the target external parameters of the target acquisition assembly meet the second preset external parameter conditions.
Therefore, in this scheme, after the target external parameters of the target acquisition component are obtained, they are further verified, which further ensures the accuracy of the obtained result and further improves the user experience.
Further, in a specific example, the above verification of the target external parameters of the target acquisition component may specifically include:
Check mode one: comparing the target external parameters of the target acquisition component with preset parameter values of the target acquisition component; at this time, the second preset external parameter condition indicates that the difference between the target external parameters of the target acquisition component and the preset parameter values of the target acquisition component is smaller than a first preset threshold.
Here, the first preset threshold is an empirical value, which may be set according to actual requirements; this is not limited in the present disclosure.
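A minimal sketch of check mode one, assuming the external parameters are represented as 4×4 matrices and the difference metric is the largest element-wise deviation (the disclosure fixes neither choice):

```python
import numpy as np

def check_extrinsic_against_preset(target_extrinsic, preset_extrinsic, threshold):
    """Check mode one: the calibration passes when the difference between
    the optimized target extrinsic and the preset parameter value stays
    below the first preset threshold.  The metric here (max absolute
    element-wise deviation) is an illustrative assumption."""
    diff = np.max(np.abs(np.asarray(target_extrinsic)
                         - np.asarray(preset_extrinsic)))
    return bool(diff < threshold)
```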
Check mode two: obtaining the re-projection error between a target pixel point of the target image and a first pixel point of the first image under the condition that the external parameter of the target acquisition component is the target external parameter; and comparing this re-projection error with a second preset threshold; wherein the second preset external parameter condition indicates that the re-projection error is smaller than the second preset threshold.
Here, the second preset threshold is an empirical value, which may be set according to actual requirements; this is not limited in the present disclosure.
For example, taking the scenes shown in fig. 2 (b) and fig. 2 (c) as an example, the re-projection error may be obtained as follows. As shown in fig. 7 (a), 4 corner points of the target body (corner point 1, corner point 2, corner point 3, and corner point 4) are determined in the image A acquired by the front-view camera; as shown in fig. 7 (b), the 4 corner points of the target body are determined in the image B acquired by the left-view camera. The 4 corner points of the target body in the image A are then projected onto the image B, and the re-projection error between each projected corner point and the corresponding corner point of the target body in the image B is obtained, namely: the error between the projected corner point 1 and corner point 1 of the target body in the image B, between the projected corner point 2 and corner point 2, between the projected corner point 3 and corner point 3, and between the projected corner point 4 and corner point 4, so that 4 re-projection errors are obtained. At this time, the 4 re-projection errors may each be compared with the second preset threshold, and if all of them are smaller than the second preset threshold, the external parameter calibration is considered accurate; alternatively, the sum of the 4 re-projection errors is taken as the total re-projection error and compared with the second preset threshold, and if the total re-projection error is smaller than the second preset threshold, the external parameter calibration is considered accurate. Otherwise, the calibration is considered to have failed.
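The corner-point check just described can be sketched as follows; the per-corner Euclidean distance and the two acceptance rules (every error below the threshold, or their sum below it) mirror the description above:

```python
import numpy as np

def corner_reprojection_errors(corners_a_projected, corners_b):
    """Per-corner re-projection error: Euclidean distance between each
    corner of the target body projected from image A and the matching
    corner observed in image B."""
    a = np.asarray(corners_a_projected, dtype=float)
    b = np.asarray(corners_b, dtype=float)
    return np.linalg.norm(a - b, axis=1)

def calibration_passes(corners_a_projected, corners_b, threshold, use_sum=False):
    """Accept the target extrinsic when every per-corner error (or,
    alternatively, the total re-projection error, i.e. their sum) is
    below the second preset threshold."""
    errs = corner_reprojection_errors(corners_a_projected, corners_b)
    if use_sum:
        return bool(errs.sum() < threshold)
    return bool((errs < threshold).all())
```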
Here, in a specific example, the target body may be a marker that is clearly distinguishable from the ground on which the target vehicle is located, for example, a calibration board having a color difference from the ground.
In an actual scenario, the target external parameters of the target acquisition component obtained by the optimization method of the present disclosure may not be unique; in this case, any one of them may be selected as the final target external parameter, or the above verification method may be adopted to select an external parameter meeting the requirements, which is not limited in the present disclosure.
Therefore, by verifying the target external parameters of the target acquisition component, the accuracy of the external parameter calibration result can be effectively improved, further improving user experience.
The following provides further details of the disclosed solution with reference to a specific example. The solution of the present disclosure provides a parameter processing method to calibrate the external parameters of a camera. The solution effectively reduces the requirements of camera external parameter calibration on a calibration field, so that the external parameters can be calibrated more conveniently in an after-sales scene without returning the vehicle to the production line, and the panoramic image stitching based on the calibration result is accurate and free of misalignment.
Specifically, as shown in fig. 8, the core steps of the scheme of the present disclosure include:
step S801: and determining that the target vehicle is placed on the horizontal ground, and placing a calibration plate with obvious chromatic aberration with the ground on a common view area between adjacent cameras of the target vehicle, or driving the target vehicle to the ground with obvious texture.
Specifically, the target vehicle includes 4 cameras, namely a front-view camera, a left-view camera, a back-view camera and a right-view camera. In this case, every two adjacent cameras have a common view area: the front-view camera and the left-view camera have a first common view area, the left-view camera and the back-view camera have a second common view area, the back-view camera and the right-view camera have a third common view area, and the right-view camera and the front-view camera have a fourth common view area.
Further, an object such as a calibration plate with an obvious color difference from the ground is placed in each common view area, or the ground where the common view area is located has obvious texture.
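The ring of common view areas described above can be captured by a small helper; the camera names are hypothetical identifiers, since the disclosure only fixes the front/left/back/right layout and one shared area per adjacent pair:

```python
# Hypothetical identifiers for the four surround-view cameras.
CAMERAS = ["front", "left", "back", "right"]

def common_view_pairs(cameras):
    """Each camera shares exactly one common view area with the next
    camera in the ring, so four cameras yield four common view areas."""
    n = len(cameras)
    return [(cameras[i], cameras[(i + 1) % n]) for i in range(n)]
```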
Step S802: acquiring images acquired by all cameras of the target vehicle, and obtaining a panoramic image stitched from the images.
For example, fig. 9 (a) is a panoramic image, that is, an image stitched from the images acquired by the 4 cameras of the target vehicle; at this time, the target camera whose external parameters need to be calibrated can be identified based on the panoramic image. As shown in fig. 9 (a), the images acquired by the left-view camera and the front-view camera in the first common view area show obvious ghosting in the panoramic image, and at this time the front-view camera is determined to be the target camera whose external parameters are to be calibrated.
Step S803: acquiring the camera identifier of the front-view camera whose external parameters are to be calibrated.
In an actual scene, a user may input the identifier corresponding to the target camera to be calibrated through another device, such as a diagnostic instrument external to the vehicle-mounted system, which is not limited in the present disclosure.
Step S804: obtaining the target external parameters of the front-view camera based on the scheme of the present disclosure.
Step S805: verifying the target external parameters of the front-view camera, and judging whether the calibration is successful; if the calibration is judged to be successful, executing step S806; otherwise, executing step S807.
Step S806: outputting the target external parameters of the front-view camera;
Step S807: returning an error code.
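Steps S804 through S807 amount to the following control flow; `optimize_extrinsic`, `verify`, and the error code are hypothetical stand-ins for the disclosure's optimization step, verification step, and returned error:

```python
def calibrate_target_camera(optimize_extrinsic, verify, error_code="CALIB_FAILED"):
    """Control flow of steps S804-S807: optimize the target camera's
    extrinsic, verify it, and either output it or return an error code."""
    extrinsic = optimize_extrinsic()          # step S804
    if verify(extrinsic):                     # step S805
        return extrinsic                      # step S806
    return error_code                         # step S807
```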
Fig. 10 (a) and fig. 10 (b) show a comparison of effects in a real scene, which is sufficient to illustrate that the scheme of the present disclosure can effectively calibrate the target acquisition component to be calibrated, and that the panoramic image obtained after calibration meets the expected requirements.
Based on this, the disclosed solution has the following advantages:
firstly, the calibration cost is effectively reduced; according to the scheme, expensive equipment such as a centering device, a total station and a high-precision calibration plate is not required, so that the calibration cost is reduced; meanwhile, the maintenance cost is reduced.
Secondly, convenience is provided; because the scheme of the present disclosure reduces the requirements of camera external parameter calibration on the field, the external parameters of the camera can be calibrated more conveniently in an after-sales scene without returning to the production line.
Thirdly, the precision is high; according to the scheme of the present disclosure, the external parameters of the target camera are calibrated by using at least two cameras, and a high-precision calibration result can be obtained.
Fourth, the popularization and development of parking assistance systems and automatic driving systems are promoted. The scheme of the present disclosure avoids the poor after-sales user experience of parking assistance systems and automatic driving systems caused by currently insufficient calibration capability; therefore, the scheme of the present disclosure effectively promotes the popularization and development of parking assistance systems and automatic driving systems.
The present disclosure also provides a parameter processing apparatus, as shown in fig. 11, including:
an obtaining unit 1101, configured to determine a target acquisition component to be processed, where the target acquisition component is one of a plurality of preset acquisition components;
the processing unit 1102 is configured to determine at least two first acquisition components that are required to be used for calibrating external parameters of the target acquisition component from the plurality of preset acquisition components; obtaining luminosity error information corresponding to a first acquisition component and a target acquisition component at least based on a first external parameter of the first acquisition component in the at least two first acquisition components and a current external parameter of the target acquisition component, wherein the luminosity error information is obtained based on a difference value of illumination intensities of a common view area of the target acquisition component and the first acquisition component; and obtaining the external target parameters of the target acquisition assembly based on the photometric error information corresponding to the first acquisition assembly and the target acquisition assembly.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
taking a preset acquisition component of two preset acquisition components with common view areas with the target acquisition component as the first acquisition component under the condition that the following conditions are met:
the external parameters of at least one preset acquisition assembly of the two preset acquisition assemblies with the common view area with the target acquisition assembly meet the first preset external parameter condition.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
under the condition that the external parameters of each preset acquisition assembly in two preset acquisition assemblies with a common view area with the target acquisition assembly do not meet the first preset external parameter condition, selecting a first acquisition assembly with the external parameters meeting the first preset external parameter condition from the plurality of preset acquisition assemblies;
and selecting at least a first acquisition component with a common view area with the target acquisition component from the rest preset acquisition components except the selected first acquisition components.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
selecting at least one first acquisition component from the remaining preset acquisition components except the selected first acquisition component under the condition that the following condition is met:
the first acquisition component has a common view area with the selected first acquisition component, and has a common view area with the target acquisition component.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
and under the condition that at least one preset acquisition component with external parameters meeting first preset external parameter conditions exists in the plurality of preset acquisition components, taking each preset acquisition component of all preset acquisition components except the target acquisition component as a first acquisition component required for calibrating the target acquisition component.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
and under the condition that the first external parameters of the first acquisition component meeting the first preset external parameter conditions are fixed, minimizing luminosity error information corresponding to the first acquisition component and the target acquisition component by at least adjusting the current external parameters of the target acquisition component so as to obtain the target external parameters of the target acquisition component.
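A toy sketch of this fixed-first, adjust-target minimization; reducing the target extrinsic to a single scalar offset and replacing the optimizer with a candidate scan are illustrative simplifications not made by the disclosure:

```python
def photometric_error(extrinsic_yaw, sample_us, intensity_first, intensity_target):
    """Sum of squared illumination-intensity differences over the
    common-view samples: the first camera's intensities are fixed, while
    the target camera's intensities depend on its current extrinsic
    (here reduced to a single yaw-like offset for illustration)."""
    diffs = [intensity_target(u, extrinsic_yaw) - intensity_first(u)
             for u in sample_us]
    return sum(d * d for d in diffs)

def calibrate_by_scan(sample_us, intensity_first, intensity_target, candidates):
    """Fix the first camera's extrinsic and adjust only the target
    camera's, keeping whichever candidate minimizes the photometric
    error (a stand-in for a gradient-based optimizer)."""
    return min(candidates,
               key=lambda yaw: photometric_error(
                   yaw, sample_us, intensity_first, intensity_target))
```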
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
Obtaining the illumination intensity of a first pixel point in the first pixel set under the looking-around coordinate system based on at least a first external parameter of the first acquisition component; the first pixel set is obtained by sampling pixel points in a first image and comprises a plurality of first pixel points; the first image is an image which is acquired by the first acquisition component and corresponds to the common view area; the look-around coordinate system is different from an original coordinate system of the first image;
Obtaining the illumination intensity of a target pixel point in a target pixel set under the looking-around coordinate system at least based on the current external parameters of the target acquisition component; the target pixel set is obtained by sampling pixel points in a target image and comprises a plurality of target pixel points; the target image is an image which is acquired by the target acquisition component and corresponds to the common view area; the look-around coordinate system is different from an original coordinate system of the target image;
and obtaining luminosity error information corresponding to the first acquisition component and the target acquisition component based on the illumination intensity of the first pixel point under the looking-around coordinate system and the illumination intensity of the target pixel point under the looking-around coordinate system.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
acquiring coordinate information of a first pixel point in the first pixel set under a looking-around coordinate system based on a first external parameter, a first internal parameter and a first coordinate conversion parameter of a first acquisition component; the first coordinate conversion parameter represents an association relationship between an original coordinate system of the first image and the looking-around coordinate system;
And obtaining the illumination intensity of the first pixel point based on the coordinate information of the first pixel point under the looking-around coordinate system.
In a specific example of the solution of the present disclosure, the processing unit 1102 is specifically configured to:
obtaining coordinate information of a target pixel point in the target pixel set under a looking-around coordinate system based on the current external parameter, the target internal parameter and the second coordinate conversion parameter of the target acquisition component; the second coordinate conversion parameter represents the association relation between the original coordinate system of the target image and the looking-around coordinate system;
and obtaining the illumination intensity of the target pixel point based on the coordinate information of the target pixel point under the looking-around coordinate system.
In a specific example of the solution of the present disclosure, the apparatus further comprises a parameter detection unit, wherein,
the parameter detection unit is used for verifying the target external parameter of the target acquisition component, and outputting the target external parameter of the target acquisition component under the condition that the target external parameter of the target acquisition component meets a second preset external parameter condition.
In a specific example of the solution of the present disclosure, the parameter detection unit is specifically configured to:
Comparing the target external parameters of the target acquisition component with preset parameter values of the target acquisition component; wherein the second preset external parameter condition indicates that a difference value between a target external parameter of the target acquisition component and a preset parameter value of the target acquisition component is smaller than a first preset threshold value;
and/or the number of the groups of groups,
obtaining a re-projection error of a target pixel point of the target image and a first pixel point of the first image under the condition that the external parameter of the target acquisition component is the target external parameter; comparing the re-projection error of the target pixel point of the target image and the first pixel point of the first image with a second preset threshold; wherein the second preset external parameter condition indicates that the reprojection error is smaller than a second preset threshold; the first image is an image which is acquired by the first acquisition component and corresponds to the common view area; the target image is an image which is acquired by the target acquisition component and corresponds to the common view area.
Descriptions of specific functions and examples of each unit of the apparatus in the embodiments of the present disclosure may refer to related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
In the technical solution of the present disclosure, the acquisition, storage, application, and the like of any user personal information involved all conform to the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 12 shows a schematic block diagram of an example electronic device 1200 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the apparatus 1200 includes a computing unit 1201, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other via a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
Various components in device 1200 are connected to I/O interface 1205, including: an input unit 1206 such as a keyboard, mouse, etc.; an output unit 1207 such as various types of displays, speakers, and the like; a storage unit 1208 such as a magnetic disk, an optical disk, or the like; and a communication unit 1209, such as a network card, modem, wireless communication transceiver, etc. The communication unit 1209 allows the device 1200 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1201 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The computing unit 1201 performs the various methods and processes described above, such as the parameter processing method. For example, in some embodiments, the parameter processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1200 via ROM 1202 and/or communication unit 1209. When a computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the parameter processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the parameter processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (24)

1. A parameter processing method, comprising:
determining a target acquisition component to be processed, wherein the target acquisition component is one of a plurality of preset acquisition components;
determining at least two first acquisition assemblies required to be used for calibrating external parameters of the target acquisition assembly from the plurality of preset acquisition assemblies; at least one of the at least two first acquisition components has a common view area with the target acquisition component; the external parameter of at least one other of the at least two first acquisition components meets a first preset external parameter condition;
Obtaining luminosity error information corresponding to a first acquisition component and a target acquisition component at least based on a first external parameter of the first acquisition component in the at least two first acquisition components and a current external parameter of the target acquisition component, wherein the luminosity error information is obtained based on a difference value of illumination intensities of a common view area of the target acquisition component and the first acquisition component;
and obtaining the target external parameters of the target acquisition component based on the luminosity error information corresponding to the first acquisition component and the target acquisition component.
2. The method of claim 1, wherein the determining at least two first acquisition components from the plurality of preset acquisition components that are needed for calibrating the external parameters of the target acquisition component comprises:
taking a preset acquisition component of two preset acquisition components with common view areas with the target acquisition component as the first acquisition component under the condition that the following conditions are met:
the external parameters of at least one preset acquisition assembly of the two preset acquisition assemblies with the common view area with the target acquisition assembly meet the first preset external parameter condition.
3. The method of claim 1, wherein the determining at least two first acquisition components from the plurality of preset acquisition components that are needed for calibrating the external parameters of the target acquisition component comprises:
under the condition that the external parameters of each preset acquisition assembly in two preset acquisition assemblies with a common view area with the target acquisition assembly do not meet the first preset external parameter condition, selecting a first acquisition assembly with the external parameters meeting the first preset external parameter condition from the plurality of preset acquisition assemblies;
and selecting at least a first acquisition component with a common view area with the target acquisition component from the rest preset acquisition components except the selected first acquisition components.
4. A method according to claim 3, wherein said selecting at least a first acquisition component having a common view area with the target acquisition component from the remaining preset acquisition components other than the selected first acquisition component comprises:
selecting at least one first acquisition component from the remaining preset acquisition components except the selected first acquisition component under the condition that the following conditions are met:
the first acquisition component has a common view area with the selected first acquisition component, and has a common view area with the target acquisition component.
5. A method according to claim 3, wherein said determining from said plurality of preset acquisition components at least two first acquisition components for use in calibrating external parameters of said target acquisition component comprises:
and under the condition that at least one preset acquisition component with external parameters meeting first preset external parameter conditions exists in the plurality of preset acquisition components, taking each preset acquisition component of all preset acquisition components except the target acquisition component as a first acquisition component required for calibrating the target acquisition component.
6. The method according to any one of claims 2 to 5, wherein,
the obtaining the target external parameters of the target acquisition component based on the luminosity error information corresponding to the first acquisition component and the target acquisition component comprises:
and under the condition that the first external parameters of the first acquisition component meeting the first preset external parameter conditions are fixed, minimizing luminosity error information corresponding to the first acquisition component and the target acquisition component by at least adjusting the current external parameters of the target acquisition component so as to obtain the target external parameters of the target acquisition component.
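The minimization step in claim 6 can be sketched as a small numerical optimizer: the photometric error is treated as a function of the target component's extrinsic parameters only, with the first component's extrinsic held fixed inside the error function. This is an illustrative sketch; the claim does not prescribe any particular optimization method, and the parameterization and step sizes here are assumptions.

```python
import numpy as np

def calibrate_target_extrinsic(photometric_error, x0, lr=0.1, iters=200, eps=1e-5):
    """Minimize the photometric error over the target component's current
    extrinsic parameter vector by numerical gradient descent. The first
    component's (fixed) extrinsic is assumed to be baked into
    `photometric_error`. Illustrative only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = np.empty_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = eps
            # central-difference estimate of d(error)/dx_i
            grad[i] = (photometric_error(x + step) - photometric_error(x - step)) / (2 * eps)
        x -= lr * grad
    return x
```

In practice a second-order solver (e.g. Gauss-Newton) over an SE(3) parameterization would be used; the scalar gradient descent above only illustrates the fixed-reference / adjustable-target structure of the claim.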
7. The method according to any one of claims 1-5, wherein the obtaining photometric error information corresponding to a first acquisition component of the at least two first acquisition components and the target acquisition component based at least on the first extrinsic parameter of the first acquisition component and the current extrinsic parameter of the target acquisition component comprises:
obtaining the illumination intensity of a first pixel point in a first pixel set under a looking-around coordinate system based at least on the first external parameter of the first acquisition component; wherein the first pixel set is obtained by sampling pixel points in a first image and comprises a plurality of first pixel points; the first image is an image, acquired by the first acquisition component, corresponding to the common view area; and the looking-around coordinate system is different from an original coordinate system of the first image;
obtaining the illumination intensity of a target pixel point in a target pixel set under the looking-around coordinate system based at least on the current external parameter of the target acquisition component; wherein the target pixel set is obtained by sampling pixel points in a target image and comprises a plurality of target pixel points; the target image is an image, acquired by the target acquisition component, corresponding to the common view area; and the looking-around coordinate system is different from an original coordinate system of the target image;
and obtaining the photometric error information corresponding to the first acquisition component and the target acquisition component based on the illumination intensity of the first pixel point under the looking-around coordinate system and the illumination intensity of the target pixel point under the looking-around coordinate system.
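Once both pixel sets have been resampled into the looking-around coordinate system, the photometric error of claim 7 reduces to comparing intensities at corresponding sample points. A mean squared intensity difference is one plausible form; the claim only requires an error built from intensity differences over the common view area, so the exact aggregation below is an assumption.

```python
import numpy as np

def photometric_error_info(first_intensities, target_intensities):
    """Photometric error between illumination intensities of corresponding
    sample points from the first image and the target image of the common
    view area, expressed as a mean squared difference (illustrative form)."""
    first = np.asarray(first_intensities, dtype=float)
    target = np.asarray(target_intensities, dtype=float)
    return float(np.mean((first - target) ** 2))
```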
8. The method of claim 7, wherein the obtaining the illumination intensity of the first pixel point in the first pixel set under the looking-around coordinate system based at least on the first external parameter of the first acquisition component comprises:
obtaining coordinate information of the first pixel point in the first pixel set under the looking-around coordinate system based on the first external parameter, a first internal parameter and a first coordinate conversion parameter of the first acquisition component; wherein the first coordinate conversion parameter represents an association relationship between the original coordinate system of the first image and the looking-around coordinate system;
and obtaining the illumination intensity of the first pixel point based on the coordinate information of the first pixel point under the looking-around coordinate system.
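The coordinate step of claim 8 can be sketched as back-projecting an image pixel into the looking-around coordinate system using the internal parameter (intrinsic matrix) and external parameter (camera-to-looking-around transform). Treating the first coordinate conversion parameter as a ground-plane constraint, as typical for surround-view systems, is an assumption of this sketch, not something the claim specifies.

```python
import numpy as np

def looking_around_coords(pixel_uv, intrinsic, extrinsic, ground_z=0.0):
    """Map a first-image pixel into the looking-around coordinate system
    by back-projecting its viewing ray onto the plane z = ground_z.
    `intrinsic` is a 3x3 camera matrix; `extrinsic` is a 4x4
    camera-to-looking-around transform. Illustrative only."""
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    ray_cam = np.linalg.inv(intrinsic) @ uv1       # viewing ray in camera frame
    rotation, translation = extrinsic[:3, :3], extrinsic[:3, 3]
    origin = translation                            # camera centre in looking-around frame
    direction = rotation @ ray_cam
    scale = (ground_z - origin[2]) / direction[2]   # intersect the ray with z = ground_z
    return origin + scale * direction
```

The illumination intensity is then read from the image at the pixel whose looking-around coordinates match, typically with bilinear interpolation for sub-pixel positions.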
9. The method of claim 7, wherein the obtaining the illumination intensity of the target pixel point in the target pixel set under the looking-around coordinate system based at least on the current external parameter of the target acquisition component comprises:
obtaining coordinate information of the target pixel point in the target pixel set under the looking-around coordinate system based on the current external parameter, a target internal parameter and a second coordinate conversion parameter of the target acquisition component; wherein the second coordinate conversion parameter represents an association relationship between the original coordinate system of the target image and the looking-around coordinate system;
and obtaining the illumination intensity of the target pixel point based on the coordinate information of the target pixel point under the looking-around coordinate system.
10. The method of any of claims 1-5, further comprising:
verifying the target external parameter of the target acquisition component, and outputting the target external parameter of the target acquisition component under the condition that the target external parameter of the target acquisition component meets a second preset external parameter condition.
11. The method of claim 10, wherein the verifying the target external parameter of the target acquisition component comprises:
comparing the target external parameter of the target acquisition component with a preset parameter value of the target acquisition component; wherein the second preset external parameter condition indicates that a difference value between the target external parameter of the target acquisition component and the preset parameter value of the target acquisition component is smaller than a first preset threshold;
and/or,
under the condition that the external parameter of the target acquisition component is the target external parameter, obtaining a reprojection error between a target pixel point of the target image and a first pixel point of the first image, and comparing the reprojection error with a second preset threshold; wherein the second preset external parameter condition indicates that the reprojection error is smaller than the second preset threshold; the first image is an image, acquired by the first acquisition component, corresponding to the common view area; and the target image is an image, acquired by the target acquisition component, corresponding to the common view area.
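The verification of claim 11 can be sketched as two threshold checks. The combination rule below (difference check always, reprojection check when supplied) is one reading of the claim's "and/or"; the thresholds and the max-absolute-difference metric are placeholder assumptions.

```python
import numpy as np

def meets_second_condition(target_extrinsic, preset_value, first_threshold,
                           reprojection_error=None, second_threshold=None):
    """Check the second preset external parameter condition: the difference
    between the calibrated target external parameter and its preset value
    must be below the first threshold, and, when a reprojection error is
    supplied, that error must be below the second threshold. Illustrative."""
    diff = np.max(np.abs(np.asarray(target_extrinsic, dtype=float)
                         - np.asarray(preset_value, dtype=float)))
    ok = diff < first_threshold
    if reprojection_error is not None and second_threshold is not None:
        ok = ok and reprojection_error < second_threshold
    return bool(ok)
```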
12. A parameter processing apparatus comprising:
the acquisition unit is used for determining a target acquisition component to be processed, wherein the target acquisition component is one of a plurality of preset acquisition components;
the processing unit is used for determining, from the plurality of preset acquisition components, at least two first acquisition components required for calibrating the external parameter of the target acquisition component; wherein at least one of the at least two first acquisition components has a common view area with the target acquisition component, and the external parameter of at least one other of the at least two first acquisition components meets a first preset external parameter condition; obtaining photometric error information corresponding to a first acquisition component of the at least two first acquisition components and the target acquisition component based at least on a first external parameter of the first acquisition component and a current external parameter of the target acquisition component, wherein the photometric error information is obtained based on a difference value of illumination intensities in a common view area of the target acquisition component and the first acquisition component; and obtaining the target external parameter of the target acquisition component based on the photometric error information corresponding to the first acquisition component and the target acquisition component.
13. The apparatus of claim 12, wherein the processing unit is specifically configured to:
taking a preset acquisition component of the two preset acquisition components having a common view area with the target acquisition component as the first acquisition component under the condition that the following condition is met:
the external parameter of at least one of the two preset acquisition components having a common view area with the target acquisition component meets the first preset external parameter condition.
14. The apparatus of claim 12, wherein the processing unit is specifically configured to:
under the condition that the external parameter of each of the two preset acquisition components having a common view area with the target acquisition component does not meet the first preset external parameter condition, selecting, from the plurality of preset acquisition components, a first acquisition component whose external parameter meets the first preset external parameter condition;
and selecting at least one first acquisition component having a common view area with the target acquisition component from the remaining preset acquisition components other than the selected first acquisition component.
15. The apparatus of claim 14, wherein the processing unit is specifically configured to:
selecting at least one first acquisition component from the remaining preset acquisition components other than the selected first acquisition component under the condition that the following condition is met:
the first acquisition component has a common view area with the selected first acquisition component and also has a common view area with the target acquisition component.
16. The apparatus of claim 14, wherein the processing unit is specifically configured to:
under the condition that at least one preset acquisition component whose external parameter meets the first preset external parameter condition exists among the plurality of preset acquisition components, taking each preset acquisition component other than the target acquisition component as a first acquisition component required for calibrating the target acquisition component.
17. The apparatus according to any of claims 13-16, wherein the processing unit is specifically configured to:
under the condition that the first external parameter of the first acquisition component meeting the first preset external parameter condition is kept fixed, minimizing the photometric error information corresponding to the first acquisition component and the target acquisition component by at least adjusting the current external parameter of the target acquisition component, so as to obtain the target external parameter of the target acquisition component.
18. The apparatus according to any of claims 12-16, wherein the processing unit is specifically configured to:
obtaining the illumination intensity of a first pixel point in a first pixel set under a looking-around coordinate system based at least on the first external parameter of the first acquisition component; wherein the first pixel set is obtained by sampling pixel points in a first image and comprises a plurality of first pixel points; the first image is an image, acquired by the first acquisition component, corresponding to the common view area; and the looking-around coordinate system is different from an original coordinate system of the first image;
obtaining the illumination intensity of a target pixel point in a target pixel set under the looking-around coordinate system based at least on the current external parameter of the target acquisition component; wherein the target pixel set is obtained by sampling pixel points in a target image and comprises a plurality of target pixel points; the target image is an image, acquired by the target acquisition component, corresponding to the common view area; and the looking-around coordinate system is different from an original coordinate system of the target image;
and obtaining the photometric error information corresponding to the first acquisition component and the target acquisition component based on the illumination intensity of the first pixel point under the looking-around coordinate system and the illumination intensity of the target pixel point under the looking-around coordinate system.
19. The apparatus of claim 18, wherein the processing unit is specifically configured to:
obtaining coordinate information of the first pixel point in the first pixel set under the looking-around coordinate system based on the first external parameter, a first internal parameter and a first coordinate conversion parameter of the first acquisition component; wherein the first coordinate conversion parameter represents an association relationship between the original coordinate system of the first image and the looking-around coordinate system;
and obtaining the illumination intensity of the first pixel point based on the coordinate information of the first pixel point under the looking-around coordinate system.
20. The apparatus of claim 18, wherein the processing unit is specifically configured to:
obtaining coordinate information of the target pixel point in the target pixel set under the looking-around coordinate system based on the current external parameter, a target internal parameter and a second coordinate conversion parameter of the target acquisition component; wherein the second coordinate conversion parameter represents an association relationship between the original coordinate system of the target image and the looking-around coordinate system;
and obtaining the illumination intensity of the target pixel point based on the coordinate information of the target pixel point under the looking-around coordinate system.
21. The apparatus of any of claims 12-16, further comprising: a parameter detection unit, wherein,
the parameter detection unit is used for verifying the target external parameter of the target acquisition component, and outputting the target external parameter of the target acquisition component under the condition that the target external parameter of the target acquisition component meets a second preset external parameter condition.
22. The apparatus of claim 21, wherein the parameter detection unit is specifically configured to:
comparing the target external parameter of the target acquisition component with a preset parameter value of the target acquisition component; wherein the second preset external parameter condition indicates that a difference value between the target external parameter of the target acquisition component and the preset parameter value of the target acquisition component is smaller than a first preset threshold;
and/or,
under the condition that the external parameter of the target acquisition component is the target external parameter, obtaining a reprojection error between a target pixel point of the target image and a first pixel point of the first image, and comparing the reprojection error with a second preset threshold; wherein the second preset external parameter condition indicates that the reprojection error is smaller than the second preset threshold; the first image is an image, acquired by the first acquisition component, corresponding to the common view area; and the target image is an image, acquired by the target acquisition component, corresponding to the common view area.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-11.
CN202310342111.XA 2023-03-31 2023-03-31 Parameter processing method, device, equipment and storage medium Active CN116485906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310342111.XA CN116485906B (en) 2023-03-31 2023-03-31 Parameter processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310342111.XA CN116485906B (en) 2023-03-31 2023-03-31 Parameter processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116485906A CN116485906A (en) 2023-07-25
CN116485906B true CN116485906B (en) 2024-04-12

Family

ID=87226060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310342111.XA Active CN116485906B (en) 2023-03-31 2023-03-31 Parameter processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116485906B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112203077A (en) * 2020-08-21 2021-01-08 中国科学院西安光学精密机械研究所 Colorful glimmer multi-view stereoscopic vision camera and data fusion method thereof
CN115546312A (en) * 2022-09-29 2022-12-30 上海汽车集团股份有限公司 Method and device for correcting external parameters of camera
CN115639697A (en) * 2022-11-04 2023-01-24 孔岳 Relative illumination parameter-based light transmission assembly adjusting system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11587260B2 (en) * 2020-10-05 2023-02-21 Zebra Technologies Corporation Method and apparatus for in-field stereo calibration
US11688090B2 (en) * 2021-03-16 2023-06-27 Toyota Research Institute, Inc. Shared median-scaling metric for multi-camera self-supervised depth evaluation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an improved camera calibration method based on coplanar points; Liu Yanghao; Xie Linbo; Computer Engineering (Issue 08); full text *

Also Published As

Publication number Publication date
CN116485906A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
EP3869399A2 (en) Vehicle information detection method and apparatus, electronic device, storage medium and program
CN110793544B (en) Method, device and equipment for calibrating parameters of roadside sensing sensor and storage medium
EP3627109A1 (en) Visual positioning method and apparatus, electronic device and system
CN109855568B (en) Method and device for detecting automatic driving sensor, electronic equipment and storage medium
WO2018120040A1 (en) Obstacle detection method and device
CN112967344B (en) Method, device, storage medium and program product for calibrating camera external parameters
EP3621041B1 (en) Three-dimensional representation generating system
CN112991459B (en) Camera calibration method, device, equipment and storage medium
CN114663397B (en) Method, device, equipment and storage medium for detecting drivable area
KR20210040849A (en) Three-dimensional object detection method and device, electronic equipment and readable storage medium
CN116193108B (en) Online self-calibration method, device, equipment and medium for camera
CN114663529B (en) External parameter determining method and device, electronic equipment and storage medium
EP3800443B1 (en) Database construction method, positioning method and relevant device therefor
CN111612851B (en) Method, apparatus, device and storage medium for calibrating camera
CN116485906B (en) Parameter processing method, device, equipment and storage medium
US10462376B2 (en) Exposure method in panoramic photo shooting and apparatus
CN116030139A (en) Camera detection method and device, electronic equipment and vehicle
CN116245730A (en) Image stitching method, device, equipment and storage medium
KR20180097004A (en) Method of position calculation between radar target lists and vision image ROI
CN112241675A (en) Object detection model training method and device
CN113221999B (en) Picture annotation accuracy obtaining method and device and electronic equipment
JP7425169B2 (en) Image processing method, device, electronic device, storage medium and computer program
CN117351450B (en) Monocular 3D detection method and device, electronic equipment and storage medium
CN113359669B (en) Method, device, electronic equipment and medium for generating test data
CN115660959B (en) Image generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant