CN115082573B - Parameter calibration method and device, vehicle and storage medium - Google Patents

Parameter calibration method and device, vehicle and storage medium

Info

Publication number
CN115082573B
Authority
CN
China
Prior art keywords
image
preset
vehicle
image acquisition
scene
Prior art date
Legal status
Active
Application number
CN202211001526.2A
Other languages
Chinese (zh)
Other versions
CN115082573A
Inventor
李志伟
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202211001526.2A
Publication of CN115082573A
Application granted
Publication of CN115082573B
Anticipated expiration

Classifications

    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general; G06T 7/00 — Image analysis)
    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images (G06T 3/00 — Geometric image transformations in the plane of the image; G06T 3/40 — Scaling of whole images or parts thereof)
    • G06T 2207/10004 — Still image; photographic image (G06T 2207/00 — Indexing scheme for image analysis or image enhancement; G06T 2207/10 — Image acquisition modality)
    • G06T 2207/30248 — Vehicle exterior or interior (G06T 2207/30 — Subject of image; context of image processing)
    • Y02T 10/40 — Engine management systems (Y02T — Climate change mitigation technologies related to transportation; Y02T 10/10 — Internal combustion engine [ICE] based vehicles)


Abstract

The disclosure relates to a parameter calibration method and device, a vehicle, and a storage medium. Current scene images respectively acquired by a plurality of image acquisition devices on the vehicle are acquired. When the current scene where the vehicle is located is determined, according to the current scene images, to be a preset scene (a scene in which parameter calibration of the plurality of image acquisition devices is required), the overlapping area of the images acquired by every two adjacent image acquisition devices is determined according to the current scene images. Image recognition is then performed on the overlapping area to obtain a target recognition result, and the external parameter of each image acquisition device is calibrated according to the target recognition result to obtain the target external parameter corresponding to each image acquisition device.

Description

Parameter calibration method and device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of parameter calibration of image capturing devices on vehicles, and in particular, to a parameter calibration method and apparatus, a vehicle, and a storage medium.
Background
The reversing image is an increasingly popular vehicle function. Early parking imaging systems typically used a single rear-facing camera that displayed an image of the area behind the vehicle to the driver while reversing, so that the driver could view the surrounding environment. In the currently common scheme, a plurality of cameras are arranged around the periphery of the vehicle, and the images they acquire are stitched, through parameter calibration, into a top view, which makes it easier for the driver to observe the surroundings; the cameras may also be used in an automatic driving algorithm to identify parking-space positions and/or obstacle positions.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a parameter calibration method, apparatus, vehicle and storage medium.
According to a first aspect of the embodiments of the present disclosure, a parameter calibration method is provided, including:
acquiring current scene images respectively acquired by a plurality of image acquisition devices on a vehicle;
under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image, determining an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices;
carrying out image recognition on the overlapped area to obtain a target recognition result;
and calibrating the external parameters of each image acquisition device according to the target identification result to obtain the target external parameters respectively corresponding to each image acquisition device.
Optionally, determining, according to the current scene image, whether a current scene where the vehicle is located is a preset scene includes:
performing image recognition on the current scene images respectively collected by the plurality of image collecting devices to obtain image recognition results;
and under the condition that a preset road surface marking line exists on the road surface where the vehicle is located currently according to the image recognition result, determining that the current scene is the preset scene.
Optionally, the determining, according to the current scene image, an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices includes:
performing image splicing on the current scene image respectively acquired by each image acquisition device according to the current external parameters respectively corresponding to the plurality of image acquisition devices to obtain a first spliced image, wherein an overlapping area of the images acquired by every two adjacent image acquisition devices is reserved in the first spliced image;
and taking an image area corresponding to a preset area in the first spliced image as the overlapping area.
Optionally, the target recognition result includes position information of a preset road marking line included in the overlapping region, and the performing image recognition on the overlapping region to obtain the target recognition result includes:
performing image recognition on an overlapping area of images acquired by every two adjacent image acquisition devices to obtain position information of a preset pavement marking line contained in the overlapping area, wherein the position information comprises a first position of each pixel point of the preset pavement marking line in a first image, a second position of each pixel point of the preset pavement marking line in a second image, and a third position of each pixel point of the preset pavement marking line in a first spliced image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
Optionally, the performing parameter calibration on external parameters of the plurality of image acquisition devices according to the target recognition result includes:
determining the corresponding relation between the first image and the second image according to the first position, the second position and the third position; the corresponding relation is used for representing the corresponding relation of two target pavement marking lines belonging to the same preset pavement marking line on the first image and the second image;
aiming at every two adjacent image acquisition devices, establishing an optimization model by taking the minimum distance between two target pavement marking lines corresponding to the two adjacent image acquisition devices as an optimization target and two external parameters of the two adjacent image acquisition devices as variables to be optimized;
and solving the optimization model to obtain target external parameters respectively corresponding to each image acquisition device.
Optionally, the method further comprises:
for each image acquisition device, determining a difference value between the target external parameter of the image acquisition device and the current external parameter of the image acquisition device;
and recording the current calibration result to a preset database under the condition that the difference is greater than or equal to a first preset difference threshold.
Optionally, the current calibration result includes the target external parameter; after the current calibration result is recorded in the preset database, the method further comprises:
after the preset number of the target external parameters are stored in the preset database, if the difference value between any two target external parameters in the preset number of the target external parameters is smaller than or equal to a second preset difference value threshold, determining the calibrated external parameters respectively corresponding to each image acquisition device according to the preset number of the target external parameters.
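A minimal sketch of the recording and consensus logic just described. The disclosure only says the calibrated extrinsic is determined "according to" the stored target external parameters; the averaging below is an assumption, and an extrinsic is reduced to a single scalar for brevity. All names are illustrative.

```python
def should_record(current_extrinsic, target_extrinsic, record_threshold):
    """Record a calibration result only if it differs enough from the
    currently used extrinsic (the first preset difference threshold)."""
    return abs(target_extrinsic - current_extrinsic) >= record_threshold

def calibrated_extrinsic(stored, agree_threshold, preset_count):
    """Once the preset number of target extrinsics is stored and every
    pair agrees within the second preset difference threshold, derive
    the final calibrated value (here, as an assumption: their mean)."""
    if len(stored) < preset_count:
        return None
    if any(abs(a - b) > agree_threshold for a in stored for b in stored):
        return None
    return sum(stored) / len(stored)

print(should_record(1.00, 1.04, record_threshold=0.02))  # True: change is significant
print(calibrated_extrinsic([1.00, 1.01, 0.99], agree_threshold=0.05, preset_count=3))
```

With the tighter threshold 0.005 the three stored values no longer agree pairwise and `calibrated_extrinsic` returns `None`, i.e. no calibrated value is adopted.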
Optionally, the method further comprises:
and performing image splicing on the current scene image respectively acquired by each image acquisition device according to the calibrated external parameters respectively corresponding to each image acquisition device to obtain a second spliced image corresponding to the current scene.
Optionally, the current calibration result comprises a first vehicle position of the vehicle; after the current calibration result is recorded in the preset database, the method further comprises:
performing image recognition on the current scene images respectively acquired by the plurality of image acquisition devices at the current moment to obtain image recognition results;
acquiring a second vehicle position where the vehicle is located at the current moment;
and under the condition that a preset road marking line exists on the road where the vehicle is located currently according to the image recognition result, and the distance between the first vehicle position and the second vehicle position is larger than or equal to a preset distance threshold value, determining that the current scene is the preset scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a parameter calibration apparatus, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is configured to acquire current scene images acquired by a plurality of image acquisition devices on a vehicle respectively;
the determining module is configured to determine an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices;
the image recognition module is configured to perform image recognition on the overlapping area to obtain a target recognition result;
and the parameter calibration module is configured to perform parameter calibration on the external parameter of each image acquisition device according to the target identification result to obtain the target external parameter corresponding to each image acquisition device.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring current scene images respectively acquired by a plurality of image acquisition devices on a vehicle;
under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image, determining the overlapping area of the images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices;
carrying out image recognition on the overlapped area to obtain a target recognition result;
and calibrating the external parameters of each image acquisition device according to the target identification result to obtain the target external parameters respectively corresponding to each image acquisition device.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the parameter calibration method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: current scene images respectively acquired by a plurality of image acquisition devices on a vehicle can be acquired while the vehicle is driving or powered on and parked; when the current scene of the vehicle is determined from these images to be a preset scene, the overlapping area of the images acquired by every two adjacent image acquisition devices can be extracted, image recognition can be performed on the overlapping area, and the external parameter of each image acquisition device can then be calibrated according to the target recognition result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIGS. 1a and 1b are scene schematics of a top view after stitching;
FIG. 2 is a flow chart illustrating a method of parameter calibration according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a scenario for parameter calibration of external parameters of multiple image capture devices on a vehicle, in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating another method of parameter calibration in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating a parameter calibration arrangement according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating another parameter calibration arrangement in accordance with an exemplary embodiment;
FIG. 7 is a functional block diagram schematic of a vehicle shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
It should be noted that all the actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
The method is mainly applied to scenes in which the external parameters of a plurality of image acquisition devices (such as cameras) arranged in different directions of a vehicle are calibrated. Taking the cameras installed on a vehicle as an example: in the related art, the cameras are generally calibrated accurately before the vehicle leaves the factory, but during use of the vehicle, the external parameters of the cameras relative to the vehicle body or the top view change due to differing vehicle loads or possible loosening of the camera mounts. In an actual application scene, stitching the images of a plurality of cameras into a top view must be based on the external parameters of the respective cameras; if the external parameters are inaccurate, the stitched top view is inaccurate. For example, fig. 1a and 1b are scene schematic diagrams of the stitched top view. As shown in fig. 1a, the broken lines divide the whole image into 4 regions; the front, rear, left and right image regions come respectively from the scene images acquired by the front, rear, left and right cameras on the vehicle, and elements such as parking spaces, lane lines and ground marks around the vehicle can be clearly seen from the top view.
In order to solve the existing problems, the present disclosure provides a parameter calibration method, an apparatus, a vehicle, and a storage medium. The method may acquire current scene images respectively acquired by a plurality of image acquisition devices on the vehicle while the vehicle is driving or powered on and parked; when the current scene of the vehicle is determined from these images to be a preset scene, it may extract the overlapping area of the images acquired by every two adjacent image acquisition devices, perform image recognition on the overlapping area, and then calibrate the external parameter of each image acquisition device according to the target recognition result.
Specific embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 2 is a flow chart illustrating a parameter calibration method according to an exemplary embodiment, which may be applied to a vehicle provided with a plurality of image capturing devices, as shown in fig. 2, the method comprising the following steps.
In step S201, current scene images respectively captured by a plurality of image capturing devices on a vehicle are acquired.
The image acquisition device can be, for example, cameras arranged at different positions of the vehicle body, for example, one camera can be arranged on the vehicle body in four directions, namely, front, back, left and right directions, respectively, so that in the step, current scene images acquired by the four cameras arranged on the vehicle body in the front, back, left and right directions can be acquired respectively.
In addition, the current scene image here generally refers to a road surface image around the vehicle in the environment where the vehicle is currently located.
In step S202, in a case where the current scene where the vehicle is located is determined to be the preset scene according to the current scene image, an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices is determined according to the current scene image.
The preset scene is a preset scene that requires parameter calibration of the plurality of image acquisition devices. For example, if the preset scene is a parking lot, a vehicle usually needs to park there. In such a scene, images of the parking spaces on the road surface around the vehicle (i.e., the current scene images) can be acquired, and the road surface images acquired by cameras at different angles can then be stitched, based on the cameras' external parameters, into a top view of the current scene to be displayed to the driver, so that the driver can conveniently park according to the top view. In addition, in an automatic driving scene, a vehicle can control its automatic driving according to the lane lines of the current lane; this requires acquiring road surface images of the road where the vehicle is located, stitching the images from different directions based on the cameras' external parameters, and identifying the lane lines from the stitched image. Therefore, if the current scene where the vehicle is located is determined to be the preset scene, an external parameter calibration procedure for the plurality of image acquisition devices on the vehicle needs to be started, so that image stitching can subsequently be performed accurately based on the calibrated external parameters.
In this step, it may be determined whether the current scene where the vehicle is located is a preset scene according to the current scene image in the following manner:
performing image recognition on the current scene images respectively collected by the plurality of image collecting devices to obtain image recognition results; and under the condition that a preset road surface marking line exists on the road surface where the vehicle is located currently according to the image recognition result, determining that the current scene is the preset scene.
The predetermined pavement marking lines may be, for example, a lane marking line and/or a library marking line.
In the process of performing image recognition on the current scene image to obtain an image recognition result, the preset road surface marking line on the road surface may be extracted based on an image recognition technology provided in the related art (for example, image feature extraction using a machine learning algorithm), which is not limited by the present disclosure.
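A minimal sketch of the preset-scene check described above, assuming the image-recognition step returns, per camera, the preset road surface marking lines it found. All names and data shapes here are illustrative assumptions, not from the disclosure.

```python
def is_preset_scene(recognition_results):
    """The current scene is a preset scene (calibration required) if a
    preset road surface marking line, i.e. a lane line and/or a library
    bit line, is recognized in any camera's current scene image."""
    for result in recognition_results:
        if result.get("lane_lines") or result.get("slot_lines"):
            return True
    return False

# Hypothetical per-camera recognition results: each line is a pair of
# endpoint coordinates in that camera's image.
results = [
    {"lane_lines": [], "slot_lines": []},                    # front camera
    {"lane_lines": [((0, 10), (0, 90))], "slot_lines": []},  # left camera
]
print(is_preset_scene(results))  # True: a lane line was recognized
```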
In addition, as shown in fig. 1a, because the fields of view (FOVs) of two adjacent cameras overlap, the scene image acquired by the camera on the left side of the vehicle body (denoted P1) and the scene image acquired by the adjacent camera at the front of the vehicle body (denoted P2) share an overlapping region around the dotted line at the upper left corner of the top view. Each pixel point in a preset region at the upper left corner of the top view (i.e., a pixel point in the world coordinate system in which the top view lies) corresponds to two pixel points, L1 and F1, in the two camera coordinate systems: the pixel point L1 is the pixel point in the camera coordinate system of image P1 that corresponds to the target pixel point (any pixel point in the preset overlapping region at the upper left corner of the top view), and the pixel point F1 is the pixel point in the camera coordinate system of image P2 that corresponds to the same target pixel point. The preset region can be calculated in advance according to the FOVs respectively corresponding to the two adjacent cameras.
In this step, the overlapping area of the images acquired by each two adjacent image acquisition devices in the plurality of image acquisition devices can be determined according to the current scene image by the following method:
the method includes the steps that image splicing is conducted on current scene images acquired by each image acquisition device according to current external parameters corresponding to the image acquisition devices respectively to obtain first spliced images, overlapping areas of images acquired by every two adjacent image acquisition devices are reserved in the first spliced images, in other words, scene overlapping areas in the scene images acquired by every two adjacent cameras are reserved in the two images respectively, the scene images acquired by the two adjacent cameras are respectively P1 and P2, and in the first spliced images obtained after splicing, each pixel point corresponds to pixel points L1 and F1 in the overlapping areas corresponding to the P1 and the P2 on the first spliced images.
In this way, an image area corresponding to a preset area in the first stitched image may be used as the overlap area, where one of the preset areas may be, for example, a rectangular area with a dotted line as a diagonal line at the upper left corner of the first stitched image, and the specific size and shape of the preset area may be calculated in advance according to the view angles FOV of two adjacent cameras, which is not limited by the present disclosure.
In addition, every two adjacent cameras correspond to an overlapping area, and different camera combinations (one camera combination comprises two adjacent cameras) correspond to different overlapping areas.
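An illustrative sketch of the overlap extraction in step S202: once the first stitched image is obtained, the overlap of each adjacent camera pair is read out of a pre-computed rectangle. The region coordinates, pair names, and top-view size below are invented for illustration; in practice they are derived in advance from the adjacent cameras' FOVs.

```python
import numpy as np

def extract_overlap(stitched, region):
    """Crop the preset rectangle (y0, y1, x0, x1) out of the stitched top view."""
    y0, y1, x0, x1 = region
    return stitched[y0:y1, x0:x1]

# Hypothetical preset regions for a 400x400 top view with four cameras:
# each adjacent camera pair gets one corner of the top view.
OVERLAP_REGIONS = {
    ("left", "front"):  (0, 100, 0, 100),      # top-left corner
    ("front", "right"): (0, 100, 300, 400),    # top-right corner
    ("right", "rear"):  (300, 400, 300, 400),  # bottom-right corner
    ("rear", "left"):   (300, 400, 0, 100),    # bottom-left corner
}

stitched = np.zeros((400, 400, 3), dtype=np.uint8)  # stand-in first stitched image
patch = extract_overlap(stitched, OVERLAP_REGIONS[("left", "front")])
print(patch.shape)  # (100, 100, 3)
```

Each extracted patch is then passed to the image-recognition step of S203 to locate the preset road surface marking lines inside it.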
In step S203, image recognition is performed on the overlapping area, and a target recognition result is obtained.
The target recognition result includes position information of a preset pavement marking included in the overlap region, where the preset pavement marking includes a library bit line and/or a lane line.
In this step, image recognition may be performed on an overlapping region of images acquired by every two adjacent image acquisition devices to obtain position information of a preset pavement marking line included in the overlapping region, where the position information includes a first position of each pixel point of the preset pavement marking line in a first image, a second position of each pixel point of the preset pavement marking line in a second image, and a third position of each pixel point of the preset pavement marking line in the first stitched image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
For example, taking a scene image P1 acquired by a camera on the left side of a vehicle body and a scene image P2 acquired by an adjacent camera in front of the vehicle body as an example, an overlapping area of the scene image P1 and the scene image P2 is a preset area at the upper left corner of the first stitched image, the first image is the scene image P1, and the second image is the scene image P2, which is only an example here, and the disclosure does not limit this.
In step S204, parameter calibration is performed on the external parameter of each image acquisition device according to the target recognition result, so as to obtain a target external parameter corresponding to each image acquisition device.
In this step, the correspondence relationship between the first image and the second image may be determined according to the first position, the second position, and the third position; the corresponding relation is used for representing the corresponding relation of two target pavement marking lines which belong to the same preset pavement marking line on the first image and the second image; aiming at each two adjacent image acquisition devices, establishing an optimization model by taking the minimum distance between the two target pavement marking lines corresponding to the two adjacent image acquisition devices as an optimization target and taking two external parameters of the two adjacent image acquisition devices as variables to be optimized; and solving the optimization model to obtain target external parameters respectively corresponding to each image acquisition device.
For example, fig. 3 is a schematic view of a scene for calibrating external parameters of a plurality of image capturing devices on a vehicle according to an exemplary embodiment, as shown in fig. 3, a scene image P1 is a scene image captured by a camera (or referred to as a left-view camera) located on the left side of a vehicle body, a scene image P2 is a scene image captured by a camera (or referred to as a front-view camera) located on the front side of the vehicle body, a scene image P3 is a scene image captured by a camera (or referred to as a right-view camera) located on the right side of the vehicle body, and assuming that BEV is a top view obtained by splicing the three scene images, taking fig. 3 as an example, an external parameter optimization model as follows can be established:
Y = Σ_{j=1,2} d( f(K_F, T_F, p_j^F), f(K_L, T_L, p_j^L) ) + Σ_{k=3,4} d( f(K_F, T_F, p_k^F), f(K_R, T_R, p_k^R) )

wherein K_F represents the internal parameters of the front-view camera and T_F represents the external parameters of the front-view camera; K_L represents the internal parameters of the left-view camera and T_L represents the external parameters of the left-view camera; K_R represents the internal parameters of the right-view camera and T_R represents the external parameters of the right-view camera; d(·,·) represents the distance between two parking slot lines projected into the top view. The transformation from the camera coordinate system of the front-view camera to the top view BEV can be expressed as f(K_F, T_F, p_i^F), where p_i^F represents the coordinates of the i-th parking slot line (i = 1, 2, 3, 4 in the example shown in fig. 3) on the front view P2. The transformation from the camera coordinate system of the left-view camera to the top view BEV can be expressed as f(K_L, T_L, p_j^L), where p_j^L represents the coordinates of the j-th parking slot line (j = 1, 2 in the example shown in fig. 3) on the left view P1. The transformation from the camera coordinate system of the right-view camera to the top view BEV can be expressed as f(K_R, T_R, p_k^R), where p_k^R represents the coordinates of the k-th parking slot line (k = 3, 4 in the example shown in fig. 3) on the right view P3. Therefore, the optimization target may be to minimize the value of Y; after the optimization model is solved by a common nonlinear optimization method, the values of the external parameters T_F, T_L, and T_R at which Y is minimal are the calibrated external parameters of the three cameras. The above is only an example, and the disclosure does not limit this.
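As a rough sketch of how such an optimization model might be set up and solved in code (all function and variable names here are illustrative assumptions rather than the patent's implementation; the flat-ground homography is one common way to realize the camera-to-top-view transformation, and the rotation-vector parameterization of the extrinsics is a choice made for this sketch):

```python
import numpy as np

def rotvec_to_matrix(rv):
    """Rodrigues formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = rv / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def ground_homography(intrinsics, extrinsics):
    """Homography mapping ground-plane points (z = 0) into the image,
    assuming a flat road; extrinsics is a 6-vector (rotation vector
    followed by translation)."""
    R = rotvec_to_matrix(np.asarray(extrinsics[:3], dtype=float))
    t = np.asarray(extrinsics[3:], dtype=float).reshape(3, 1)
    return intrinsics @ np.hstack([R[:, :2], t])

def image_to_bev(intrinsics, extrinsics, pts):
    """Back-project image points onto the ground plane (the BEV frame)."""
    H_inv = np.linalg.inv(ground_homography(intrinsics, extrinsics))
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H_inv.T
    return homog[:, :2] / homog[:, 2:3]

def residuals(x, K_a, K_b, pts_a, pts_b):
    """Distances between the same slot-line points seen by two adjacent
    cameras, after both are mapped into the BEV; x stacks the two cameras'
    extrinsics (6 + 6 values) as the variables to be optimized."""
    return (image_to_bev(K_a, x[:6], pts_a)
            - image_to_bev(K_b, x[6:], pts_b)).ravel()

# A nonlinear solver (e.g. scipy.optimize.least_squares) would then minimize
# these residuals, starting from the cameras' current extrinsics x0:
#   result = least_squares(residuals, x0, args=(K_front, K_left, p_f, p_l))
```

With correct extrinsics the residuals vanish, which corresponds to the optimum described by the optimization target above; a real system would also handle lens distortion and the correspondence bookkeeping between slot lines.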
By adopting the above method, the external parameters of the image acquisition devices installed on the vehicle can be calibrated automatically after the vehicle leaves the factory, without special calibration equipment, so that changes in the external parameters during use of the vehicle are corrected, and stitching misalignment is avoided when images are stitched based on the calibrated external parameters. In addition, the calibration algorithm mainly relies on parking slot lines and lane lines, which are common in daily driving and parking scenes, so the success rate of the calibration operation and the frequency of calibration checks are high, and calibration updates are timely when the external parameters change.
Fig. 4 is a flow chart of a parameter calibration method according to the embodiment shown in fig. 2, and as shown in fig. 4, the method further includes the following steps:
in step S205, for each image capturing device, a difference between the target external parameter of the image capturing device and the current external parameter of the image capturing device is determined.
The target external parameter generally refers to the external parameter of the image acquisition device obtained through the latest calibration; the current external parameter of the image acquisition device may be the original external parameter calibrated for the image acquisition device before the vehicle left the factory, or a newly calibrated external parameter that replaced the original external parameter after the vehicle left the factory.
In step S206, in case that the difference is greater than or equal to the first preset difference threshold, the current calibration result is recorded in the preset database.
Based on steps S201 to S204, one round of parameter calibration can be completed. To improve the accuracy of parameter calibration, after the calibrated external parameters of each image acquisition device are obtained, the calibrated target external parameter of the image acquisition device may be compared with its current external parameter; a large difference between the two indicates that the external parameter of the image acquisition device may have changed to a certain extent. Therefore, in step S206, in the case that the difference is greater than or equal to the first preset difference threshold, the current calibration result is recorded in the preset database.
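The record-on-change logic of steps S205 and S206 can be sketched as follows (a minimal sketch; the Euclidean metric, the threshold value, and all names are assumptions, since the patent does not fix a specific difference measure):

```python
import math

def extrinsic_difference(target, current):
    """Scalar difference between two external parameter vectors; the patent
    does not fix a metric, so a Euclidean norm is assumed here."""
    return math.sqrt(sum((t - c) ** 2 for t, c in zip(target, current)))

def record_if_changed(target, current, vehicle_position, database,
                      first_threshold=0.05):
    """Steps S205-S206 sketch: record the current calibration result (target
    external parameters plus the vehicle position at calibration time) only
    when it differs noticeably from the current external parameters."""
    if extrinsic_difference(target, current) >= first_threshold:
        database.append({"external_params": list(target),
                         "vehicle_position": vehicle_position})
        return True
    return False
```

Recording the vehicle position alongside the parameters is what later allows the same-scene check of steps S209 and S210.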
As shown in fig. 4, the method further comprises the steps of:
in step S207, after a preset number of target external parameters are stored in the preset database, if the difference between any two target external parameters among the preset number of target external parameters is less than or equal to the second preset difference threshold, the calibrated external parameters corresponding to each image capturing device are determined according to the preset number of target external parameters.
After this step is executed, if it is determined that a preset number of target external parameters obtained from multiple calibrations are stored in the preset database, and the difference between any two of the preset number of target external parameters is less than or equal to the second preset difference threshold, the new calibration results are characterized as sufficiently stable. At this time, the most recently calibrated target external parameter may be selected from the preset number of target external parameters as the calibrated external parameter of the corresponding image acquisition device, replacing the original external parameter of the image acquisition device. In another possible implementation, the mean of the preset number of target external parameters may be calculated and used as the calibrated external parameter of the corresponding image acquisition device, replacing its original external parameter. The above is only an example, and the disclosure does not limit this.
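The stability check of step S207 and the two described selection strategies (latest record or mean) can be sketched like this (names, the pairwise Euclidean metric, and the return conventions are assumptions made for illustration):

```python
import math

def pairwise_stable(records, second_threshold):
    """True when every pair of stored target external parameter vectors
    differs by at most second_threshold."""
    diff = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return all(diff(records[i], records[j]) <= second_threshold
               for i in range(len(records))
               for j in range(i + 1, len(records)))

def calibrated_external_params(records, preset_number, second_threshold,
                               use_mean=False):
    """Step S207 sketch: once preset_number target external parameters are
    stored and they agree pairwise, return either the most recent one or
    their mean as the calibrated external parameters; otherwise None."""
    if len(records) < preset_number:
        return None
    recent = records[-preset_number:]
    if not pairwise_stable(recent, second_threshold):
        return None
    if use_mean:
        return [sum(vals) / preset_number for vals in zip(*recent)]
    return recent[-1]
```

Returning `None` models "not yet stable enough", in which case the original external parameters stay in effect.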
As shown in fig. 4, the method further comprises:
in step S208, image stitching is performed on the current scene images respectively acquired by each image acquisition device according to the calibrated external parameters respectively corresponding to each image acquisition device, so as to obtain a second stitched image corresponding to the current scene.
In this step, image stitching may be performed based on the calibrated external parameters of each image acquisition device to obtain the top view shown in fig. 1a; for a specific implementation of obtaining the top view by stitching images based on external parameters, reference may be made to the description in the related art.
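One common way to realize this stitching step is to warp each camera image into the top view and overlay the warps (a minimal sketch under a flat-ground assumption; the per-camera homography `H_bev_to_src` would be derived from the calibrated external parameters, and production code would typically use `cv2.warpPerspective` plus seam blending instead of this nearest-neighbour overlay):

```python
import numpy as np

def warp_to_bev(image, H_bev_to_src, bev_shape):
    """Inverse-map each top-view (BEV) pixel back into a source camera image
    using nearest-neighbour sampling. H_bev_to_src maps BEV pixel coordinates
    to source pixel coordinates."""
    h, w = bev_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H_bev_to_src @ pix
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros(bev_shape, dtype=image.dtype)
    ok = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out

def stitch_bev(warped_views):
    """Overlay the per-camera BEV warps; later views simply overwrite
    earlier ones where they have nonzero content (no seam blending)."""
    bev = np.zeros_like(warped_views[0])
    for view in warped_views:
        mask = view > 0
        bev[mask] = view[mask]
    return bev
```

When the external parameters are correct, the overlapping regions of adjacent warps coincide, which is precisely why stitching misalignment reveals extrinsic drift.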
As shown in fig. 4, the method further comprises the steps of:
in step S209, image recognition is performed on the current scene images respectively acquired by the plurality of image acquisition devices at the current time, so as to obtain an image recognition result, and a second vehicle position where the vehicle is located at the current time is obtained.
In step S210, in a case where it is determined that a preset road marking line exists on a road surface where the vehicle is currently located according to the image recognition result, and a distance between the first vehicle position and the second vehicle position is greater than or equal to a preset distance threshold, it is determined that the current scene is a preset scene.
After step S206 is executed and the current calibration result is recorded in the preset database, it may be continuously determined whether the current scene is suitable for external parameter calibration. In addition to the calibrated external parameters, the current calibration result includes the first vehicle position where the vehicle was located during that calibration. In this disclosure, in order to avoid repeated calibration errors in the same scene, a further judgment condition is added when determining whether external parameter calibration needs to be performed again: only when the distance between the second vehicle position where the vehicle is located at the current time and the first vehicle position of the last calibration is greater than or equal to the preset distance threshold is the current scene determined to be the preset scene, after which external parameter calibration may be performed again.
In the present disclosure, in the case that it is determined that the current scene is the preset scene based on steps S209 to S210, the external parameter calibration may be performed again based on steps S202 to S204, and the calibrated target external parameter is recorded in the preset database.
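The two-part scene test of steps S209 and S210 can be sketched as a simple predicate (a hedged illustration; the threshold value and the handling of a missing previous calibration position are assumptions, not values fixed by the patent):

```python
import math

def is_preset_scene(marking_detected, first_vehicle_position,
                    second_vehicle_position, preset_distance_threshold=10.0):
    """Steps S209-S210 sketch: the current scene qualifies for another round
    of calibration only when a preset road marking line is detected AND the
    vehicle has moved far enough from where the last calibration happened,
    guarding against repeated calibration errors in the same spot."""
    if not marking_detected:
        return False
    if first_vehicle_position is None:  # no previous calibration recorded
        return True
    return math.dist(first_vehicle_position,
                     second_vehicle_position) >= preset_distance_threshold
```

When this predicate holds, steps S202 to S204 run again and the new target external parameters are appended to the preset database.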
By adopting the above method, the external parameters of the image acquisition devices installed on the vehicle can be calibrated automatically after the vehicle leaves the factory, without special calibration equipment, so that changes in the external parameters during use of the vehicle are corrected, and stitching misalignment is avoided when images are stitched based on the calibrated external parameters. In addition, the calibration algorithm mainly relies on parking slot lines and lane lines, which are common in daily driving and parking scenes, so the success rate of the calibration operation and the frequency of calibration checks are high, and calibration updates are timely when the external parameters change.
FIG. 5 is a block diagram illustrating a parameter calibration apparatus according to an exemplary embodiment; as shown in FIG. 5, the apparatus includes:
an obtaining module 501 configured to obtain current scene images respectively collected by a plurality of image collecting devices on a vehicle;
a determining module 502, configured to determine, according to the current scene image, an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices, when it is determined that a current scene where the vehicle is located is a preset scene according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices;
an image recognition module 503, configured to perform image recognition on the overlapping area, and obtain a target recognition result;
a parameter calibration module 504, configured to perform parameter calibration on the external parameter of each image acquisition device according to the target identification result, so as to obtain a target external parameter corresponding to each image acquisition device.
Optionally, the determining module 502 is configured to perform image recognition on the current scene image respectively acquired by multiple image acquiring devices, so as to obtain an image recognition result; and under the condition that a preset road surface marking line exists on the road surface where the vehicle is located currently according to the image recognition result, determining that the current scene is the preset scene.
Optionally, the determining module 502 is configured to perform image stitching on the current scene image respectively acquired by each of the image acquisition devices according to current external parameters respectively corresponding to the plurality of image acquisition devices to obtain a first stitched image, where an overlapping area of images acquired by every two adjacent image acquisition devices is reserved in the first stitched image; and taking an image area corresponding to a preset area in the first spliced image as the overlapping area.
Optionally, the target recognition result includes position information of a preset pavement marking line included in the overlapping region, the image recognition module 503 is configured to perform image recognition on the overlapping region of the images acquired by every two adjacent image acquisition devices to obtain position information of the preset pavement marking line included in the overlapping region, where the position information includes a first position of each pixel point of the preset pavement marking line in the first image, a second position of each pixel point of the preset pavement marking line in the second image, and a third position of each pixel point of the preset pavement marking line in the first stitched image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
Optionally, the parameter calibration module 504 is configured to determine a corresponding relationship between the first image and the second image according to the first position, the second position, and the third position; the corresponding relation is used for representing the corresponding relation of two target pavement marking lines belonging to the same preset pavement marking line on the first image and the second image; aiming at each two adjacent image acquisition devices, establishing an optimization model by taking the minimum distance between the two target pavement marking lines corresponding to the two adjacent image acquisition devices as an optimization target and taking two external parameters of the two adjacent image acquisition devices as variables to be optimized; and solving the optimization model to obtain target external parameters respectively corresponding to each image acquisition device.
Optionally, fig. 6 is a block diagram of a parameter calibration apparatus according to the embodiment shown in fig. 5, and as shown in fig. 6, the apparatus further includes:
a calibration result recording module 505 configured to determine, for each image acquisition device, the difference between the target external parameter of the image acquisition device and its current external parameter; and to record the current calibration result in a preset database in the case that the difference is greater than or equal to a first preset difference threshold.
Optionally, the current calibration result includes the target external parameter; after the current calibration result is recorded in the preset database, as shown in fig. 6, the apparatus further includes:
the external parameter determining module 506 is configured to, after a preset number of the target external parameters are stored in the preset database, determine, according to the preset number of the target external parameters, the calibrated external parameters respectively corresponding to each image acquisition device if the difference between any two of the preset number of the target external parameters is less than or equal to a second preset difference threshold.
Optionally, as shown in fig. 6, the apparatus further includes:
the image stitching module 507 is configured to perform image stitching on the current scene image respectively acquired by each image acquisition device according to the calibrated external parameter respectively corresponding to each image acquisition device, so as to obtain a second stitched image corresponding to the current scene.
Optionally, the current calibration result includes a first vehicle position of the vehicle; the device further comprises:
a scene judgment module 508, configured to perform image recognition on the current scene images respectively acquired by the multiple image acquisition devices at the current time to obtain image recognition results; acquiring a second vehicle position where the vehicle is located at the current moment; and under the condition that a preset road marking line exists on the road where the vehicle is located currently according to the image recognition result, and the distance between the first vehicle position and the second vehicle position is larger than or equal to a preset distance threshold value, determining that the current scene is the preset scene.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the parameter calibration method provided by the present disclosure.
Referring to fig. 7, fig. 7 is a functional block diagram of a vehicle 700 according to an exemplary embodiment. The vehicle 700 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 700 may acquire environmental information of its surroundings through the sensing system 720 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 700 may include various subsystems such as infotainment system 710, perception system 720, decision control system 730, drive system 740, and computing platform 750. Alternatively, vehicle 700 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 700 may be interconnected by wire or wirelessly.
In some embodiments, infotainment system 710 may include a communication system 711, an entertainment system 712, and a navigation system 713.
The communication system 711 may include a wireless communication system that communicates wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or may use other wireless protocols such as various vehicular communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 712 may include a display device, a microphone, and a sound box. Based on the entertainment system, a user may listen to the radio or play music in the car; alternatively, a mobile phone may communicate with the vehicle to project the phone's screen onto the display device. The display device may be touch-controlled, and the user may operate it by touching the screen.
In some cases, a voice signal of the user may be acquired through the microphone, and certain control of the vehicle 700 by the user, such as adjusting the temperature in the vehicle, may be implemented according to analysis of the user's voice signal. In other cases, music may be played to the user through the sound box.
The navigation system 713 may include a map service provided by a map provider to provide navigation of travel routes for the vehicle 700, and may be used in conjunction with the vehicle's global positioning system 721 and inertial measurement unit 722. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 720 may include several types of sensors that sense information about the environment surrounding the vehicle 700. For example, the sensing system 720 may include a global positioning system 721 (the global positioning system may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 722, a laser radar 723, a millimeter-wave radar 724, an ultrasonic radar 725, and a camera 726. The sensing system 720 may also include sensors that monitor internal systems of the vehicle 700 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 700.
The global positioning system 721 is used to estimate the geographic location of the vehicle 700.
The inertial measurement unit 722 is used to sense a pose change of the vehicle 700 based on the inertial acceleration. In some embodiments, inertial measurement unit 722 may be a combination of accelerometers and gyroscopes.
Lidar 723 utilizes a laser to sense objects in the environment in which vehicle 700 is located. In some embodiments, lidar 723 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 724 utilizes radio signals to sense objects within the surrounding environment of the vehicle 700. In some embodiments, in addition to sensing objects, the millimeter-wave radar 724 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 725 may sense objects around the vehicle 700 using ultrasonic signals.
The camera 726 is used to capture image information of the surrounding environment of the vehicle 700. The camera 726 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the camera 726 may include a still image or video stream information.
The decision control system 730 comprises a computing system 731 for making analytical decisions based on information obtained by the perception system 720, the decision control system 730 further comprises a vehicle control unit 732 for controlling the powertrain of the vehicle 700, and a steering system 733, a throttle 734 and a braking system 735 for controlling the vehicle 700.
The computing system 731 is operable to process and analyze various information acquired by the perception system 720 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 700. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 731 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 731 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 731 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 732 may be used to perform coordinated control of the vehicle's power battery and engine 741 to improve the power performance of the vehicle 700.
The steering system 733 is operable to adjust the heading of the vehicle 700. For example, in one embodiment, it may comprise a steering wheel system.
The throttle 734 is used to control the operating speed of the engine 741 and thus the speed of the vehicle 700.
The brake system 735 is used to control the deceleration of the vehicle 700. The braking system 735 may use friction to slow the wheel 744. In some embodiments, the braking system 735 may convert kinetic energy of the wheels 744 into electrical current. The braking system 735 may also take other forms to slow the rotational speed of the wheels 744 to control the speed of the vehicle 700.
Drive system 740 may include components that provide powered motion to vehicle 700. In one embodiment, drive system 740 may include an engine 741, an energy source 742, a transmission 743, and wheels 744. The engine 741 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engines, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 741 converts the energy source 742 into mechanical energy.
Examples of energy source 742 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 742 may also provide energy for other systems of the vehicle 700.
A transmission 743 may transmit mechanical power from an engine 741 to wheels 744. The drivetrain 743 may include a gearbox, differential, and driveshaft. In one embodiment, the driveline 743 may also include other devices, such as a clutch. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 744.
Some or all of the functions of the vehicle 700 are controlled by the computing platform 750. Computing platform 750 may include at least one processor 751, processor 751 may execute instructions 753 stored in a non-transitory computer-readable medium, such as memory 752. In some embodiments, computing platform 750 may also be a plurality of computing devices that control individual components or subsystems of vehicle 700 in a distributed manner.
The processor 751 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 751 may also include a processor such as a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 7 functionally illustrates processors, memories, and other elements of the computer in the same block, one of ordinary skill in the art will appreciate that the processors, computers, or memories may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the retarding component, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the processor 751 may perform the parameter calibration method described above.
In various aspects described herein, the processor 751 can be remotely located from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to execute a single maneuver.
In some embodiments, the memory 752 can contain instructions 753 (e.g., program logic) that are executable by the processor 751 to perform various functions of the vehicle 700. Memory 752 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of infotainment system 710, perception system 720, decision control system 730, drive system 740.
In addition to the instructions 753, the memory 752 can also store data such as road maps, route information, location, direction, speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 700 and the computing platform 750 during operation of the vehicle 700 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 750 may control functions of vehicle 700 based on inputs received from various subsystems, such as drive system 740, perception system 720, and decision control system 730. For example, the computing platform 750 may utilize input from the decision control system 730 in order to control the steering system 733 to avoid obstacles detected by the perception system 720. In some embodiments, the computing platform 750 is operable to provide control over many aspects of the vehicle 700 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the vehicle 700. For example, the memory 752 may be partially or completely separate from the vehicle 700. The aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 7 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a road, such as the vehicle 700 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. An object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 700 or a sensing and computing device associated with the vehicle 700 (e.g., computing system 731, computing platform 750) may predict behavior of the identified object based on characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, each of the identified objects is dependent on the behavior of each other, so all of the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 700 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 700, such as the lateral position of the vehicle 700 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and so forth.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 700 to cause the autonomous vehicle to follow a given trajectory and/or to maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 700 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned parameter calibration method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A parameter calibration method is characterized by comprising the following steps:
acquiring current scene images respectively acquired by a plurality of image acquisition devices on a vehicle;
under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image, determining the overlapping area of the images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices; the preset scene comprises that a preset road surface marking line exists on the road surface where the vehicle is located currently; or the preset scene includes that a preset road marking line exists on a road where the vehicle is located currently, and the distance between a first vehicle position and a second vehicle position is greater than or equal to a preset distance threshold, wherein the first vehicle position is a vehicle position when the vehicle performs parameter calibration last time, and the second vehicle position is a vehicle position where the vehicle is located at the current moment;
performing image recognition on the overlapping area to obtain a target recognition result;
calibrating the external parameters of each image acquisition device according to the target identification result to obtain the target external parameters respectively corresponding to each image acquisition device;
the target recognition result includes position information of a preset road marking included in the overlap region, and the image recognition of the overlap region to obtain the target recognition result includes:
performing image recognition on an overlapping area of images acquired by every two adjacent image acquisition devices to obtain position information of a preset pavement marking line contained in the overlapping area, wherein the position information comprises a first position of each pixel point of the preset pavement marking line in a first image, a second position of each pixel point of the preset pavement marking line in a second image, and a third position of each pixel point of the preset pavement marking line in a first spliced image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
2. The parameter calibration method according to claim 1, wherein determining whether the current scene where the vehicle is located is a preset scene according to the current scene image comprises:
performing image recognition on the current scene images respectively collected by the plurality of image collecting devices to obtain image recognition results;
and under the condition that a preset road surface marking line exists on the road surface where the vehicle is located currently according to the image recognition result, determining that the current scene is the preset scene.
3. The parameter calibration method according to claim 1, wherein the determining an overlapping area of the images acquired by each two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image comprises:
performing image splicing on the current scene image respectively acquired by each image acquisition device according to the current external parameters respectively corresponding to the plurality of image acquisition devices to obtain a first spliced image, wherein an overlapping area of the images acquired by every two adjacent image acquisition devices is reserved in the first spliced image;
and taking an image area corresponding to a preset area in the first spliced image as the overlapping area.
4. The parameter calibration method according to claim 1, wherein the performing parameter calibration on the external parameter of each image acquisition device according to the target recognition result comprises:
determining the corresponding relation between the first image and the second image according to the first position, the second position and the third position; the corresponding relation is used for representing the corresponding relation of two target pavement marking lines which belong to the same preset pavement marking line on the first image and the second image;
aiming at every two adjacent image acquisition devices, establishing an optimization model by taking the minimum distance between two target pavement marking lines corresponding to the two adjacent image acquisition devices as an optimization target and two external parameters of the two adjacent image acquisition devices as variables to be optimized;
and solving the optimization model to obtain target external parameters respectively corresponding to each image acquisition device.
5. The parameter calibration method according to claim 1, further comprising:
for each image acquisition device, determining a difference value between the target external parameter of the image acquisition device and the current external parameter of the image acquisition device;
and recording the current calibration result to a preset database under the condition that the difference is greater than or equal to a first preset difference threshold.
6. The parameter calibration method according to claim 5, wherein the current calibration result includes the target external parameter; after the current calibration result is recorded in the preset database, the method further comprises:
after the preset number of the target external parameters are stored in the preset database, if the difference value between any two target external parameters in the preset number of the target external parameters is smaller than or equal to a second preset difference value threshold, determining the calibrated external parameters respectively corresponding to each image acquisition device according to the preset number of the target external parameters.
7. The parameter calibration method according to claim 6, further comprising:
and performing image splicing on the current scene image respectively acquired by each image acquisition device according to the calibrated external parameters respectively corresponding to each image acquisition device to obtain a second spliced image corresponding to the current scene.
8. The parameter calibration method according to claim 5, wherein the current calibration result comprises a first vehicle position of the vehicle; after the current calibration result is recorded in the preset database, the method further comprises:
performing image recognition on the current scene images respectively acquired by the plurality of image acquisition devices at the current moment to obtain image recognition results;
acquiring a second vehicle position where the vehicle is located at the current moment;
and under the condition that a preset road marking line exists on the road where the vehicle is located currently according to the image recognition result, and the distance between the first vehicle position and the second vehicle position is larger than or equal to a preset distance threshold value, determining that the current scene is the preset scene.
9. A parameter calibration apparatus, comprising:
an acquisition module configured to acquire current scene images respectively acquired by a plurality of image acquisition devices on a vehicle;
the determining module is configured to determine an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices; the preset scene comprises that a preset road surface marking line exists on the road surface where the vehicle is located currently; or the preset scene includes that a preset road marking line exists on a road where the vehicle is located currently, and the distance between a first vehicle position and a second vehicle position is greater than or equal to a preset distance threshold, wherein the first vehicle position is a vehicle position when the vehicle performs parameter calibration last time, and the second vehicle position is a vehicle position where the vehicle is located at the current moment;
the image recognition module is configured to perform image recognition on the overlapping area to obtain a target recognition result;
the parameter calibration module is configured to perform parameter calibration on the external parameter of each image acquisition device according to the target identification result to obtain a target external parameter corresponding to each image acquisition device;
the target identification result comprises position information of a preset pavement marking line contained in the overlapping region, the image identification module is configured to perform image identification on the overlapping region aiming at the overlapping region of the images acquired by every two adjacent image acquisition devices to obtain the position information of the preset pavement marking line contained in the overlapping region, and the position information comprises a first position of each pixel point of the preset pavement marking line in a first image, a second position of each pixel point of the preset pavement marking line in a second image, and a third position of each pixel point of the preset pavement marking line in a first spliced image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
10. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring current scene images respectively acquired by a plurality of image acquisition devices on a vehicle;
under the condition that the current scene where the vehicle is located is determined to be a preset scene according to the current scene image, determining an overlapping area of images acquired by every two adjacent image acquisition devices in the plurality of image acquisition devices according to the current scene image; the preset scene is a preset scene in which parameter calibration is required to be carried out on the plurality of image acquisition devices; the preset scene comprises that a preset road surface marking line exists on the road surface where the vehicle is located currently; or the preset scene includes that a preset road marking line exists on a road where the vehicle is located currently, and the distance between a first vehicle position and a second vehicle position is greater than or equal to a preset distance threshold, wherein the first vehicle position is a vehicle position when the vehicle performs parameter calibration last time, and the second vehicle position is a vehicle position where the vehicle is located at the current moment;
performing image recognition on the overlapping area to obtain a target recognition result;
calibrating the external parameters of each image acquisition device according to the target identification result to obtain the target external parameters respectively corresponding to each image acquisition device;
the target recognition result includes position information of a preset road marking line included in the overlap region,
the processor is configured to: performing image recognition on an overlapping area of images acquired by every two adjacent image acquisition devices to obtain position information of a preset pavement marking line contained in the overlapping area, wherein the position information comprises a first position of each pixel point of the preset pavement marking line in a first image, a second position of each pixel point of the preset pavement marking line in a second image, and a third position of each pixel point of the preset pavement marking line in a first spliced image; the first image is a current scene image acquired by one image acquisition device of two adjacent image acquisition devices, and the second image is a current scene image acquired by the other image acquisition device of the two adjacent image acquisition devices.
11. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1-8.
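As a hedged illustration of the kind of optimization recited in claim 4 (minimizing the distance between corresponding marking-line points of two adjacent image acquisition devices, with the external parameters as the variables to be optimized), the following sketch assumes the residual extrinsic error reduces to a 2D rigid motion (yaw plus translation) in the ground plane, in which case the least-squares problem has the closed-form Kabsch/Procrustes solution. The function name and the pure-rigid-motion error model are assumptions made for this example, not the claimed method:

```python
import numpy as np


def align_marking_points(pts_a: np.ndarray, pts_b: np.ndarray):
    """Solve a claim-4 style optimization for one pair of adjacent cameras.

    Given N corresponding ground-plane points of the same preset road
    marking line as seen by camera A (pts_a, shape (N, 2)) and camera B
    (pts_b, shape (N, 2)), find the rotation R and translation t that
    minimize sum_i || R @ b_i + t - a_i ||^2, i.e. the summed squared
    distance between the two target marking lines after correction.
    """
    assert pts_a.shape == pts_b.shape
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    # 2x2 cross-covariance of the centered point sets.
    H = (pts_b - cb).T @ (pts_a - ca)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

In the full method of the claims, one such residual term would be built for every pair of adjacent image acquisition devices, and all external parameters would be refined jointly by solving the resulting optimization model.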
CN202211001526.2A 2022-08-19 2022-08-19 Parameter calibration method and device, vehicle and storage medium Active CN115082573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211001526.2A CN115082573B (en) 2022-08-19 2022-08-19 Parameter calibration method and device, vehicle and storage medium


Publications (2)

Publication Number Publication Date
CN115082573A CN115082573A (en) 2022-09-20
CN115082573B (en) 2023-04-11

Family

ID=83244445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211001526.2A Active CN115082573B (en) 2022-08-19 2022-08-19 Parameter calibration method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115082573B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052910A (en) * 2017-12-19 2018-05-18 深圳市保千里电子有限公司 A kind of automatic adjusting method, device and the storage medium of vehicle panoramic imaging system
CN112785655A (en) * 2021-01-28 2021-05-11 中汽创智科技有限公司 Method, device and equipment for automatically calibrating external parameters of all-round camera based on lane line detection and computer storage medium
CN113421215A (en) * 2021-07-19 2021-09-21 江苏金海星导航科技有限公司 Automatic tracking system of car based on artificial intelligence

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10504241B2 (en) * 2016-12-19 2019-12-10 Magna Electronics Inc. Vehicle camera calibration system
CN110660105B (en) * 2018-06-29 2022-05-31 杭州海康威视数字技术股份有限公司 Calibration parameter optimization method and device for panoramic looking-around system
CN111243034A (en) * 2020-01-17 2020-06-05 广州市晶华精密光学股份有限公司 Panoramic auxiliary parking calibration method, device, equipment and storage medium
CN111815719B (en) * 2020-07-20 2023-12-22 阿波罗智能技术(北京)有限公司 External parameter calibration method, device and equipment of image acquisition equipment and storage medium
CN112529966B (en) * 2020-12-17 2023-09-15 豪威科技(武汉)有限公司 On-line calibration method of vehicle-mounted looking-around system and vehicle-mounted looking-around system thereof
CN112614192B (en) * 2020-12-24 2022-05-17 亿咖通(湖北)技术有限公司 On-line calibration method of vehicle-mounted camera and vehicle-mounted information entertainment system


Also Published As

Publication number Publication date
CN115082573A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN114842075B (en) Data labeling method and device, storage medium and vehicle
CN115100377B (en) Map construction method, device, vehicle, readable storage medium and chip
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN115205365A (en) Vehicle distance detection method and device, vehicle, readable storage medium and chip
CN115123257A (en) Method and device for identifying position of road deceleration strip, vehicle, storage medium and chip
CN115220449A (en) Path planning method and device, storage medium, chip and vehicle
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN115222791B (en) Target association method, device, readable storage medium and chip
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN115221151B (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115082573B (en) Parameter calibration method and device, vehicle and storage medium
CN115334111A (en) System architecture, transmission method, vehicle, medium and chip for lane recognition
CN115170630A (en) Map generation method, map generation device, electronic device, vehicle, and storage medium
CN114937351A (en) Motorcade control method and device, storage medium, chip, electronic equipment and vehicle
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip
CN115205461B (en) Scene reconstruction method and device, readable storage medium and vehicle
CN115407344B (en) Grid map creation method, device, vehicle and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant