CN116228535A - Image processing method and device, electronic equipment and vehicle - Google Patents

Image processing method and device, electronic equipment and vehicle


Publication number
CN116228535A
CN116228535A
Authority
CN
China
Prior art keywords
lane
splicing
lane line
camera
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310091686.9A
Other languages
Chinese (zh)
Inventor
谭竞扬
赵龙
王光甫
贾澜鹏
叶年进
李文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202310091686.9A priority Critical patent/CN116228535A/en
Publication of CN116228535A publication Critical patent/CN116228535A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, an image processing device, an electronic device, and a vehicle. The method comprises the following steps: obtaining lane line pictures shot by a plurality of cameras in a vehicle at the same moment, and performing bird's-eye-view conversion on the lane line pictures shot by the plurality of cameras to obtain a plurality of lane line bird's-eye views; performing preprocessing and Hough transformation on each lane line bird's-eye view respectively to obtain a lane fitting straight line corresponding to each lane line bird's-eye view; sequentially calculating the offset, in the corresponding splicing area, of the two lane fitting straight lines corresponding to lane line bird's-eye views that have a splicing area, until the offsets of all splicing areas corresponding to the lane line bird's-eye views are obtained; correcting a homography matrix calibrated in advance for the vehicle according to the offsets of all the splicing areas to obtain a corrected homography matrix; and performing bird's-eye-view splicing on pictures subsequently shot by the plurality of cameras in the vehicle by using the corrected homography matrix. The method and the device can solve the problem of uneven lane line splicing in the BEV function.

Description

Image processing method and device, electronic equipment and vehicle
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a vehicle.
Background
AVM (Around View Monitor), also called a panoramic surround-view imaging system, has been widely used in fields such as assisted driving and automatic driving. An AVM system consists of a plurality of fisheye cameras and can splice the pictures shot by the fisheye cameras into a panoramic top view, so as to provide the driver with more information about the vehicle's surroundings and thereby support functions such as assisted parking and automatic lane changing.
In AVM systems, the function most frequently used by the driver is BEV (Bird's Eye View), also known as the "God's-eye" perspective. However, calibration errors cause uneven lane line splicing in the bird's-eye view obtained by BEV function splicing, which greatly degrades the user's driving experience.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a vehicle, so as to solve the problem that lane lines are spliced unevenly in a bird's eye view obtained by BEV function splicing.
In a first aspect, an embodiment of the present application provides an image processing method, including:
obtaining lane line pictures shot by a plurality of cameras in a vehicle at the same moment, and performing bird's-eye-view conversion on the lane line pictures shot by the plurality of cameras to obtain a plurality of lane line bird's-eye views;
performing preprocessing and Hough transformation on each lane line bird's-eye view respectively to obtain a lane fitting straight line corresponding to each lane line bird's-eye view;
sequentially calculating the offset, in the corresponding splicing area, of the two lane fitting straight lines corresponding to lane line bird's-eye views that have a splicing area, until the offsets of all splicing areas corresponding to the lane line bird's-eye views are obtained;
correcting a homography matrix calibrated in advance for the vehicle according to the offsets of all the splicing areas to obtain a corrected homography matrix;
and performing bird's-eye-view splicing on pictures subsequently shot by the plurality of cameras in the vehicle by using the corrected homography matrix.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring lane line pictures shot by a plurality of cameras in the vehicle at the same moment, and performing bird's-eye-view conversion on the lane line pictures shot by the plurality of cameras to obtain a plurality of lane line bird's-eye views;
the second acquisition module is used for performing preprocessing and Hough transformation on each lane line bird's-eye view respectively to obtain a lane fitting straight line corresponding to each lane line bird's-eye view;
the calculation module is used for sequentially calculating the offset, in the corresponding splicing area, of the two lane fitting straight lines corresponding to lane line bird's-eye views that have a splicing area, until the offsets of all splicing areas corresponding to the lane line bird's-eye views are obtained;
the correction module is used for correcting the homography matrix calibrated in advance for the vehicle according to the offsets of all the splicing areas to obtain a corrected homography matrix;
and the splicing processing module is used for performing bird's-eye-view splicing on pictures subsequently shot by the plurality of cameras in the vehicle by using the corrected homography matrix.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect or any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a vehicle comprising an electronic device as described in the third aspect.
The embodiments of the application provide an image processing method, an image processing device, an electronic device, and a vehicle, which reversely calibrate pictures exhibiting the problem of uneven lane line splicing so as to correct the homography matrix, so that the corrected homography matrix can be used to perform bird's-eye-view splicing on pictures shot by the plurality of cameras in the vehicle. Because the corrected homography matrix is obtained through reverse calibration, calibration errors are eliminated, which solves the problem, caused by calibration errors, that lane lines are spliced unevenly in the bird's-eye view obtained by BEV function splicing; a good visual effect can thus be provided for the driver, greatly improving the user's driving experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following briefly introduces the drawings needed in the embodiments or in the description of the prior art. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
Fig. 1 is a schematic diagram of a lane line with uneven stitching provided in an embodiment of the present application;
fig. 2 is a flowchart of an implementation of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a binarized image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a polygon filtering process according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an edge detection process according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram for eliminating lane line splice unevenness according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a vehicle according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following description will be made with reference to the accompanying drawings by way of specific embodiments.
As described in the related art, in an AVM system the function most frequently used by the driver is the BEV function, which obtains the final bird's-eye view by calibrating the pose relationship between the plurality of fisheye cameras, projecting the images captured by the cameras onto the bird's-eye plane by a projection method, and then performing stitching processing. Because of calibration errors in camera parameters, calibration tools, and the like, the bird's-eye view obtained by BEV function splicing suffers from uneven lane line splicing, such as that shown in the circled area in fig. 1 (the black block in fig. 1 represents the vehicle).
In order to solve the problems in the prior art, an embodiment of the application provides an image processing method, an image processing device, electronic equipment and a vehicle. The image processing method provided in the embodiment of the present application will be first described below.
The execution subject of the image processing method may be an image processing apparatus, such as a vehicle controller (Vehicle Control Unit, VCU), an electronic control unit (Electronic Control Unit, ECU), or any electronic device capable of executing the relevant processing of the image processing method, which is not particularly limited in the embodiments of the present application.
The applicant has found that the bird's-eye view obtained by BEV function splicing is produced as follows: first, a homography matrix is obtained based on calibration data, and then the homography matrix is used to perform bird's-eye-view splicing processing on the pictures shot by each camera to obtain the final bird's-eye view. According to this analysis, the key to solving the problems in the prior art is to use an accurate homography matrix, and obtaining an accurate homography matrix requires accurate calibration data. Therefore, the image processing method provided in the embodiments of the present application adopts the following technical concept: reversely calibrate the pictures exhibiting the problem of uneven lane line splicing to eliminate calibration errors, thereby correcting the homography matrix.
Referring to fig. 2, a flowchart of an implementation of an image processing method provided in an embodiment of the present application is shown, and details are as follows:
step 210, obtaining lane line pictures shot by a plurality of cameras in the vehicle at the same time, and performing aerial view conversion on the lane line pictures shot by the plurality of cameras to obtain aerial views of the plurality of lane lines.
In some embodiments, the vehicle may be any type of vehicle configured with the BEV function, such as a conventional fuel vehicle (e.g., a gasoline vehicle or a diesel vehicle) or a new energy vehicle, e.g., an EV (Electric Vehicle), an HEV (Hybrid Electric Vehicle), a PHEV (Plug-in Hybrid Electric Vehicle), or the like.
In some embodiments, the lane line picture may be a picture including lane lines. Specifically, lane line pictures taken by a plurality of cameras in a vehicle at the same time can be obtained in various manners. For example, lane line pictures shot by a plurality of cameras at the same time can be extracted from a buffer memory of a vehicle, or when lane lines are detected in the running process of the vehicle, the lane line pictures shot by the plurality of cameras at the same time are acquired in real time.
And 220, respectively preprocessing and Hough transformation are carried out on the aerial view of each lane line, and a lane fitting straight line corresponding to the aerial view of each lane line is obtained.
In some embodiments, the lane-fitting line is a line that reflects the actual position of the lane line in the bird's eye view. In order to obtain the lane fitting straight line, preprocessing such as binarization processing, distortion screening processing, quadrilateral screening and the like can be performed on the aerial view of each lane line, and then the lane fitting straight line corresponding to the aerial view of each lane line is obtained through Hough transformation.
Taking a target lane line bird's-eye view as an example, the target lane line bird's-eye view refers to any one of the lane line bird's-eye views obtained in step 210, and the process of obtaining the lane fitting straight line will be described. Firstly, binarization processing, distortion screening processing and quadrilateral screening processing are sequentially carried out on the target lane line aerial view, and lane graphics corresponding to the target lane line aerial view are obtained. And then, carrying out edge detection processing on the lane graph to generate a lane edge mask. And then, carrying out Hough transformation on the lane edge mask to obtain linear coordinate data corresponding to the lane edge mask. And finally, performing straight line fitting on the straight line coordinate data corresponding to the lane edge mask to obtain a lane fitting straight line corresponding to the target lane line aerial view.
In some embodiments, the target lane line bird's eye view may be binarized with a brightness threshold to isolate the lane line. As shown in fig. 3, a binarized picture is provided.
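As a rough illustration of this step, the brightness thresholding could be sketched as follows. This is a minimal pure-Python stand-in for a library call such as OpenCV's `cv2.threshold`; the threshold value 180 and the toy image are illustrative assumptions, not values from the patent:

```python
def binarize(gray, thresh=180):
    """Binarize a grayscale image (a list of rows of 0-255 intensities):
    pixels brighter than `thresh` (e.g. white or yellow lane paint seen
    from above) become 255, everything else becomes 0."""
    return [[255 if px > thresh else 0 for px in row] for row in gray]

# Toy 3x4 "image" with a bright vertical stripe standing in for a lane line.
gray = [
    [20, 210, 205, 15],
    [18, 220, 215, 22],
    [25, 200, 230, 30],
]
mask = binarize(gray)
```

In practice the threshold would be tuned, or chosen adaptively, for the camera and the lighting conditions.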
In some embodiments, considering that the aerial view generally has some distortion areas, after the binarized picture is obtained, distortion screening processing may be performed to remove the distortion areas in the picture, so that the accuracy of subsequent inverse calibration may be improved.
In some embodiments, considering that lane lines are generally quadrilateral in shape, the picture may be subjected to quadrilateral screening processing after the distortion screening processing to obtain the lane graphics. Specifically, polygon detection is first performed on the picture to obtain the polygons present in it. Then, polygon fitting is performed on the detected polygons to obtain polygons with specific numbers of edges, such as quadrilaterals, pentagons, and hexagons. Next, the quadrilaterals are selected from all polygons obtained by fitting. Finally, the selected quadrilaterals are screened a second time using an area threshold to obtain lane graphics meeting the requirements. As for the area threshold, considering that lane lines tend to have a large area, quadrilaterals that do not belong to lane lines can be eliminated by the area threshold, which may be an empirical value, such as 1000 square centimeters. As shown in fig. 4, a lane image obtained by performing the polygon screening process on fig. 3 is provided.
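The quadrilateral and area screening described above could look roughly like this. It is a sketch under the common bird's-eye-view assumption that 1 pixel represents 1 cm, so the 1000 cm² empirical threshold becomes 1000 px²; the polygon format and the sample coordinates are invented for illustration:

```python
def polygon_area(pts):
    """Area of a simple polygon via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def keep_lane_quads(polys, min_area=1000.0):
    """Keep only 4-sided polygons whose area exceeds the empirical
    threshold; smaller quadrilaterals are unlikely to be lane lines."""
    return [p for p in polys if len(p) == 4 and polygon_area(p) >= min_area]

# A plausible lane-line quad (15 px wide, 300 px long) and a small blob.
lane = [(100, 0), (115, 0), (115, 300), (100, 300)]
blob = [(10, 10), (20, 10), (20, 20), (10, 20)]
quads = keep_lane_quads([lane, blob])
```

The small blob (100 px²) is rejected while the elongated lane-line quadrilateral (4500 px²) survives.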
In some embodiments, the edge detection process may be Sobel edge detection. As shown in fig. 5, a lane edge mask obtained by performing an edge detection process on the lane image in fig. 4 is provided.
In some embodiments, after the lane edge mask is obtained, Hough transformation may be performed on it, that is, the lane edge mask is transferred to the Hough domain and the straight line data are then extracted; for example, HoughLinesP, the probabilistic Hough transform function in OpenCV, can be used to obtain straight line data as coordinate pairs of the start point and end point of each line.
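To make the Hough step concrete, here is a deliberately tiny standard Hough transform in pure Python. OpenCV's `HoughLinesP` implements the more efficient probabilistic variant and returns segments; this sketch, with its 1-degree angle grid and integer rho bins, only shows the voting idea:

```python
import math
from collections import Counter

def hough_peak(points, theta_steps=180):
    """Vote each edge pixel into (rho, theta) space using
    rho = x*cos(theta) + y*sin(theta) and return the strongest line."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):  # theta in whole degrees
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    (rho, t), votes = acc.most_common(1)[0]
    return rho, t, votes

# Edge pixels lying exactly on the vertical line x = 5.
pts = [(5, y) for y in range(10)]
rho, theta_deg, votes = hough_peak(pts)
```

All ten pixels vote into the same accumulator cell near rho = 5, theta = 0 degrees, which parameterizes the line x = 5; near-vertical lane lines in a bird's-eye view peak in the same way at small theta.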
In some embodiments, considering that interfering straight lines may be detected, a threshold may be used to eliminate them. Specifically, since the lane width is typically 3 to 4 meters, the vehicle width is about 2 meters, and the scale relationship between bird's-eye-view pixels and real length is known (for example, 1 pixel represents 1 cm), a detection range threshold for the lane lines, such as 3 meters, can be set, and data exceeding the threshold are then eliminated. Interfering straight lines can thus be removed, improving the accuracy of the subsequent reverse calibration.
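Under the scale assumption above (1 pixel = 1 cm), the interference filtering might be sketched like this. The segment format, the idea of measuring lateral distance from the vehicle side at `vehicle_x`, and the sample data are all illustrative assumptions:

```python
PX_PER_CM = 1                     # assumed bird's-eye-view scale: 1 px == 1 cm
MAX_RANGE_PX = 300 * PX_PER_CM    # 3 m lane-line detection range

def drop_interference(segments, vehicle_x):
    """Discard detected segments whose endpoints fall outside the lane
    detection range, measured laterally from the vehicle side."""
    kept = []
    for (x1, y1), (x2, y2) in segments:
        if abs(x1 - vehicle_x) <= MAX_RANGE_PX and abs(x2 - vehicle_x) <= MAX_RANGE_PX:
            kept.append(((x1, y1), (x2, y2)))
    return kept

near = ((150, 0), (152, 400))   # a lane line about 1.5 m from the vehicle
far = ((900, 0), (905, 400))    # e.g. a guardrail edge, well outside 3 m
kept = drop_interference([near, far], vehicle_x=0)
```

Only the segment within the 3 m range survives; anything farther out is treated as interference.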
In some embodiments, the point coordinates of the plurality of detected straight lines may be fitted to obtain a straight line that reflects the actual position of the lane line in the bird's-eye view.
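A minimal least-squares fit of the surviving segment endpoints could look as follows. Because lane lines in a bird's-eye view are close to vertical, the sketch fits x as a function of y (fitting y = kx + b would be ill-conditioned for near-vertical lines); this parameterization choice is an assumption, not something the patent specifies:

```python
def fit_lane_line(points):
    """Least-squares fit of x = m*y + c through segment endpoints,
    suitable for the near-vertical lane lines of a bird's-eye view."""
    n = len(points)
    sy = sum(y for _, y in points)
    sx = sum(x for x, _ in points)
    syy = sum(y * y for _, y in points)
    syx = sum(y * x for x, y in points)
    m = (n * syx - sy * sx) / (n * syy - sy * sy)
    c = (sx - m * sy) / n
    return m, c

# Endpoints lying exactly on x = 0.05*y + 120 (a slightly tilted lane line).
pts = [(120 + 0.05 * y, y) for y in (0, 100, 250, 400)]
m, c = fit_lane_line(pts)
```

On real data the endpoints would be noisy, and the fit averages that noise out into a single line per lane marking.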
And 230, sequentially calculating the offset of the two lane fitting straight lines corresponding to the lane line aerial views with the splicing areas in the corresponding splicing areas until the offset of all the splicing areas corresponding to the lane line aerial views is obtained.
It should be noted that uneven lane line splicing often occurs in a splicing area, that is, a region where two bird's-eye views overlap during stitching. The splicing area is the key to reverse calibration: the parameter that makes the lane lines splice neatly, namely the offset of the two lane fitting straight lines in the splicing area, can be obtained from it.
In some embodiments, the offset of the two lane-fitting lines at the splice region may be obtained by the intersection of the lane-fitting lines with a splice seam of the splice region, which is the boundary of the splice region. Specifically, for convenience of description, the following first fitting straight line and the second fitting straight line refer to two lane fitting straight lines corresponding to any two lane line bird's-eye views with a splicing area. First, a first intersection point of a first fitting straight line and a splicing seam of a target splicing area and a second intersection point of a second fitting straight line and a splicing seam of the target splicing area are obtained, wherein the target splicing area is a splicing area of any two lane line aerial views with splicing areas. Then, the distance between the first intersection and the second intersection is calculated, and the calculated distance is determined as the offset of the target splicing area.
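With the fitted lines parameterized as x = m*y + c (an assumed convention) and the splice seam modeled as a horizontal boundary y = y_seam, the offset computation reduces to evaluating both fitted lines at the seam; the sample fits are invented numbers:

```python
def seam_offset(line_a, line_b, y_seam):
    """Offset between two lane fitting lines at the stitching seam.
    Each line is (m, c) in x = m*y + c; the seam is modeled as the
    horizontal boundary y = y_seam of the splicing region."""
    ma, ca = line_a
    mb, cb = line_b
    xa = ma * y_seam + ca   # first intersection point
    xb = mb * y_seam + cb   # second intersection point
    return xb - xa          # signed offset in pixels (about cm at 1 px = 1 cm)

# Hypothetical fits of the same lane line seen in two adjacent views.
front_fit = (0.00, 100.0)   # lane line at x = 100 in the front view
left_fit = (0.00, 112.0)    # the same line lands at x = 112 in the left view
off = seam_offset(front_fit, left_fit, y_seam=200)
```

A 12-pixel offset here would mean the two views disagree by about 12 cm at the seam, which is exactly the quantity the later correction step must cancel.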
It should be noted that, based on the relevant parameters of the aerial view, the real length represented by one pixel on the aerial view in reality can be accurately obtained, that is, the aerial view can provide rich prior information related to the scale, so that the distance between the first intersection and the second intersection can truly reflect the degree of uneven splicing of the lane lines, and accurate calibration data can be provided for the subsequent correction of the homography matrix.
And 240, correcting the homography matrix calibrated in advance for the vehicle according to the offset of all the splicing areas to obtain a corrected homography matrix.
In some embodiments, the coordinates of the projection points of the plurality of cameras may be adjusted according to a preset adjustment method, where the preset adjustment method is used to eliminate the offsets of all the splicing areas. The homography matrices calibrated in advance for the vehicle are then corrected according to the adjusted projection point coordinates of the plurality of cameras to obtain the corrected homography matrices. Specifically, the pre-calibrated homography matrices may be labeled as H_front, H_back, H_left, and H_right.
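One plausible way to turn adjusted projection points into a corrected homography is to re-solve the four-point direct linear transform. This is a generic sketch (a small pure-Python Gaussian elimination standing in for a library call such as `cv2.getPerspectiveTransform`), not the patent's prescribed procedure; the point coordinates are invented:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_points(src, dst):
    """3x3 homography H (with h33 fixed to 1) mapping four source points
    to four destination points, via the standard 8x8 DLT system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, x, y):
    """Apply homography H to point (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical corrected projection points: the originals shifted by a
# measured seam offset of (2, 3) pixels (a pure translation, for clarity).
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
dst = [(x + 2.0, y + 3.0) for x, y in src]
H = homography_from_points(src, dst)
```

Shifting the destination points by the measured offsets and re-solving yields the corrected matrix directly, which matches the overall idea of correcting the homography from adjusted projection point coordinates.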
Taking a vehicle in which 4 cameras are arranged as an example, where the 4 cameras may be referred to as a front camera, a rear camera, a left camera, and a right camera, the process of adjusting the coordinates of the projection points of the plurality of cameras will now be described.
In some embodiments, the coordinates of the projection points of the front camera and the rear camera may be maintained unchanged, and the offset of all the stitching regions may be added to the coordinates of the corresponding projection points of the left camera and the right camera, that is, the positions of the front and the rear bird's-eye views are fixed, and the positions of the left and the right bird's-eye views are finely adjusted.
Specifically, the offset of the splicing area of the lane line aerial view shot by the left camera and the front camera respectively, for example, marked as offset_left_front, may be added to the left front projection point coordinate of the left camera; adding the offset of the splicing area of the lane line aerial view shot by each of the left camera and the rear camera, for example, the offset is marked as offset_left_back, to the left rear projection point coordinate of the left camera; adding the offset of the splicing area of the lane line aerial view shot by the right camera and the front camera to the right front projection point coordinate of the right camera, wherein the offset is marked as offset_right_front; an offset, e.g., labeled as offset_right_back, of a stitching region of the lane line aerial view captured by each of the right camera and the rear camera is added to the right rear projection point coordinates of the right camera.
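The bookkeeping in this fine-tuning step is simple coordinate addition. The data layout, point names, and offset values below are invented for illustration; the patent itself only names the four offsets offset_left_front, offset_left_back, offset_right_front, and offset_right_back:

```python
# Hypothetical pre-calibrated projection points of the side cameras (px).
proj = {
    "left": {"front": (80.0, 150.0), "back": (80.0, 650.0)},
    "right": {"front": (520.0, 150.0), "back": (520.0, 650.0)},
}

# Hypothetical measured offsets (dx, dy) from the four splicing areas.
offset_left_front = (3.0, -2.0)
offset_left_back = (-1.0, 4.0)
offset_right_front = (0.0, 2.0)
offset_right_back = (2.0, 0.0)

def shift(point, offset):
    """Add a splicing-area offset to a projection point coordinate."""
    return (point[0] + offset[0], point[1] + offset[1])

# Front and rear cameras stay fixed; only the side cameras are adjusted.
proj["left"]["front"] = shift(proj["left"]["front"], offset_left_front)
proj["left"]["back"] = shift(proj["left"]["back"], offset_left_back)
proj["right"]["front"] = shift(proj["right"]["front"], offset_right_front)
proj["right"]["back"] = shift(proj["right"]["back"], offset_right_back)
```

The alternative scheme (fixing the side views and adjusting the front and rear projection points instead) is the same arithmetic with the roles of the cameras swapped.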
In some embodiments, the coordinates of the projection points of the left camera and the right camera may be maintained unchanged, the offset of all the stitching regions may be added to the coordinates of the corresponding projection points of the front camera and the rear camera, that is, the positions of the left and right bird's-eye views are fixed, and the positions of the front and rear bird's-eye views are finely adjusted.
Specifically, the offset of the splicing area of the lane line aerial view shot by the front camera and the left camera can be added to the front left projection point coordinates of the front camera; adding the offset of the splicing area of the lane line aerial view shot by each of the front camera and the right camera to the front-right projection point coordinates of the front camera; adding the offset of the splicing area of the lane line aerial view shot by each of the rear camera and the left camera to the rear left projection point coordinate of the rear camera; and adding the offset of the splicing area of the lane line aerial view shot by each of the rear camera and the right camera to the rear right projection point coordinate of the rear camera.
And 250, performing aerial view stitching on pictures shot subsequently by a plurality of cameras in the vehicle by using the corrected homography matrix.
Therefore, after the corrected homography matrix is obtained, the homography matrix calibrated in advance for the vehicle can be replaced by the corrected homography matrix, and because the corrected homography matrix is obtained based on reverse calibration, calibration errors are eliminated, so that the problem that lane lines are spliced unevenly in the bird-eye view obtained by BEV function splicing caused by the calibration errors is solved, good visual effect can be provided for a driver, and driving experience of a user is greatly improved.
As shown in fig. 6, a schematic diagram of re-splicing fig. 1 using the corrected homography matrix is provided; it can be seen from fig. 6 that the problem of uneven lane line splicing in the circled area is well resolved.
In the embodiment of the application, a scheme for reversely calibrating pictures with the problem of uneven lane line splicing is provided so as to correct homography matrixes, so that the corrected homography matrixes can be utilized to splice aerial views of pictures shot by a plurality of cameras in a vehicle. The corrected homography matrix is obtained based on reverse calibration, so that calibration errors are eliminated, the problem that lane lines are spliced unevenly in the bird's eye view obtained by BEV function splicing caused by the calibration errors is solved, a good visual effect can be provided for a driver, and the driving experience of a user is greatly improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The following are device embodiments of the present application, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, and for convenience of explanation, only a portion related to the embodiment of the present application is shown, and the details are as follows:
as shown in fig. 7, the image processing apparatus includes:
the first obtaining module 710 is configured to obtain lane line images captured by a plurality of cameras in a vehicle at the same time, and perform aerial view conversion on the lane line images captured by the plurality of cameras to obtain aerial views of the plurality of lane lines;
the second obtaining module 720 is configured to perform preprocessing and hough transform on the aerial view of each lane line, so as to obtain a lane fitting straight line corresponding to the aerial view of each lane line;
the calculating module 730 is configured to sequentially calculate offsets of two lane fitting straight lines corresponding to two lane line aerial views with splicing areas in the corresponding splicing areas until offsets of all splicing areas corresponding to the lane line aerial views are obtained;
the correction module 740 is configured to correct the homography matrix calibrated in advance for the vehicle according to the offset of all the splicing areas, so as to obtain a corrected homography matrix;
and the stitching processing module 750 is configured to stitch the aerial views of the pictures subsequently shot by the plurality of cameras in the vehicle by using the corrected homography matrix.
In one possible implementation, the second acquisition module is further configured to:
sequentially performing binarization processing, distortion screening processing and quadrilateral screening processing on the target lane line aerial view to obtain lane graphics corresponding to the target lane line aerial view; the target lane line aerial view is any one lane line aerial view of the lane line aerial views;
performing edge detection processing on the lane graph to generate a lane edge mask;
performing Hough transformation on the lane edge mask to obtain linear coordinate data corresponding to the lane edge mask;
and performing straight line fitting on the straight line coordinate data corresponding to the lane edge mask to obtain a lane fitting straight line corresponding to the target lane line aerial view.
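As a rough sketch of this sub-pipeline, the snippet below binarizes a grayscale bird's-eye view and fits the straight line x = m·y + b through the bright lane-marking pixels. For brevity it substitutes a direct least-squares fit for the Hough-transform stage, and the threshold value is an illustrative assumption:

```python
import numpy as np

def fit_lane_line(bird_view_gray, threshold=200):
    """Binarize a bird's-eye view and fit x = m*y + b to the lane pixels.

    Fitting x as a function of y is convenient because lane lines are
    near-vertical in a bird's-eye view. Returns the pair (m, b).
    """
    mask = np.asarray(bird_view_gray) >= threshold  # binarization step
    ys, xs = np.nonzero(mask)                       # lane pixel coordinates
    m, b = np.polyfit(ys, xs, deg=1)                # least-squares line fit
    return m, b
```

In a full implementation the edge mask and Hough accumulator described above would replace the raw thresholding, but the fitted (m, b) parameterization is the same input the offset calculation needs.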
In one possible implementation, the computing module is further configured to:
acquiring a first intersection point of a first fitting straight line and a splicing seam of a target splicing area, and acquiring a second intersection point of a second fitting straight line and the splicing seam of the target splicing area; the first fitting straight line is either one of the two lane fitting straight lines, the second fitting straight line is the other of the two lane fitting straight lines, and the target splicing area is the splicing area shared by any two lane line aerial views;
and calculating the distance between the first intersection point and the second intersection point, and determining the calculated distance as the offset of the target splicing area.
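Under the convention that each fitted line is parameterized as x = m·y + b and the splicing seam is the horizontal row y = y_seam (both of which are our own illustrative assumptions), the offset computation reduces to:

```python
def seam_offset(line_a, line_b, y_seam):
    """Offset of two fitted lane lines at a horizontal splicing seam.

    line_a, line_b : (m, b) pairs describing lines x = m*y + b
    y_seam         : row of the bird's-eye mosaic where the seam lies
    Returns the horizontal distance between the two seam intersections.
    """
    m_a, b_a = line_a
    m_b, b_b = line_b
    x_a = m_a * y_seam + b_a  # first intersection point with the seam
    x_b = m_b * y_seam + b_b  # second intersection point with the seam
    return abs(x_a - x_b)
```

A zero offset means the two aerial views already agree at the seam; a nonzero offset is the quantity fed to the correction module.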
In one possible implementation, the correction module is further configured to:
adjust the projection point coordinates of the plurality of cameras in a preset adjustment mode according to the offsets of all the splicing areas; the preset adjustment mode is used to eliminate the offsets of all the splicing areas;
and correcting the homography matrix calibrated in advance by the vehicle according to the projection point coordinates of the cameras obtained through adjustment, and obtaining the homography matrix after correction.
In one possible implementation, the plurality of cameras are front, rear, left and right cameras arranged around the vehicle;
correspondingly, the correction module is also used for:
maintaining the projection point coordinates of the front camera and the rear camera unchanged, and adding the offsets of all splicing areas to the corresponding projection point coordinates of the left camera and the right camera;
alternatively, maintaining the projection point coordinates of the left camera and the right camera unchanged, and adding the offsets of all splicing areas to the corresponding projection point coordinates of the front camera and the rear camera.
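A sketch of this reverse-calibration step, under our own assumptions that each offset is expressed as a (dx, dy) shift of a projection point and that the homography is re-estimated from four point correspondences with a plain direct linear transform (in practice a routine such as OpenCV's findHomography would play the same role):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping 4 src points onto 4 dst points
    using the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)  # null-space vector holds the homography entries
    return H / H[2, 2]        # normalize so H[2][2] == 1

def correct_homography(src_points, proj_points, offsets):
    """Shift each projection point by its seam offset, then re-estimate."""
    adjusted = [(x + dx, y + dy)
                for (x, y), (dx, dy) in zip(proj_points, offsets)]
    return homography_from_points(src_points, adjusted)
```

When all offsets are zero the corrected matrix equals the original calibration; nonzero offsets move the camera's projection points so the lane lines meet at the seams.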
In one possible implementation, the correction module is further configured to:
adding the offset of the splicing area between the lane line aerial views shot by the left camera and the front camera to the left-front projection point coordinates of the left camera;
adding the offset of the splicing area between the lane line aerial views shot by the left camera and the rear camera to the left-rear projection point coordinates of the left camera;
adding the offset of the splicing area between the lane line aerial views shot by the right camera and the front camera to the right-front projection point coordinates of the right camera;
and adding the offset of the splicing area between the lane line aerial views shot by the right camera and the rear camera to the right-rear projection point coordinates of the right camera.
In one possible implementation, the correction module is further configured to:
adding the offset of the splicing area between the lane line aerial views shot by the front camera and the left camera to the front-left projection point coordinates of the front camera;
adding the offset of the splicing area between the lane line aerial views shot by the front camera and the right camera to the front-right projection point coordinates of the front camera;
adding the offset of the splicing area between the lane line aerial views shot by the rear camera and the left camera to the rear-left projection point coordinates of the rear camera;
and adding the offset of the splicing area between the lane line aerial views shot by the rear camera and the right camera to the rear-right projection point coordinates of the rear camera.
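The corner bookkeeping described in the two implementations above can be sketched as a small helper that shifts whichever projection corners have a measured seam offset and leaves the rest untouched; the corner names are illustrative only:

```python
def adjust_camera_corners(corners, seam_offsets):
    """Shift a camera's projection corners by per-seam offsets.

    corners      : {corner_name: (x, y)} projection point coordinates
    seam_offsets : {corner_name: (dx, dy)} offsets measured at the seams
    Corners without a measured offset are left unchanged.
    """
    adjusted = {}
    for name, (x, y) in corners.items():
        dx, dy = seam_offsets.get(name, (0.0, 0.0))
        adjusted[name] = (x + dx, y + dy)
    return adjusted
```

Running it once per camera, with the offset dictionary populated from the camera's two splicing areas, yields the adjusted projection points used to recompute the homography.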
In the embodiment of the application, the corrected homography matrix is obtained through reverse calibration, which eliminates calibration errors. This solves the problem of lane lines being spliced unevenly in the bird's eye view (BEV) produced by BEV stitching as a result of those calibration errors, so a good visual effect can be provided for the driver and the user's driving experience is greatly improved.
The present application also provides a computer program product having program code which, when run on a corresponding processor, controller, computing device or terminal, performs the steps of any of the image processing method embodiments described above, such as steps 210 to 250 shown in fig. 2. Those skilled in the art will appreciate that the methods and apparatus presented in the embodiments of the present application may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. The special purpose processor may include an application-specific integrated circuit (ASIC), a reduced instruction set computer (RISC) and/or a field-programmable gate array (FPGA). The proposed method and device are preferably implemented as a combination of hardware and software, with the software preferably installed as an application program on a program storage device. The program storage device is typically a machine based on a computer platform having hardware such as one or more central processing units (CPUs), random access memory (RAM), and one or more input/output (I/O) interfaces; an operating system is also typically installed on the computer platform. The various processes and functions described herein may be part of the application program, or parts of them may be executed via the operating system.
Fig. 8 is a schematic diagram of an electronic device 8 provided in an embodiment of the present application. As shown in fig. 8, the electronic device 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80. The steps of the various image processing method embodiments described above, such as steps 210 through 250 shown in fig. 2, are implemented when the processor 80 executes the computer program 82. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules of the apparatus embodiments described above, such as the functions of the modules 710-750 of fig. 7.
By way of example, the computer program 82 may be partitioned into one or more modules, which are stored in the memory 81 and executed by the processor 80 to implement the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 82 in the electronic device 8. For example, the computer program 82 may be partitioned into the modules 710 through 750 shown in fig. 7.
The electronic device 8 may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the electronic device 8 and does not limit it; the electronic device 8 may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device may also include input/output devices, network access devices, buses, and the like.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the electronic device 8, such as a hard disk or memory of the electronic device 8. The memory 81 may also be an external storage device of the electronic device 8, such as a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the electronic device 8. The memory 81 is used to store the computer program and other programs and data required by the electronic device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a vehicle; as shown in fig. 9, the vehicle 9 includes the electronic device 8 described above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described again here.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the above-described method embodiments may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the image processing method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth.
Furthermore, the features of the embodiments shown in the drawings or mentioned in the description of the present application are not necessarily to be construed as separate embodiments from each other. Rather, each feature described in one example of one embodiment may be combined with one or more other desired features from other embodiments, resulting in other embodiments not described in text or with reference to the drawings.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. An image processing method, comprising:
obtaining lane line pictures shot by a plurality of cameras in a vehicle at the same moment, and performing aerial view conversion on the lane line pictures shot by the plurality of cameras to obtain aerial views of a plurality of lane lines;
respectively performing preprocessing and Hough transformation on each lane line aerial view to obtain a lane fitting straight line corresponding to each lane line aerial view;
sequentially calculating, for each pair of lane line aerial views having a splicing area, the offset of the two corresponding lane fitting straight lines within the corresponding splicing area, until the offsets of all splicing areas corresponding to the lane line aerial views are obtained;
correcting the homography matrix calibrated in advance for the vehicle according to the offset of all the splicing areas to obtain a corrected homography matrix;
and performing bird's eye view stitching on pictures shot subsequently by a plurality of cameras in the vehicle by using the corrected homography matrix.
2. The image processing method according to claim 1, wherein the preprocessing and hough transformation are performed on each lane line aerial view respectively to obtain a lane fitting straight line corresponding to each lane line aerial view, and the method comprises:
sequentially performing binarization processing, distortion screening processing and quadrilateral screening processing on the target lane line aerial view to obtain a lane graph corresponding to the target lane line aerial view; wherein the target lane line bird's-eye view is any one lane line bird's-eye view among the plurality of lane line bird's-eye views;
performing edge detection processing on the lane graph to generate a lane edge mask;
performing Hough transformation on the lane edge mask to obtain linear coordinate data corresponding to the lane edge mask;
and performing straight line fitting on the straight line coordinate data corresponding to the lane edge mask to obtain a lane fitting straight line corresponding to the target lane line aerial view.
3. The image processing method according to claim 1, wherein the sequentially calculating the offset of the two lane fitting straight lines corresponding to any two lane line aerial views having a splicing area, within the corresponding splicing area, comprises:
acquiring a first intersection point of a first fitting straight line and a splicing seam of a target splicing area, and acquiring a second intersection point of a second fitting straight line and the splicing seam of the target splicing area; wherein the first fitting straight line is either one of the two lane fitting straight lines, the second fitting straight line is the other of the two lane fitting straight lines, and the target splicing area is the splicing area of any two lane line aerial views having a splicing area;
and calculating the distance between the first intersection point and the second intersection point, and determining the calculated distance as the offset of the target splicing area.
4. The image processing method according to claim 1, wherein the correcting the homography matrix calibrated in advance for the vehicle according to the offset of all the splicing areas to obtain the corrected homography matrix includes:
adjusting the projection point coordinates of the plurality of cameras in a preset adjustment mode according to the offsets of all the splicing areas; wherein the preset adjustment mode is used to eliminate the offsets of all the splicing areas;
and correcting the homography matrix calibrated in advance by the vehicle according to the projection point coordinates of the cameras obtained through adjustment, so as to obtain a corrected homography matrix.
5. The image processing method according to claim 4, wherein the plurality of cameras are a front camera, a rear camera, a left camera, and a right camera arranged around the vehicle;
and adjusting the coordinates of the projection points of the plurality of cameras according to the offset of all the splicing areas and a preset adjustment mode, wherein the method comprises the following steps:
maintaining the projection point coordinates of the front camera and the rear camera unchanged, and adding the offset of all the splicing areas to the corresponding projection point coordinates of the left camera and the right camera;
or, maintaining the projection point coordinates of the left camera and the right camera unchanged, and adding the offset of all the stitching regions to the corresponding projection point coordinates of the front camera and the rear camera.
6. The image processing method according to claim 5, wherein the adding the offsets of all the stitching regions to the respective projection point coordinates of the left camera and the right camera comprises:
adding the offset of the splicing area of the lane line aerial view shot by each of the left camera and the front camera to the left front projection point coordinate of the left camera;
adding the offset of the splicing area of the lane line aerial view shot by each of the left camera and the rear camera to the left rear projection point coordinate of the left camera;
adding the offset of the splicing area of the lane line aerial view shot by each of the right camera and the front camera to the right front projection point coordinate of the right camera;
and adding the offset of the splicing area of the lane line aerial view shot by each of the right camera and the rear camera to the right rear projection point coordinate of the right camera.
7. The image processing method according to claim 5, wherein the adding the offsets of all the stitching regions to the respective projection point coordinates of the front camera and the rear camera comprises:
adding the offset of the splicing area of the lane line aerial view shot by each of the front camera and the left camera to the front left projection point coordinate of the front camera;
adding the offset of the splicing area of the lane line aerial view shot by each of the front camera and the right camera to the front right projection point coordinates of the front camera;
adding the offset of the splicing area of the lane line aerial view shot by each of the rear camera and the left camera to rear left projection point coordinates of the rear camera;
and adding the offset of the splicing area of the lane line aerial view shot by each of the rear camera and the right camera to the rear right projection point coordinate of the rear camera.
8. An image processing apparatus, comprising:
the first acquisition module is used for acquiring lane line pictures shot by a plurality of cameras in the vehicle at the same moment, and performing aerial view conversion on the lane line pictures shot by the plurality of cameras to obtain aerial views of a plurality of lane lines;
the second acquisition module is used for respectively preprocessing and Hough transformation of the lane line aerial views to obtain lane fitting straight lines corresponding to the lane line aerial views;
the calculation module is used for sequentially calculating the offset of two lane fitting straight lines corresponding to the lane line aerial views with splicing areas in the corresponding splicing areas until the offset of all splicing areas corresponding to the lane line aerial views is obtained;
the correction module is used for correcting the homography matrix calibrated in advance for the vehicle according to the offset of all the splicing areas to obtain a corrected homography matrix;
and the splicing processing module is used for splicing the aerial view of the pictures shot by the cameras in the vehicle subsequently by utilizing the corrected homography matrix.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of the preceding claims 1 to 7 when the computer program is executed.
10. A vehicle comprising the electronic device of claim 9.
CN202310091686.9A 2023-02-09 2023-02-09 Image processing method and device, electronic equipment and vehicle Pending CN116228535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310091686.9A CN116228535A (en) 2023-02-09 2023-02-09 Image processing method and device, electronic equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310091686.9A CN116228535A (en) 2023-02-09 2023-02-09 Image processing method and device, electronic equipment and vehicle

Publications (1)

Publication Number Publication Date
CN116228535A true CN116228535A (en) 2023-06-06

Family

ID=86581866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310091686.9A Pending CN116228535A (en) 2023-02-09 2023-02-09 Image processing method and device, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN116228535A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116991114A (en) * 2023-09-26 2023-11-03 西安交通大学 Method for measuring and compensating and correcting splicing errors in laser processing
CN116991114B (en) * 2023-09-26 2023-12-19 西安交通大学 Method for measuring and compensating and correcting splicing errors in laser processing

Similar Documents

Publication Publication Date Title
CN101710932B (en) Image stitching method and device
CN112927306B (en) Calibration method and device of shooting device and terminal equipment
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle
CN114202588B (en) Method and device for quickly and automatically calibrating vehicle-mounted panoramic camera
CN111652937B (en) Vehicle-mounted camera calibration method and device
CN111768332A (en) Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device
CN112330755B (en) Calibration evaluation method and device of all-round system, storage medium and terminal
CN116228535A (en) Image processing method and device, electronic equipment and vehicle
CN113658262A (en) Camera external parameter calibration method, device, system and storage medium
CN111243034A (en) Panoramic auxiliary parking calibration method, device, equipment and storage medium
CN115082565A (en) Camera calibration method, device, server and medium
CN111488762A (en) Lane-level positioning method and device and positioning equipment
CN117495676A (en) Panoramic all-around image stitching method and device, electronic equipment and storage medium
CN116757935A (en) Image fusion splicing method and system of fisheye camera and electronic equipment
CN116630401A (en) Fish-eye camera ranging method and terminal
CN114663521A (en) All-round-view splicing processing method for assisting parking
CN115937839A (en) Large-angle license plate image recognition method, calculation equipment and storage medium
CN115619636A (en) Image stitching method, electronic device and storage medium
JP2020095628A (en) Image processing device and image processing method
CN108665501A (en) Automobile viewing system three-dimensional scaling scene and the scaling method for using the scene
CN115018926A (en) Method, device and equipment for determining pitch angle of vehicle-mounted camera and storage medium
CN113255405A (en) Parking space line identification method and system, parking space line identification device and storage medium
JP2016111585A (en) Image processing system, system, image processing method, and program
CN113157835B (en) Image processing method, device and platform based on GIS platform and storage medium
CN113516722B (en) Vehicle camera calibration method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination