CN115587935A - Panoramic surround-view image stitching method and device, electronic device, and storage medium

Panoramic surround-view image stitching method and device, electronic device, and storage medium

Info

Publication number
CN115587935A
Authority
CN
China
Prior art keywords
pixel
pixel point
panoramic
image
fisheye
Prior art date
Legal status
Pending
Application number
CN202211278247.0A
Other languages
Chinese (zh)
Inventor
阮善恩
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority to CN202211278247.0A
Publication of CN115587935A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a panoramic surround-view image stitching method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a panoramic surround-view initialization image and partition information of the image, where the partition information comprises four individual observation areas, four common observation areas, and the area where the vehicle body is located; determining a dual-weighted fusion strategy for each pixel in the common observation areas; constructing a pixel correspondence mapping table according to the partition information of the panoramic surround-view initialization image, the dual-weighted fusion strategy of each pixel in the common observation areas, and the intrinsic and extrinsic parameters of the four fisheye cameras; and acquiring the four fisheye images captured by the four fisheye cameras, obtaining the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table, and writing that value at the pixel position to obtain the stitched panoramic surround-view image. The method offers high stitching efficiency and good stitching quality.

Description

Panoramic surround-view image stitching method and device, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing, and in particular to a panoramic surround-view image stitching method and device, an electronic device, and a storage medium.
Background
In the field of autonomous driving, the around view monitor (AVM) is part of the automatic parking system and is a highly practical function that greatly improves both user experience and driving safety. The system uses four fisheye cameras mounted at the front, rear, left, and right of the vehicle body to capture the environment around the vehicle in real time, and software then synthesizes the captured data into a 360-degree panoramic surround-view image. Combined with image processing techniques, the system enables functions such as parking-space recognition, obstacle detection, and drivable-area recognition, and is therefore an important component of an automatic parking system.
Existing software generates the 360-degree panoramic surround-view image in real time through image stitching, the process of taking two or more images that share a common area, finding the geometric transformation between adjacent images with an algorithm, and merging them into a single image. While researching and practicing stitching algorithms for panoramic surround-view images, the inventors of the present application found that existing stitching algorithms suffer from low stitching efficiency and from boundary effects in the overlap areas, such as garbled stitching and ghosting.
Disclosure of Invention
Based on the above problems in the prior art, embodiments of the present application provide a panoramic surround-view image stitching method and device, an electronic device, and a storage medium, so as to stitch panoramic surround-view images efficiently while avoiding boundary effects in the result.
The embodiment of the application adopts the following technical scheme:
In a first aspect, an embodiment of the present application provides a panoramic surround-view image stitching method, the method comprising:
acquiring a panoramic surround-view initialization image and partition information of the image, where the partition information comprises four individual observation areas, four common observation areas, and the area where the vehicle body is located;
determining a dual-weighted fusion strategy for each pixel in the common observation areas, where the dual-weighted fusion strategy comprises an angle weight and a distance weight for the pixel;
constructing a pixel correspondence mapping table according to the partition information of the panoramic surround-view initialization image, the dual-weighted fusion strategy of each pixel in the common observation areas, and the intrinsic and extrinsic parameters of the four fisheye cameras mounted at the front, rear, left, and right of the vehicle body, where the mapping table indicates, for each pixel of an individual observation area, the corresponding pixel of the fisheye image captured by the corresponding camera, and indicates, for each pixel of a common observation area, the corresponding pixels of the two fisheye images captured by the two corresponding cameras together with the associated angle and distance weights;
and acquiring the four fisheye images captured by the four fisheye cameras, obtaining the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table, and writing that value at the pixel position to obtain the stitched panoramic surround-view image.
Optionally, determining the dual-weighted fusion strategy for each pixel in the common observation areas comprises:
acquiring the pixel position of the pixel in the common observation area, the pixel positions of the two fisheye cameras corresponding to the common observation area, and the positions of the boundary lines between the common observation area and the two adjacent individual observation areas;
determining how far the pixel deviates toward each of the two boundary lines from the pixel position and the boundary-line positions, and obtaining the angle weight of the pixel from those deviations;
and determining the relative distances between the pixel and the two fisheye cameras from the pixel position and the camera pixel positions, and obtaining the distance weight of the pixel from those relative distances.
Optionally, the boundary-line positions of the common observation area are acquired by:
acquiring the vehicle corner point corresponding to the common observation area;
and obtaining the boundary-line positions of the common observation area from the pixel coordinates of the vehicle corner point.
Optionally, determining how far the pixel deviates toward the two boundary lines from the pixel position and the boundary-line positions, and obtaining the angle weight of the pixel from those deviations, comprises:
acquiring the horizontal pixel distance between the horizontal pixel coordinate of the pixel in the common observation area and that of the corresponding vehicle corner point, and the vertical pixel distance between the vertical pixel coordinate of the pixel and that of the corner point;
and acquiring the sum of the horizontal and vertical pixel distances, taking the ratio of the horizontal pixel distance to the sum as the first angle weight and the ratio of the vertical pixel distance to the sum as the second angle weight.
Optionally, the partition information of the panoramic surround-view initialization image is acquired by:
acquiring the pixel size of the initialized panoramic surround-view initialization image and the physical length and width of the vehicle;
taking the center pixel of the panoramic surround-view initialization image as the pixel center of the vehicle, and determining the pixel positions of the vehicle's four corner points in the image from the image pixel size and the vehicle's physical length and width;
and dividing the panoramic surround-view initialization image into nine rectangular areas with the pixel positions of the four vehicle corner points as reference points, to obtain the partition information.
Optionally, dividing the panoramic surround-view initialization image into nine rectangular areas with the pixel positions of the four vehicle corner points as reference points, to obtain the partition information, comprises:
taking the rectangular sub-area in the center of the panoramic surround-view initialization image as the area where the vehicle body is located;
taking the rectangular sub-areas directly above, directly below, directly to the left of, and directly to the right of the vehicle-body area as, respectively, the individual observation areas of the fisheye cameras at the front, rear, left, and right of the vehicle body;
and taking the rectangular sub-areas to the upper left, upper right, lower left, and lower right of the vehicle-body area as, respectively, the common observation areas of the front and left cameras, the front and right cameras, the rear and left cameras, and the rear and right cameras.
Optionally, obtaining the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table comprises:
creating an independent pixel-value processing thread for each pixel in the panoramic surround-view initialization image, where the thread reads the corresponding pixel values of the corresponding fisheye images from the mapping table and performs interpolation;
and running all pixel-value processing threads in parallel on the CUDA parallel computing architecture to obtain the pixel value of each pixel in the panoramic surround-view initialization image.
In a second aspect, an embodiment of the present application further provides a panoramic surround-view image stitching device, the device comprising:
an image initialization unit for acquiring a panoramic surround-view initialization image and partition information of the image, where the partition information comprises four individual observation areas, four common observation areas, and the area where the vehicle body is located;
a weight initialization unit for determining a dual-weighted fusion strategy for each pixel in the common observation areas, where the dual-weighted fusion strategy comprises an angle weight and a distance weight for the pixel;
a mapping table initialization unit for constructing a pixel correspondence mapping table according to the partition information of the panoramic surround-view initialization image, the dual-weighted fusion strategy of each pixel in the common observation areas, and the intrinsic and extrinsic parameters of the four fisheye cameras mounted at the front, rear, left, and right of the vehicle body, where the mapping table indicates, for each pixel of an individual observation area, the corresponding pixel of the fisheye image captured by the corresponding camera, and indicates, for each pixel of a common observation area, the corresponding pixels of the two fisheye images captured by the two corresponding cameras together with the associated angle and distance weights;
and an image stitching unit for acquiring the four fisheye images captured by the four fisheye cameras, obtaining the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table, and writing that value at the pixel position to obtain the stitched panoramic surround-view image.
In a third aspect, an embodiment of the present application further provides an electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform any of the panoramic surround-view image stitching methods described above.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising multiple application programs, cause the electronic device to perform any of the panoramic surround-view image stitching methods described above.
The embodiments of the present application adopt at least one technical solution that achieves the following beneficial effects. A panoramic surround-view initialization image is initialized first. Since the relative positions of the four fisheye cameras mounted on the vehicle body are fixed once calibrated, fixing the pixel size of the initialization image also fixes the correspondence between each of its pixels and the pixels of the four fisheye images, as well as the deviation angle of each common-observation-area pixel relative to the adjacent individual observation areas and its relative distances to the fisheye cameras. After image initialization, a pixel correspondence mapping table from the initialization image to the fisheye images can therefore be constructed; the table expresses both the pixel correspondences between the initialization image and the fisheye images and the angle and distance weights assigned to the corresponding fisheye pixel values.
In other words, the panoramic surround-view initialization image and the pixel correspondence mapping table provide a stitching template for the panoramic surround-view image: during stitching, any set of fisheye images can be stitched quickly on the basis of this template, yielding a panoramic surround-view image free of boundary problems.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a panoramic surround-view image stitching method in an embodiment of the present application;
fig. 2 is a schematic view of the partition of a panoramic surround-view initialization image in an embodiment of the present application;
fig. 3 is a schematic diagram of the four fisheye images in an embodiment of the present application;
fig. 4 is a schematic diagram of the stitching result of a panoramic surround-view image in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a panoramic surround-view image stitching device in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
An embodiment of the present application provides a panoramic surround-view image stitching method. Fig. 1 is a schematic flowchart of the method, which comprises at least the following steps S110 to S140:
Step S110: acquire a panoramic surround-view initialization image and its partition information, where the partition information comprises four individual observation areas, four common observation areas, and the area where the vehicle body is located.
The panoramic surround-view image stitching method is executed on the vehicle side. Four fisheye cameras (front, rear, left, and right) are mounted on the vehicle body in advance and, once calibrated, capture data normally in real time. For stitching, a blank panoramic surround-view initialization image is first initialized; then, based on the positional relationship between the four fisheye cameras and the vehicle, the image is partitioned into the nine sub-areas shown in fig. 2. The first, third, seventh, and ninth sub-areas (labeled 1, 3, 7, and 9 in fig. 2) are the four common observation areas, i.e., areas visible to two fisheye cameras at once. The second, fourth, sixth, and eighth sub-areas (labeled 2, 4, 6, and 8 in fig. 2) are the four individual observation areas, each visible only to its corresponding fisheye camera. The fifth sub-area (labeled 5 in fig. 2) is the area where the vehicle body is located; it cannot be observed by any of the four fisheye cameras.
Step S120: determine a dual-weighted fusion strategy for each pixel in the common observation areas, where the dual-weighted fusion strategy comprises an angle weight and a distance weight for the pixel.
Since a common observation area is visible to two fisheye cameras at once, each of its pixels in the panoramic surround-view initialization image corresponds to a pixel in each of two fisheye images. Taking pixel (i, j) in the first sub-area of fig. 2 as an example, it corresponds both to a pixel of the front fisheye image captured by the front camera and to a pixel of the left fisheye image captured by the left camera. Because the fisheye cameras have calibration errors and imaging errors, fusing the corresponding pixel value val_f of the front fisheye image and val_l of the left fisheye image directly with equal weights produces obvious boundary problems such as blurring, garbling, and cracking across the first, second, and fourth sub-areas, so the overlap area must be smoothed.
To address this boundary problem, the present application sets a dual-weighted fusion strategy for each pixel in the common observation area: the more the pixel deviates toward one of the individual observation areas, the larger the angle weight given to the pixel value from the fisheye camera corresponding to that area; and the closer the pixel is to one of the fisheye cameras, the larger the distance weight given to the pixel value from that camera. Under these two constraints, the fused common observation area shows no boundary problems.
Step S130: construct a pixel correspondence mapping table according to the partition information of the panoramic surround-view initialization image, the dual-weighted fusion strategy of each pixel in the common observation areas, and the intrinsic and extrinsic parameters of the four fisheye cameras mounted at the front, rear, left, and right of the vehicle body.
The pixel correspondence mapping table indicates, for each pixel of an individual observation area, the corresponding pixel of the fisheye image captured by the corresponding camera; for each pixel of a common observation area, it indicates the corresponding pixels of the two fisheye images captured by the two corresponding cameras, together with the associated angle and distance weights.
For example, the mapping table can be a two-channel table. For a pixel in a common observation area, the first channel stores the identifier of one fisheye camera, the corresponding pixel in its fisheye image, and the associated angle and distance weights, while the second channel stores the same for the other camera. For a pixel in an individual observation area, the first channel stores the identifier of the corresponding camera and the corresponding pixel in its fisheye image, and the second channel is empty. For a pixel in the area where the vehicle body is located, the first channel stores the fill value, for example a gray value of 255, and the second channel is empty.
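To make the layout concrete, below is a minimal C++ sketch of one entry of such a two-channel mapping table. The field names and types, and the use of a camera-identifier sentinel for an empty channel, are illustrative assumptions, not a layout prescribed by the application.

```cpp
#include <cstdint>

// One channel of a mapping-table entry: which fisheye image to sample,
// where to sample it (floating-point, for the later interpolation step),
// and the angle/distance weights used by the dual-weighted fusion.
struct MapChannel {
    int8_t camId;   // 0 = front, 1 = rear, 2 = left, 3 = right; -1 = empty channel
    float  srcX;    // x coordinate in the source fisheye image
    float  srcY;    // y coordinate in the source fisheye image
    float  wAngle;  // angle weight for this camera's pixel value
    float  wDist;   // distance weight for this camera's pixel value
};

// One entry per pixel of the panoramic surround-view initialization image:
// individual observation areas use only ch[0]; common observation areas use
// both channels; vehicle-body pixels carry a preset fill value instead.
struct MapEntry {
    MapChannel ch[2];
    uint8_t    fillValue;  // used only for vehicle-body pixels, e.g. 255
    bool       isBody;     // true if the pixel lies in the vehicle-body area
};
```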
Step S140: acquire the four fisheye images captured by the four fisheye cameras, obtain the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table, and write that value at the pixel position to obtain the stitched panoramic surround-view image.
As the stitching method of fig. 1 shows, this embodiment first initializes a panoramic surround-view initialization image. Since the relative positions of the four calibrated fisheye cameras and the vehicle are fixed, fixing the pixel size of the initialization image also fixes the correspondence between each of its pixels and the pixels of the four fisheye images, as well as the deviation angle of each common-observation-area pixel relative to the adjacent individual observation areas and its relative distances to the fisheye cameras. After image initialization, a pixel correspondence mapping table from the initialization image to the fisheye images can therefore be constructed; it expresses both the pixel correspondences and the angle and distance weights assigned to the corresponding fisheye pixel values.
That is to say, the panoramic surround-view initialization image and the pixel correspondence mapping table provide a stitching template: during stitching, any set of fisheye images can be stitched quickly on the basis of this template, yielding a panoramic surround-view image free of boundary problems.
In some embodiments of the present application, the panoramic surround-view initialization image is a bird's-eye view (BEV); that is, an inverse perspective transformation relates it to each fisheye image, so the correspondence between each of its pixels and the pixels of the corresponding fisheye image can be obtained from that transformation.
When the panoramic surround-view initialization image is initialized, its pixel size can be chosen as required; the pixel size is the number of pixels along the length direction of the image and the number of pixels along its width direction (the length and width being physical dimensions, in centimeters or inches).
Based on the above, the method obtains the partition information of the panoramic surround-view initialization image through the following steps:
acquiring the pixel size of the initialized panoramic surround-view initialization image and the physical length and width of the vehicle;
taking the center pixel of the panoramic surround-view initialization image as the pixel center of the vehicle, determining the pixel positions of the vehicle's four corner points in the image from the image pixel size and the vehicle's physical length and width, where P1, P2, P3, and P4 in fig. 2 are the four vehicle corner points;
and dividing the panoramic surround-view initialization image into nine rectangular areas with the pixel positions of the four vehicle corner points as reference points, for example dividing the whole image into nine rectangles using the straight lines P1P2, P3P4, P1P3, and P2P4, to obtain the partition information.
In this embodiment, the rectangular sub-area in the center of the panoramic surround-view initialization image is taken as the area where the vehicle body is located;
the rectangular sub-areas directly above, directly below, directly to the left of, and directly to the right of the vehicle-body area are taken as the individual observation areas of the fisheye cameras at the front, rear, left, and right of the vehicle body, respectively;
and the rectangular sub-areas to the upper left, upper right, lower left, and lower right of the vehicle-body area are taken as the common observation areas of the front and left cameras, the front and right cameras, the rear and left cameras, and the rear and right cameras, respectively, as sketched below.
Since the fisheye cameras on the left and right of the vehicle body can hardly capture the ground directly in front of or directly behind the vehicle, dividing the areas by the vehicle corner points guarantees the accuracy of the partition. In other embodiments, the partition may also be calibrated from test images: a test chart with test elements is laid on the ground around the vehicle, the four fisheye cameras on the vehicle body capture four test images of the chart, and the partition information is obtained by analyzing the test images.
Because each pixel in a common observation area corresponds to a pixel in each of two fisheye images, the dual-weighted fusion strategy of each such pixel is determined through the following steps:
acquiring the pixel position of the pixel in the common observation area, the pixel positions of the two fisheye cameras corresponding to the common observation area, and the positions of the boundary lines between the common observation area and the two adjacent individual observation areas;
determining how far the pixel deviates toward each of the two boundary lines from the pixel position and the boundary-line positions, and obtaining the angle weight of the pixel from those deviations;
and determining the relative distances between the pixel and the two fisheye cameras from the pixel position and the camera pixel positions, and obtaining the distance weight of the pixel from those relative distances.
In this embodiment, the vehicle corner point corresponding to each common observation area is obtained; in fig. 2, corner point P1 corresponds to the common observation area labeled 1, P2 to the area labeled 3, P3 to the area labeled 7, and P4 to the area labeled 9. The boundary line between the common observation area labeled 1 and the individual observation area labeled 2 is then the vertical line through P1, and the boundary line between the common observation area labeled 1 and the individual observation area labeled 4 is the horizontal line through P1.
Based on this, the embodiment obtains the horizontal pixel distance between the horizontal pixel coordinate of a pixel in the common observation area and that of the corresponding vehicle corner point, and the vertical pixel distance between their vertical pixel coordinates; it then computes the sum of the two distances, takes the ratio of the horizontal pixel distance to the sum as the first angle weight, and takes the ratio of the vertical pixel distance to the sum as the second angle weight.
The embodiment also computes a first relative distance between the pixel and the first fisheye camera from the horizontal and vertical pixel coordinates of the pixel and those of the first camera; similarly, it computes a second relative distance between the pixel and the second fisheye camera from the pixel coordinates and those of the second camera. It then computes the sum of the two relative distances, takes the ratio of the first relative distance to the sum as the first distance weight, and takes the ratio of the second relative distance to the sum as the second distance weight.
Here the first and second fisheye cameras are the two cameras corresponding to the common observation area; the first camera, first angle weight, and first distance weight correspond to the first channel of the pixel correspondence mapping table, and the second camera, second angle weight, and second distance weight correspond to the second channel.
After the dual-weighted fusion strategy of each pixel in the common observation areas is obtained, the pixel correspondence mapping table of the present application can be constructed.
To construct the table, a vehicle body coordinate system is built first. In this embodiment, a point near the middle of the vehicle head is taken as the origin (0, 0), with the Y axis along the heading direction, the X axis toward the vehicle's left, and the Z axis upward. The current intrinsic parameters K of the four fisheye cameras and their extrinsic parameters relative to the vehicle are then obtained, where the extrinsics comprise a rotation matrix R and a translation matrix T. The correspondence between each pixel of the panoramic surround-view initialization image and a pixel of the fisheye image captured by the corresponding camera can then be obtained by the inverse perspective projection principle, as sketched below.
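For a single ground point, this inverse perspective correspondence can be sketched as follows in C++. For brevity an ideal equidistant fisheye model (r = f * theta) is assumed here; a real implementation would apply each camera's calibrated distortion model instead, and the helper types are defined only for this sketch.

```cpp
#include <cmath>

// Minimal 3x3 / 3x1 linear algebra for the sketch.
struct Vec3 { double x, y, z; };
struct Mat3 {
    double m[3][3];
    Vec3 operator*(const Vec3& v) const {
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
    }
};

// Map a ground point (X, Y, 0) in the vehicle body frame to fisheye image
// coordinates (u, v), given extrinsics (R, T: body frame -> camera frame)
// and the intrinsics fx, fy, cx, cy taken from K.
bool groundToFisheye(const Mat3& R, const Vec3& T,
                     double fx, double fy, double cx, double cy,
                     double X, double Y, double& u, double& v) {
    Vec3 pc = R * Vec3{X, Y, 0.0};               // rotate into the camera frame
    pc = {pc.x + T.x, pc.y + T.y, pc.z + T.z};   // then translate
    if (pc.z <= 0.0) return false;               // behind the camera
    double rxy   = std::sqrt(pc.x * pc.x + pc.y * pc.y);
    double theta = std::atan2(rxy, pc.z);        // angle from the optical axis
    double scale = (rxy > 1e-12) ? theta / rxy : 0.0;
    u = cx + fx * scale * pc.x;                  // equidistant projection r = f * theta
    v = cy + fy * scale * pc.y;
    return true;
}
```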
Once the pixel correspondences and the dual-weighted fusion strategy of each common-observation-area pixel are obtained, the two-channel pixel correspondence mapping table can be constructed.
Because the present application builds the pixel correspondence mapping table during the initialization phase of the stitching pipeline, actual operation only requires looking up the corresponding fisheye pixel values through the table, which greatly increases stitching speed and saves memory.
Then, whenever four fisheye images are captured by the four fisheye cameras, the region of the panoramic surround-view initialization image containing a pixel (i, j) can be determined. If (i, j) lies in an individual observation area, the corresponding fisheye pixel value is found through the mapping table and interpolated to give the value of (i, j). If (i, j) lies in a common observation area, the corresponding pixel values of the two fisheye images are found through the mapping table, interpolated, and fused under the dual weights to give the value of (i, j). If (i, j) lies in the area where the vehicle body is located, a pixel value is generated by a preset method: either the value stored in the mapping table is used as the value of (i, j), or the value is produced from the region test itself, for example by using a preset value as soon as (i, j) is determined to lie in the vehicle-body area.
The pixel value of a pixel in the common observation area is computed as follows.
Assume the pixel in the common observation area labeled 1 has coordinates (i, j), point P1 has pixel coordinates (x1, y1) on the panoramic surround-view initialization image, the left fisheye camera corresponds to pixel coordinates (cx_l, cy_l), and the front fisheye camera corresponds to (cx_f, cy_f); the pixel positions of the four fisheye cameras on the initialization image can be computed from the extrinsic parameters of the cameras relative to the vehicle.
Then the first angle weight w11 and the second angle weight w12 of pixel (i, j) are:
w11=dx/(dx+dy)
w12=dy/(dx+dy)
where dx = abs(i - x1), dy = abs(j - y1), and abs() is the absolute-value operator.
The first distance weight w21 and the second distance weight w22 of pixel (i, j) are:
w21=dist1/(dist1+dist2)
w22=dist2/(dist1+dist2)
where dist1 = sqrt(pow(i - cx_l, 2) + pow(j - cy_l, 2)), dist2 = sqrt(pow(i - cx_f, 2) + pow(j - cy_f, 2)), pow(a, 2) squares a, and sqrt() takes the square root.
Here the first angle weight w11 and the first distance weight w21 are the dual weights assigned to the corresponding pixel value val_l of the left fisheye image, and the second angle weight w12 and the second distance weight w22 are the dual weights assigned to the corresponding pixel value val_f of the front fisheye image.
Therefore, the pixel value val of pixel (i, j) is obtained as:
val = ((w11 + w21) * val_l + (w12 + w22) * val_f) / 2.0
Since w11 + w12 = 1 and w21 + w22 = 1, the coefficients (w11 + w21) / 2 and (w12 + w22) / 2 also sum to 1, so val is a convex combination of the two source pixel values.
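Putting the two weightings together, here is a minimal C++ sketch of the fused-value computation for one common-observation-area pixel; variable names follow the formulas above, and the interpolated source values val_l and val_f are taken as inputs.

```cpp
#include <cmath>

// Dual-weighted fusion of one pixel (i, j) in the common observation area
// labeled 1 (front/left overlap). (x1, y1): corner point P1; (cx_l, cy_l)
// and (cx_f, cy_f): pixel positions of the left and front fisheye cameras
// on the surround-view image; val_l, val_f: interpolated source values.
double fusePixel(double i, double j, double x1, double y1,
                 double cx_l, double cy_l, double cx_f, double cy_f,
                 double val_l, double val_f) {
    // Angle weights: deviation of (i, j) from the two boundary lines through P1.
    double dx = std::fabs(i - x1), dy = std::fabs(j - y1);
    double w11 = dx / (dx + dy);
    double w12 = dy / (dx + dy);
    // Distance weights: relative distances to the two camera positions.
    double dist1 = std::hypot(i - cx_l, j - cy_l);
    double dist2 = std::hypot(i - cx_f, j - cy_f);
    double w21 = dist1 / (dist1 + dist2);
    double w22 = dist2 / (dist1 + dist2);
    // Combined convex combination of the two source values.
    return ((w11 + w21) * val_l + (w12 + w22) * val_f) / 2.0;
}
```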
For a pixel (u, v) in an individual observation area, the mapping table gives the pixel on the fisheye image that corresponds one-to-one to (u, v), and that pixel's value is used as the value of (u, v).
It should be noted that, to obtain a better stitching result, the mapping coordinates are stored as floating-point values; therefore, after the corresponding fisheye pixel values are obtained, the present application further interpolates them, for example with the bilinear interpolation algorithm sketched below, and the value of each pixel of the panoramic surround-view initialization image is computed from the interpolated values.
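For reference, a minimal C++ sketch of bilinear sampling at a floating-point source coordinate; a single-channel 8-bit row-major image is assumed for brevity.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Bilinearly sample a single-channel 8-bit image (row-major, w x h) at the
// floating-point mapping coordinates (x, y) stored in the mapping table.
double bilinearSample(const std::vector<uint8_t>& img, int w, int h,
                      double x, double y) {
    int x0 = std::clamp(static_cast<int>(std::floor(x)), 0, w - 1);
    int y0 = std::clamp(static_cast<int>(std::floor(y)), 0, h - 1);
    int x1 = std::min(x0 + 1, w - 1);
    int y1 = std::min(y0 + 1, h - 1);
    double fx = x - x0, fy = y - y0;  // fractional parts
    double top = (1.0 - fx) * img[y0 * w + x0] + fx * img[y0 * w + x1];
    double bot = (1.0 - fx) * img[y1 * w + x0] + fx * img[y1 * w + x1];
    return (1.0 - fy) * top + fy * bot;
}
```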
In the embodiment of the present application, the four fisheye images of fig. 3 are stitched into the panoramic surround-view image of fig. 4 by this stitching method; as fig. 4 shows, the method overcomes the boundary problems of the overlap areas, and the image content in the overlap areas of the panoramic surround-view image is smooth.
In addition, because the mapping of each pixel of the panoramic surround-view initialization image is independent of every other pixel's, the pixel values can be generated quickly in parallel on the CUDA parallel computing architecture.
CUDA is a parallel computing platform and programming model that uses the processing power of the graphics processing unit (GPU) to greatly increase computing performance and the efficiency of the stitching computation.
Specifically, in some embodiments of the present application, obtaining the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table comprises:
creating an independent pixel-value processing thread for each pixel in the panoramic surround-view initialization image, where the thread reads the corresponding pixel values of the corresponding fisheye images from the mapping table and performs interpolation;
and running all pixel-value processing threads in parallel on the CUDA parallel computing architecture to obtain the pixel value of each pixel in the panoramic surround-view initialization image, as sketched below.
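A minimal CUDA sketch of this per-pixel parallelism follows, one thread per output pixel. It assumes the illustrative MapEntry layout sketched earlier, gray-scale images, and a __device__ bilinear sampler analogous to the host-side sampler above; none of these names come from the application itself.

```cuda
#include <cstdint>

// Assumed to be defined elsewhere: a __device__ version of the bilinear
// sampler sketched above (raw pointer instead of std::vector).
__device__ double bilinearSampleDev(const uint8_t* img, int w, int h,
                                    float x, float y);

// One thread per pixel of the surround-view output. `table` is the
// precomputed pixel correspondence mapping table (one MapEntry per output
// pixel); `src` holds device pointers to the four fisheye images
// (gray-scale, row-major, all srcW x srcH); `dst` is the stitched output.
__global__ void stitchKernel(const MapEntry* table,
                             const uint8_t* const* src, int srcW, int srcH,
                             uint8_t* dst, int dstW, int dstH) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    const MapEntry e = table[y * dstW + x];
    if (e.isBody) {                        // vehicle-body area: preset fill value
        dst[y * dstW + x] = e.fillValue;
        return;
    }
    // The first channel is always populated outside the vehicle-body area.
    double val = bilinearSampleDev(src[e.ch[0].camId], srcW, srcH,
                                   e.ch[0].srcX, e.ch[0].srcY);
    if (e.ch[1].camId >= 0) {              // common observation area: fuse two samples
        double v2 = bilinearSampleDev(src[e.ch[1].camId], srcW, srcH,
                                      e.ch[1].srcX, e.ch[1].srcY);
        val = ((e.ch[0].wAngle + e.ch[0].wDist) * val +
               (e.ch[1].wAngle + e.ch[1].wDist) * v2) / 2.0;
    }
    dst[y * dstW + x] = static_cast<uint8_t>(val + 0.5);  // round to nearest
}
```

A host-side launch such as stitchKernel<<<grid, dim3(16, 16)>>>(...), with a grid covering the dstW x dstH output, would then regenerate the full surround view once per camera frame.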
The embodiment of the present application further provides a panoramic surround-view image stitching device 500. Fig. 5 is a schematic structural diagram of the device, which comprises an image initialization unit 510, a weight initialization unit 520, a mapping table initialization unit 530, and an image stitching unit 540, where:
the image initialization unit 510 is configured to acquire a panoramic surround-view initialization image and partition information of the image, where the partition information comprises four individual observation areas, four common observation areas, and the area where the vehicle body is located;
the weight initialization unit 520 is configured to determine a dual-weighted fusion strategy for each pixel in the common observation areas, where the dual-weighted fusion strategy comprises an angle weight and a distance weight for the pixel;
the mapping table initialization unit 530 is configured to construct a pixel correspondence mapping table according to the partition information of the panoramic surround-view initialization image, the dual-weighted fusion strategy of each pixel in the common observation areas, and the intrinsic and extrinsic parameters of the four fisheye cameras mounted at the front, rear, left, and right of the vehicle body, where the mapping table indicates, for each pixel of an individual observation area, the corresponding pixel of the fisheye image captured by the corresponding camera, and indicates, for each pixel of a common observation area, the corresponding pixels of the two fisheye images captured by the two corresponding cameras together with the associated angle and distance weights;
and the image stitching unit 540 is configured to acquire the four fisheye images captured by the four fisheye cameras, obtain the pixel value of each pixel in the panoramic surround-view initialization image from the pixel correspondence mapping table, and write that value at the pixel position to obtain the stitched panoramic surround-view image.
In an embodiment of the present application, the weight initialization unit 520 is configured to acquire the pixel position of each pixel in the common observation area, the pixel positions of the two fisheye cameras corresponding to the common observation area, and the positions of the boundary lines between the common observation area and the two adjacent individual observation areas; determine how far the pixel deviates toward each of the two boundary lines from the pixel position and the boundary-line positions, and obtain the angle weight of the pixel from those deviations; and determine the relative distances between the pixel and the two fisheye cameras from the pixel position and the camera pixel positions, and obtain the distance weight of the pixel from those relative distances.
In an embodiment of the present application, the weight initialization unit 520 comprises a data acquisition module and an angle weight calculation module;
the data acquisition module is configured to acquire the vehicle corner point corresponding to the common observation area, and to obtain the boundary-line positions of the common observation area from the pixel coordinates of the corner point;
the angle weight calculation module is configured to acquire the horizontal pixel distance between the horizontal pixel coordinate of a pixel in the common observation area and that of the corresponding vehicle corner point, and the vertical pixel distance between their vertical pixel coordinates; and to compute the sum of the two distances, take the ratio of the horizontal pixel distance to the sum as the first angle weight, and take the ratio of the vertical pixel distance to the sum as the second angle weight.
In an embodiment of the present application, the image initialization unit 510 is configured to acquire the pixel size of the initialized panoramic surround-view initialization image and the physical length and width of the vehicle; take the center pixel of the initialization image as the pixel center of the vehicle and determine the pixel positions of the vehicle's four corner points in the image from the image pixel size and the vehicle's physical length and width; and divide the initialization image into nine rectangular areas with those corner positions as reference points, to obtain the partition information.
In an embodiment of the present application, the image initialization unit 510 is specifically configured to take the rectangular sub-area in the center of the initialization image as the area where the vehicle body is located; take the rectangular sub-areas directly above, below, to the left of, and to the right of the vehicle-body area as the individual observation areas of the front, rear, left, and right fisheye cameras, respectively; and take the rectangular sub-areas to the upper left, upper right, lower left, and lower right of the vehicle-body area as the common observation areas of the front and left, front and right, rear and left, and rear and right camera pairs, respectively.
In an embodiment of the present application, the image stitching unit 540 is further configured to create an independent pixel-value processing thread for each pixel in the panoramic surround-view initialization image, where the thread reads the corresponding pixel values of the corresponding fisheye images from the pixel correspondence mapping table and performs interpolation, and to run all pixel-value processing threads in parallel on the CUDA parallel computing architecture to obtain the pixel value of each pixel in the panoramic surround-view initialization image.
It can be understood that the above panoramic surround-view image stitching device can implement the steps of the panoramic surround-view image stitching method provided in the foregoing embodiments; the relevant explanations of the method apply equally to the device and are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, at the hardware level the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include main memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include the hardware required by other services.
The processor, the network interface, and the memory may be connected to one another via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 6, but this does not mean there is only one bus or one type of bus.
The memory is used to store a program; specifically, the program may include program code comprising computer operating instructions. The memory may include both main memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into main memory and runs it, forming the panoramic surround-view image stitching device at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
acquiring a panoramic all-round looking initialized image and partition information of the panoramic all-round looking initialized image, wherein the partition information of the panoramic all-round looking initialized image comprises four independent observation areas, four common observation areas and an area where an automobile body is located;
determining a dual-weighted fusion strategy corresponding to each pixel point in the common observation area, wherein the dual-weighted fusion strategy comprises angle weighting and distance weighting corresponding to the pixel point;
constructing a pixel point correspondence mapping table according to the partition information of the panoramic all-around initialization image, the dual-weighted fusion strategy corresponding to each pixel point in the common observation area, and the internal and external parameters of the four fisheye cameras installed at the front, rear, left, and right of the vehicle body, wherein the pixel point correspondence mapping table indicates the correspondence between a pixel point of an independent observation area and a pixel point of the fisheye image collected by the corresponding fisheye camera, and indicates the correspondence between a pixel point of a common observation area and the pixel points of the two fisheye images collected by the corresponding fisheye cameras, together with the corresponding angle weighting and distance weighting;
and acquiring the four fisheye images collected by the four fisheye cameras, acquiring the pixel value corresponding to each pixel point in the panoramic all-around initialization image according to the pixel point correspondence mapping table, and writing each pixel value into the position of its pixel point to obtain the spliced panoramic all-around image.
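To make the dual weighting in the operations above concrete, the following C++ sketch computes both weights for one pixel of a common observation area. The angle weighting mirrors the ratio construction spelled out in claim 4 below; the patent does not give an explicit distance-weighting formula beyond "according to the relative distance between the pixel point and the two fisheye cameras", so the normalized inverse-distance form here is an assumption, and all names (DualWeights, dualWeights) are hypothetical.

#include <cmath>

struct DualWeights { float angle0, angle1, dist0, dist1; };

// (px, py): pixel in a common observation area; (cx, cy): the corresponding
// vehicle corner point; (c0x, c0y) and (c1x, c1y): pixel positions of the two
// fisheye cameras observing this area.
DualWeights dualWeights(float px, float py, float cx, float cy,
                        float c0x, float c0y, float c1x, float c1y) {
    const float eps = 1e-6f;  // guards the degenerate pixel-on-corner case
    DualWeights w;
    // Angle weighting (claim 4): ratios of the horizontal and vertical pixel
    // distances to the vehicle corner point over their sum.
    float dx = std::fabs(px - cx), dy = std::fabs(py - cy);
    w.angle0 = dx / (dx + dy + eps);
    w.angle1 = dy / (dx + dy + eps);
    // Distance weighting (assumed form): the camera closer to the pixel
    // receives the larger weight; the two weights sum to one.
    float d0 = std::hypot(px - c0x, py - c0y);
    float d1 = std::hypot(px - c1x, py - c1y);
    w.dist0 = d1 / (d0 + d1 + eps);
    w.dist1 = d0 / (d0 + d1 + eps);
    return w;
}

Since these weights depend only on the fixed geometry of the partition and the camera mounting, they need computing only once, at mapping-table initialization time, which is what keeps the per-frame splicing cost low.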
The method performed by the panoramic all-around image splicing apparatus according to the embodiment shown in fig. 1 of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the panoramic all-around image splicing method in combination with its hardware.
The electronic device may further execute the method performed by the panoramic all-around image splicing device of fig. 1 and implement the functions of that device in the embodiment shown in fig. 1, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by an electronic device including multiple application programs, enable the electronic device to perform the method performed by the panoramic all-around image splicing apparatus in the embodiment shown in fig. 1, and specifically to perform:
acquiring a panoramic all-around initialization image and partition information of the panoramic all-around initialization image, wherein the partition information comprises four independent observation areas, four common observation areas, and the area where the vehicle body is located;
determining a dual-weighted fusion strategy corresponding to each pixel point in the common observation area, wherein the dual-weighted fusion strategy comprises angle weighting and distance weighting corresponding to the pixel point;
constructing a pixel point correspondence mapping table according to the partition information of the panoramic all-around initialization image, the dual-weighted fusion strategy corresponding to each pixel point in the common observation area, and the internal and external parameters of the four fisheye cameras installed at the front, rear, left, and right of the vehicle body, wherein the pixel point correspondence mapping table indicates the correspondence between a pixel point of an independent observation area and a pixel point of the fisheye image collected by the corresponding fisheye camera, and indicates the correspondence between a pixel point of a common observation area and the pixel points of the two fisheye images collected by the corresponding fisheye cameras, together with the corresponding angle weighting and distance weighting;
and acquiring the four fisheye images collected by the four fisheye cameras, acquiring the pixel value corresponding to each pixel point in the panoramic all-around initialization image according to the pixel point correspondence mapping table, and writing each pixel value into the position of its pixel point to obtain the spliced panoramic all-around image.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus comprising the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A panoramic all-around image splicing method, characterized by comprising the following steps:
acquiring a panoramic all-around initialization image and partition information of the panoramic all-around initialization image, wherein the partition information comprises four independent observation areas, four common observation areas, and the area where the vehicle body is located;
determining a dual-weighted fusion strategy corresponding to each pixel point in the common observation area, wherein the dual-weighted fusion strategy comprises angle weighting and distance weighting corresponding to the pixel point;
constructing a pixel point correspondence mapping table according to the partition information of the panoramic all-around initialization image, the dual-weighted fusion strategy corresponding to each pixel point in the common observation area, and the internal and external parameters of the four fisheye cameras installed at the front, rear, left, and right of the vehicle body, wherein the pixel point correspondence mapping table indicates the correspondence between a pixel point of an independent observation area and a pixel point of the fisheye image collected by the corresponding fisheye camera, and indicates the correspondence between a pixel point of a common observation area and the pixel points of the two fisheye images collected by the corresponding fisheye cameras, together with the corresponding angle weighting and distance weighting;
and acquiring the four fisheye images collected by the four fisheye cameras, acquiring the pixel value corresponding to each pixel point in the panoramic all-around initialization image according to the pixel point correspondence mapping table, and writing each pixel value into the position of its pixel point to obtain the spliced panoramic all-around image.
2. The method of claim 1, wherein determining the dual-weighted fusion strategy corresponding to each pixel point in the common observation area comprises:
acquiring the pixel position of a pixel point in the common observation area, the pixel positions of the two fisheye cameras corresponding to the common observation area, and the positions of the boundary lines between the common observation area and the two adjacent independent observation areas;
determining the degree of deviation of the pixel point with respect to the two boundary lines according to the pixel position of the pixel point in the common observation area and the positions of the boundary lines of the common observation area, and obtaining the angle weighting corresponding to the pixel point according to the degree of deviation of the pixel point with respect to the two boundary lines;
and determining the relative distances between the pixel point and the two fisheye cameras according to the pixel position of the pixel point in the common observation area and the pixel positions of the two fisheye cameras corresponding to the common observation area, and obtaining the distance weighting corresponding to the pixel point according to the relative distances between the pixel point and the two fisheye cameras.
3. The method of claim 2, wherein the positions of the boundary lines of the common observation area are obtained by:
acquiring the vehicle corner point corresponding to the common observation area;
and obtaining the positions of the boundary lines of the common observation area according to the pixel coordinates of the vehicle corner point.
4. The method as claimed in claim 3, wherein determining the degree of deviation of the pixel point with respect to the two boundary lines according to the pixel position of the pixel point in the common observation area and the positions of the boundary lines of the common observation area, and obtaining the angle weighting corresponding to the pixel point according to the degree of deviation of the pixel point with respect to the two boundary lines, comprises:
acquiring the horizontal pixel distance value between the horizontal pixel coordinate of a pixel point in the common observation area and the horizontal pixel coordinate of the corresponding vehicle corner point, and acquiring the vertical pixel distance value between the vertical pixel coordinate of the pixel point and the vertical pixel coordinate of the corresponding vehicle corner point;
and acquiring the pixel distance sum of the horizontal pixel distance value and the vertical pixel distance value, taking the ratio of the horizontal pixel distance value to the pixel distance sum as a first angle weighting, and taking the ratio of the vertical pixel distance value to the pixel distance sum as a second angle weighting.
5. The method of claim 1, wherein the partition information of the panoramic all-around initialization image is acquired by:
acquiring the pixel size of the panoramic all-around initialization image and the physical length and width of the vehicle;
determining the pixel positions of the four vehicle corner points of the vehicle in the panoramic all-around initialization image according to the pixel size of the panoramic all-around initialization image and the physical length and width of the vehicle, with the image center pixel point of the panoramic all-around initialization image taken as the pixel center of the vehicle;
and dividing the panoramic all-around initialization image into nine rectangular areas with the pixel positions of the four vehicle corner points as reference points, to obtain the partition information of the panoramic all-around initialization image.
6. The method as claimed in claim 5, wherein dividing the panoramic all-around initialization image into nine rectangular areas with the pixel positions of the four vehicle corner points as reference points to obtain the partition information of the panoramic all-around initialization image comprises:
taking the rectangular sub-area located in the central area of the panoramic all-around initialization image as the area where the vehicle body is located;
sequentially taking the rectangular sub-areas directly above, directly below, to the left of, and to the right of the area where the vehicle body is located as the independent observation area of the fisheye camera at the front of the vehicle body, the independent observation area of the fisheye camera at the rear of the vehicle body, the independent observation area of the fisheye camera on the left of the vehicle body, and the independent observation area of the fisheye camera on the right of the vehicle body;
and sequentially taking the rectangular sub-areas at the upper left, upper right, lower left, and lower right of the area where the vehicle body is located as the common observation area of the front and left fisheye cameras, the common observation area of the front and right fisheye cameras, the common observation area of the rear and left fisheye cameras, and the common observation area of the rear and right fisheye cameras.
7. The method as claimed in claim 1, wherein obtaining the pixel value corresponding to each pixel point in the panoramic all-around initialization image according to the pixel point correspondence mapping table comprises:
creating an independent pixel value processing thread for each pixel point in the panoramic all-around initialization image, wherein the pixel value processing thread is used for reading the corresponding pixel value from the corresponding fisheye image according to the pixel point correspondence mapping table and performing interpolation calculation;
and running all pixel value processing threads in parallel through the CUDA parallel computing architecture to obtain the pixel value corresponding to each pixel point in the panoramic all-around initialization image.
8. A panoramic all-around image splicing device, the device comprising:
the system comprises an image initialization unit, a processing unit and a display unit, wherein the image initialization unit is used for acquiring a panoramic all-around initialization image and partition information of the panoramic all-around initialization image, and the partition information of the panoramic all-around initialization image comprises four independent observation areas, four common observation areas and an area where a vehicle body is located;
the weight initialization unit is used for determining a dual-weighted fusion strategy corresponding to each pixel point in the common observation area, and the dual-weighted fusion strategy comprises angle weighting and distance weighting corresponding to the pixel point;
the mapping table initialization unit is used for constructing a pixel point corresponding relation mapping table according to partition information of the panoramic all-around vision initialization image, a double weighting fusion strategy corresponding to each pixel point in the common observation area and internal and external parameters of four fisheye cameras arranged in front of, behind, on the left of and on the right of a vehicle body, wherein the pixel point corresponding relation mapping table indicates the corresponding relation between the pixel point of the single observation area and the pixel point of the fisheye image collected by the corresponding fisheye camera, and indicates the corresponding relation between the pixel point of the common observation area and the pixel points of the two fisheye images collected by the corresponding fisheye cameras and the corresponding angle weighting and distance weighting;
and an image splicing unit, configured to acquire the four fisheye images collected by the four fisheye cameras, acquire the pixel value corresponding to each pixel point in the panoramic all-around initialization image according to the pixel point correspondence mapping table, and write each pixel value into the position of its pixel point to obtain the spliced panoramic all-around image.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the panoramic all-around image splicing method of any one of claims 1 to 7.
10. A computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the panoramic all-around image splicing method according to any one of claims 1 to 7.
CN202211278247.0A 2022-10-19 2022-10-19 Panoramic all-around image splicing method and device, electronic equipment and storage medium Pending CN115587935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211278247.0A CN115587935A (en) 2022-10-19 2022-10-19 Panoramic all-around image splicing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211278247.0A CN115587935A (en) 2022-10-19 2022-10-19 Panoramic all-around image splicing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115587935A true CN115587935A (en) 2023-01-10

Family

ID=84779950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211278247.0A Pending CN115587935A (en) 2022-10-19 2022-10-19 Panoramic all-around image splicing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115587935A (en)

Similar Documents

Publication Publication Date Title
US12080025B2 (en) Camera-only-localization in sparse 3D mapped environments
CN107945112B (en) Panoramic image splicing method and device
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN109754363B (en) Around-the-eye image synthesis method and device based on fish eye camera
CN115797454B (en) Multi-camera fusion sensing method and device under bird's eye view angle
CN109509153A (en) A kind of panorama mosaic method and system of towed vehicle image
CN112215747A (en) Method and device for generating vehicle-mounted panoramic picture without vehicle bottom blind area and storage medium
CN114705121A (en) Vehicle pose measuring method and device, electronic equipment and storage medium
CN114119992A (en) Multi-mode three-dimensional target detection method and device based on image and point cloud fusion
WO2021110497A1 (en) Estimating a three-dimensional position of an object
CN114332349B (en) Binocular structured light edge reconstruction method, system and storage medium
CN117495676A (en) Panoramic all-around image stitching method and device, electronic equipment and storage medium
CN113610927A (en) AVM camera parameter calibration method and device and electronic equipment
CN110148086B (en) Depth filling method and device for sparse depth map and three-dimensional reconstruction method and device
CN116012805B (en) Target perception method, device, computer equipment and storage medium
CN115587935A (en) Panoramic all-around image splicing method and device, electronic equipment and storage medium
CN117315046A (en) Method and device for calibrating looking-around camera, electronic equipment and storage medium
US20210073560A1 (en) Apparatus and method for providing top view image of parking space
JP6266340B2 (en) Lane identification device and lane identification method
US10778950B2 (en) Image stitching method using viewpoint transformation and system therefor
CN117315035B (en) Vehicle orientation processing method and device and processing equipment
Miljković et al. Vehicle Distance Estimation Based on Stereo Camera System with Implementation on a Real ADAS Board
CN118397593A (en) Method and related device for detecting obstacle
CN116188601A (en) Parameter adjustment method and device and electronic equipment
CN115965525A (en) Method and device for generating ground feature point cloud around vehicle body

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination