CN112435161A - Panoramic all-around image splicing method and system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112435161A
Authority
CN
China
Prior art keywords
image
pixel
fisheye
determining
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011265428.0A
Other languages
Chinese (zh)
Inventor
童柏琛
朱磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mushroom Car Union Information Technology Co Ltd
Original Assignee
Mushroom Car Union Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mushroom Car Union Information Technology Co Ltd filed Critical Mushroom Car Union Information Technology Co Ltd
Priority to CN202011265428.0A priority Critical patent/CN112435161A/en
Publication of CN112435161A publication Critical patent/CN112435161A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides a panoramic all-around image splicing method and system, an electronic device and a storage medium, wherein the method comprises: obtaining a fisheye image; determining the mapping relation of pixel points in the fisheye image with respect to the image that has a common visible area with the fisheye image; and converting the fisheye image into a bird's-eye view according to the mapping relation and obtaining a panoramic all-around image from the bird's-eye view. While ensuring the video display frame rate, the embodiment of the invention can further eliminate overly obvious splicing seams, achieve gradual transition at image splicing positions, and reduce the computational complexity of the splicing process.

Description

Panoramic all-around image splicing method and system, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a panoramic all-around image splicing method and system, electronic equipment and a storage medium.
Background
When an automobile passes through a narrow road or dense traffic and is parked in a narrow parking lot, the driver's limited field of view makes collisions likely, causing unnecessary losses. In the area of automatic-parking environment perception, the application of visual sensors to automatic parking has driven the rapid development of 360-degree panoramic all-around viewing technology.
Four fisheye cameras arranged at the front, rear, left and right of the vehicle body obtain the environmental information around the body in real time, achieving zero-dead-angle detection of the surroundings and improving the driver's ability to monitor blind areas. Combined with image processing techniques, this also enables functions such as parking-space identification, obstacle detection and drivable-area identification, and is an important component of an automatic parking system.
Because the exposure consistency of the four cameras and the continuity of the splicing seams must be considered, existing panoramic splicing methods repeatedly traverse most pixel values of the four images when computing each frame, which greatly degrades real-time image data processing on a low-compute platform.
Therefore, how to reduce the computational complexity of the splicing process while ensuring the video display frame rate has become an urgent problem to be solved.
Disclosure of Invention
In view of the defects in the prior art, embodiments of the invention provide a panoramic all-around image splicing method and system, an electronic device and a storage medium, which at least solve the technical problems that converting fisheye images into a panoramic all-around image is computationally complex and that real-time image conversion is difficult to guarantee on a low-compute platform.
In a first aspect, an embodiment of the present invention provides a method for stitching panoramic images, including:
obtaining a fisheye image;
determining the mapping relation of pixel points in the fisheye image with respect to the image that has a common visible area with the fisheye image;
and converting the fisheye image into a bird's-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird's-eye view.
Optionally, in the panoramic all-around image stitching method,
the determining a mapping relation of pixel points in the fisheye image with respect to an image having a common visible area with the fisheye image includes:
determining whether the image coordinates of any pixel point in the fisheye image are in the common visible area;
if the pixel point belongs to the common visible area, obtaining the image coordinates of the pixel point and the corresponding image coordinates in the image that has the common visible area with the fisheye image, to form a mapping relation;
and if the pixel point does not belong to the common visible area, determining that the pixel point is unrelated to the image that has the common visible area with the fisheye image.
Optionally, in the panoramic all-around image stitching method,
the converting the fisheye image into the aerial view according to the mapping relationship comprises:
for the pixel points belonging to the public visible area, determining the pixel values of the pixel points in the aerial view according to the pixel values of the pixel points corresponding to the image coordinates in the fisheye image, the pixel values of the pixel points corresponding to the image coordinates in the image of the fisheye image in the public visible area and the mapping relation;
and for the pixel points which do not belong to the public visible area, determining the pixel values of the pixel points in the aerial view according to the pixel values of the fisheye images corresponding to the image coordinates of the pixel points.
Optionally, in the panoramic all-around image stitching method,
before determining the pixel value of the pixel point in the bird's-eye view according to the pixel value corresponding to the pixel point's image coordinates in the fisheye image, the pixel value corresponding to its image coordinates in the image that has the common visible area with the fisheye image, and the mapping relation, the method further comprises:
determining a first pixel value and a second pixel value of a first pixel point;
comparing the difference in color space between the first pixel value and the second pixel value;
determining whether the pixel value difference is greater than a preset threshold;
the first pixel point is located in the public visible area; the first pixel value is a pixel value corresponding to a first pixel point in the fisheye image; and the second pixel point is the pixel value of the first pixel point corresponding to the image of the public visible area of the fisheye image.
Optionally, in the panoramic all-around image stitching method,
the determining, for the pixel points belonging to the public visible region, the pixel values of the pixel points in the bird's-eye view image according to the pixel values of the pixel points in the fish-eye image corresponding to the image coordinates, the pixel values of the pixel points in the fish-eye image corresponding to the image in the public visible region, and the mapping relationship specifically includes:
if the pixel value difference is not larger than the preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a distance weighted fusion method;
and if the pixel value difference is larger than a preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a constant value weighting fusion method.
Optionally, in the panoramic all-around image stitching method,
if the pixel value difference is not larger than the preset threshold, determining the pixel value corresponding to the first pixel point in the bird's-eye view by using a distance-weighted fusion method, including:
obtaining the distances between the image coordinates of the first pixel point and the two projected boundaries of the common visible area;
and using the distances to the two projected boundaries of the common visible area as weights, performing weighted fusion of the pixel values of the first pixel point and determining the pixel value corresponding to the first pixel point in the bird's-eye view.
Optionally, in the panoramic all-around image stitching method,
if the pixel value difference is larger than the preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a constant value weighted fusion method, including:
and according to a preset weight value, realizing the weighted fusion of the pixel values of the first pixel point, and determining the corresponding pixel value of the pixel point in the aerial view.
In a second aspect, an embodiment of the present invention provides a panoramic view image stitching system, including:
the acquisition module is used for acquiring a fisheye image;
the determining module is used for determining the mapping relation of pixel points in the fisheye image with respect to the image that has a common visible area with the fisheye image;
and the conversion module is used for converting the fisheye image into a bird's-eye view according to the mapping relation and obtaining a panoramic all-around view image according to the bird's-eye view.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, wherein the processor and the memory communicate with each other through a bus; the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the steps of the panoramic all-around image splicing method.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the panoramic annular image stitching method.
According to the panoramic all-around image splicing method and system, electronic device and storage medium, by determining the mapping relation of each pixel point in the image, the algorithm processing can be concentrated mainly in the common visible areas of the four fisheye cameras, and the fused complete bird's-eye view can be obtained with only a single traversal of all pixel points.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a panoramic view image stitching method according to an embodiment of the present invention;
fig. 2 is a fisheye image obtained by a forward camera according to an embodiment of the present invention;
fig. 3 is a fisheye image obtained by a left-side camera according to an embodiment of the present invention;
FIG. 4 is a front bird's eye view provided by an embodiment of the present invention;
FIG. 5 is a left aerial view provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a front aerial view and a left aerial view spliced together according to an embodiment of the present invention;
FIG. 7 is a 360 panoramic annular view provided by an embodiment of the present invention;
FIG. 8 is a schematic view illustrating an unbalanced local viewing angle of the bird's eye view according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a constant value weighted fusion method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a panoramic all-around image stitching system according to an embodiment of the present invention;
fig. 11 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a 360-degree all-around scenario, four fisheye cameras are used: the left and right cameras are mounted below the rear-view mirrors, the front camera is mounted at the vehicle logo, and the rear camera is mounted at the trunk. Between each two adjacent cameras there is a 20%-30% common viewing area.
The current mainstream 360-degree all-around splicing technique comprises the following steps: distortion correction using the intrinsic parameters of each fisheye camera; calibration of the extrinsic parameters of the four fisheye cameras after they are fixed at the four positions outside the vehicle body, with a bird's-eye view obtained through perspective transformation; and, to ensure that the bird's-eye view has no sense of incongruity in the transition areas, calculation of the splicing seams in the common visible areas of adjacent cameras while balancing the exposure, because when the vehicle is in two different illumination scenes simultaneously, the different exposure levels of adjacent cameras produce obvious bright-dark transitions at the splicing seams.
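The perspective-transformation step above can be illustrated with a small sketch. This is a hedged, minimal example only: the 3x3 homography `H` that maps ground-plane bird's-eye coordinates to undistorted image coordinates would in practice come from the extrinsic calibration described in the patent, and the matrices used here are arbitrary placeholders, not real calibration results.

```python
# Hedged sketch of the perspective (top-view) transformation step: a 3x3
# homography H maps a bird's-eye ground-plane point to image coordinates.
# The matrix values in any usage below are placeholders, not a calibration.

def apply_homography(H, x, y):
    """Map point (x, y) through the homography H (row-major 3x3 list of lists)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # divide out the projective scale
```

With the identity matrix the point is unchanged; with a calibrated `H` this lookup tells each bird's-eye pixel which source-image coordinate to sample.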
Fig. 2 is a fisheye image obtained by the forward camera according to an embodiment of the present invention, and fig. 3 is a fisheye image obtained by the left-side camera according to an embodiment of the present invention.
Fig. 4 is a forward bird's-eye view provided by an embodiment of the present invention, and fig. 5 is a left bird's-eye view provided by an embodiment of the present invention; after intrinsic-parameter distortion correction is applied to each fisheye image according to the prior art, the forward and left bird's-eye views shown in figs. 4 and 5 are obtained.
Fig. 6 is a schematic diagram illustrating splicing a forward aerial view and a left aerial view according to an embodiment of the present invention, where as shown in fig. 6, if adjacent forward aerial views and left aerial views are spliced, a certain overlapping area exists between the spliced aerial views, and a black line in the diagram is a boundary of the overlapping area. This part is the common viewable area between the fisheye image (forward) and the fisheye image adjacent to it (left).
Based on the fact that fisheye images, once converted into bird's-eye views, overlap where they are spliced, the invention provides a panoramic all-around image splicing method in which the four fisheye images are directly converted into the panoramic all-around image in a single pass, without repeated conversion calculations, saving computational resources and improving the real-time performance of image conversion. Fig. 1 is a flowchart of a panoramic all-around image splicing method according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
step S1, obtaining a fisheye image;
step S2, determining the mapping relation of pixel points in the fisheye image with respect to the image that has a common visible area with the fisheye image;
and step S3, converting the fisheye image into a bird 'S-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird' S-eye view.
Specifically, for convenience of explanation, the embodiment of the present invention takes the splicing and fusion of the fisheye images obtained by the left-side and front cameras as an example; the splicing and fusion of fisheye images obtained by other adjacent cameras follows the same principle and is not repeated here.
In step S1, fisheye images are acquired by the fisheye camera, and the fisheye images are as shown in fig. 2 and 3, taking the forward direction and the left side in the four fisheye images as an example.
In step S2, a mapping relationship between any one pixel in the fisheye image and an adjacent fisheye image is determined. It should be noted that, for convenience of explanation, in the embodiment of the present invention, the adjacent fisheye image is an image representing a common visible region with the fisheye image.
In specific implementation, for each pixel point, if the pixel point is not in the overlapping area range of the aerial view, the pixel point only corresponds to the mapping relation of one fisheye image; and if the pixel point is in the overlapping area range of the aerial view, the pixel point corresponds to the mapping relation of the two adjacent fisheye images.
It should be noted that the size of the overlapping area range is related to the size of the common visible area of two adjacent fisheye cameras set in advance, and may be specifically adjusted according to the actual situation, which is not limited in the embodiment of the present invention.
In step S3, the fisheye images are corrected according to the mapping relations of all pixel points obtained in step S2 and undergo top-view transformation; the four fisheye images are then directly merged and fused based on the pixel values of the corresponding pixel points in their bird's-eye views, converting the fisheye images into a bird's-eye view from which the panoramic all-around image is obtained.
It should be noted that figs. 4 to 6 serve only as auxiliary illustrations of this solution; in practice the solution does not convert the four fisheye images into four separate bird's-eye views and then perform image splicing. Instead, the four fisheye images are directly converted into one complete bird's-eye view (i.e., the panoramic all-around image) according to the pixel values of the corresponding pixel points after top-view transformation and the mapping relation of each pixel point, as shown in fig. 7; fig. 7 is a 360-degree panoramic all-around view provided by an embodiment of the invention.
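The single-traversal idea of steps S1 to S3 can be sketched as a precomputed lookup table consumed in one pass over the output pixels. This is an illustrative sketch only: the lookup-table format, camera names and fusion weight `w` are assumptions for demonstration, and real tables would come from the undistortion and perspective calibration above.

```python
# Hypothetical sketch of the single-pass stitching loop: each bird's-eye
# pixel is resolved from a precomputed lookup table, so every output pixel
# is visited exactly once (no per-frame re-traversal of the four images).

def stitch(lookup, images, width, height):
    """Build the bird's-eye view in one traversal of its pixels.

    lookup[(x, y)] is either (cam, u, v) for a pixel seen by one camera,
    or ((cam1, u1, v1), (cam2, u2, v2), w) for a pixel in a common visible
    area, where w is the fusion weight for cam1's pixel value.
    """
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            entry = lookup[(x, y)]
            if isinstance(entry[0], str):
                cam, u, v = entry                      # single-camera pixel
                out[y][x] = images[cam][v][u]
            else:
                (c1, u1, v1), (c2, u2, v2), w = entry  # overlap pixel: fuse
                p1 = images[c1][v1][u1]
                p2 = images[c2][v2][u2]
                out[y][x] = w * p1 + (1.0 - w) * p2
    return out
```

The fusion work is thereby confined to the common visible areas, matching the patent's claim that the complete fused bird's-eye view requires only one traversal of all pixel points.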
It should be noted that, in this embodiment, four fisheye images are used, and in addition, the position and the number of the fisheye cameras can be adjusted according to actual requirements, so as to further adjust the number of the fisheye images used, which is not limited in this embodiment.
Further, on the basis of the embodiment, the obtained plane image can be converted into a three-dimensional image, so that the user experience is improved, and the relative position relationship between the vehicle and the object is reflected more clearly.
According to the panoramic all-around image splicing method provided by the embodiment of the invention, by determining the mapping relation of each pixel point in the image, the algorithm processing can be concentrated mainly in the common visible areas of the four fisheye cameras, and the fused complete bird's-eye view can be obtained with only a single traversal of all pixel points. While ensuring the video display frame rate, the problem of overly obvious splicing seams is further eliminated, and the exposure and perspective differences of adjacent cameras are buffered in the common image areas, so that objects on the ground transition gradually across the splicing positions.
Based on the foregoing embodiment, optionally, in the panoramic all-around image splicing method, the determining a mapping relation of pixel points in the fisheye image with respect to an image having a common visible area with the fisheye image includes:
determining whether the image coordinates of any pixel point in the fisheye image are in the common visible area;
if the pixel point belongs to the common visible area, obtaining the image coordinates of the pixel point and the corresponding image coordinates in the image that has the common visible area with the fisheye image, to form a mapping relation;
and if the pixel point does not belong to the common visible area, determining that the pixel point is unrelated to the image that has the common visible area with the fisheye image.
Specifically, the mapping relationship actually reflects a process of determining a pixel value of each pixel point in a process of converting the fisheye image into the top view.
For each pixel point, if its image coordinates are not in the common visible area between the fisheye image and the adjacent fisheye image, the pixel point will not fall in the overlapping area after conversion to the bird's-eye view; it therefore corresponds to the mapping relation of only one fisheye image, i.e., when the fisheye image is converted into the bird's-eye view, only the pixel value of the corresponding pixel point in that one fisheye image is used, and the pixel point is unrelated to the adjacent fisheye image.
If the image coordinates of the pixel point are in the common visible area between the fisheye image and the adjacent fisheye image, the pixel point will fall in the overlapping area after conversion to the bird's-eye view; its pixel value is then related to the corresponding pixel points in both adjacent fisheye images, and the image coordinates of the corresponding pixel point in the adjacent fisheye image are obtained to form a mapping relation.
For example, if the bird's-eye view obtained from the four fisheye images is a 280 × 300 image with the top-left pixel at coordinates (0, 0), the pixel at coordinates (2, 0) lies in the overlapping region of the forward and left bird's-eye views; that pixel is then related both to its corresponding point in the left fisheye image and to a corresponding pixel in the forward fisheye image, and a mapping is established between them.
It should be noted that the above construction method of the corresponding mapping relationship is only used as a specific example to explain the embodiment of the present invention, and besides, the mapping relationship may also be reflected in other manners, which is not limited in this embodiment.
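The membership test described above can be sketched as follows. This is one possible realization under stated assumptions: the common visible area is simplified to a rectangular bounding box, and `project_a`/`project_b` are hypothetical placeholders for the calibrated bird's-eye-to-fisheye coordinate mappings.

```python
# Illustrative sketch of mapping-relation construction: each bird's-eye pixel
# records one source coordinate, or two when it lies in the common visible
# area of adjacent cameras. The rectangular overlap box and the projection
# functions are assumed placeholders, not the patent's exact formulation.

def build_mapping(width, height, overlap, project_a, project_b):
    """overlap = (x0, y0, x1, y1): bounding box of the common visible area.
    project_a / project_b map bird's-eye (x, y) to fisheye (u, v)."""
    mapping = {}
    x0, y0, x1, y1 = overlap
    for y in range(height):
        for x in range(width):
            if x0 <= x < x1 and y0 <= y < y1:
                # in the common visible area: related to both fisheye images
                mapping[(x, y)] = (project_a(x, y), project_b(x, y))
            else:
                # outside it: related to one fisheye image only (which camera
                # applies would depend on the pixel's quadrant in practice)
                mapping[(x, y)] = (project_a(x, y),)
    return mapping
```

Because the table is built once, the per-frame cost is concentrated in the overlap entries, consistent with the complexity argument above.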
On the basis of the above embodiment, the embodiment of the present invention determines, through the relationship between a pixel point's image coordinates and the common visible area between the fisheye image and the adjacent fisheye image, whether the pixel point is related to only one fisheye image or to two fisheye images when the fisheye image is converted into the bird's-eye view, and constructs the corresponding mapping relation. The algorithm processing is thus concentrated mainly in the common visible areas of the four fisheye cameras, reducing computational complexity and saving computational resources.
Based on the foregoing embodiment, optionally, in the method for stitching a panoramic all-around image, the converting the fisheye image into a bird's-eye view according to the mapping relationship includes:
for the pixel points belonging to the public visible area, determining the pixel values of the pixel points in the aerial view according to the pixel values of the pixel points corresponding to the image coordinates in the fisheye image, the pixel values of the pixel points corresponding to the image coordinates in the image of the fisheye image in the public visible area and the mapping relation;
and for the pixel points which do not belong to the public visible area, determining the pixel values of the pixel points in the aerial view according to the pixel values of the fisheye images corresponding to the image coordinates of the pixel points.
Specifically, the mapping relationship of the pixels substantially aims to realize the conversion of the pixel values in the process of converting the fisheye image into the bird's-eye view image. That is to say, when the fish-eye image is converted into the bird's eye view, the pixel values of the overlapping area are recalculated, and the conversion and the optimization of the overlapping area are realized in one step.
The mapping relation determines which fisheye pixel values are needed when the fisheye image is converted into the bird's-eye view.
For pixel points belonging to the common visible area, the pixel value of the pixel point in the bird's-eye view is determined from the pixel values of the corresponding pixel points in the fisheye image and its adjacent fisheye image, according to the image coordinates of the pixel point and the mapping relation;
and for pixel points not belonging to the common visible area, the pixel value of the pixel point in the bird's-eye view is determined from the pixel value of the fisheye image corresponding to the pixel point's image coordinates.
Based on the foregoing embodiment, optionally, in the panoramic all-around image splicing method, before determining the pixel value of the pixel point in the bird's-eye view according to the pixel value corresponding to its image coordinates in the fisheye image, the pixel value corresponding to its image coordinates in the image that has the common visible area with the fisheye image, and the mapping relation, the method further includes:
determining a first pixel value and a second pixel value of a first pixel point;
comparing the difference in color space between the first pixel value and the second pixel value;
determining whether the pixel value difference is greater than a preset threshold;
the first pixel point is located in the common visible area; the first pixel value is the pixel value corresponding to the first pixel point in the fisheye image; and the second pixel value is the pixel value corresponding to the first pixel point in the image that has the common visible area with the fisheye image.
Specifically, fig. 8 is a schematic view illustrating a local view-angle imbalance of the bird's-eye view provided by an embodiment of the present invention. As shown in fig. 8, a stereoscopic obstacle may exhibit an unbalanced view angle while crossing the fusion area; in fig. 8, the double image of the traffic cone in the common visible area of the left and front cameras is caused by the view-angle difference. Therefore, a judgment of the difference between overlapping pixels needs to be added to the color-balancing process.
In the embodiment of the invention, to further optimize the fusion of the overlapping areas, different pixel-value determination methods are adopted when the fisheye image is converted into the bird's-eye view, so that ghosting of stereoscopic obstacles is resolved and the view-angle imbalance problem is alleviated.
Before image conversion is carried out, a pixel difference threshold value is set in advance, the difference between the corresponding pixel value in the fisheye image and the corresponding pixel value in the adjacent fisheye image in the color space is compared, and whether the pixel value difference is larger than the preset threshold value or not is determined. And selecting a pixel value fusion method according to the result obtained by comparison.
The color space includes an RGB color space, an HSI color space, an HVS color space, and the like, and may be selected according to actual conditions.
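A minimal sketch of the overlap-pixel difference test follows. The squared-Euclidean RGB distance and the threshold value are illustrative assumptions only; as noted above, the patent leaves the choice of color space (RGB, HSI, HVS) to the implementer.

```python
# Hedged sketch of the preset-threshold comparison: two pixel values for the
# same overlap point are compared in a color space (squared Euclidean RGB
# distance assumed here), selecting the fusion method for that pixel.

def exceeds_threshold(p1, p2, threshold):
    """Return True if the color-space difference between RGB pixels p1 and p2
    (squared Euclidean distance) is greater than the preset threshold."""
    diff = sum((a - b) ** 2 for a, b in zip(p1, p2))
    return diff > threshold
```

A large difference (e.g. a cone visible in one camera but not the other) would trip the threshold and route the pixel to constant-value weighting instead of distance weighting.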
Based on the foregoing embodiment, optionally, in the method for stitching the panoramic all-around image, the determining, according to the pixel value corresponding to the image coordinate of the pixel point in the fisheye image, the pixel value corresponding to the image coordinate of the pixel point in the image in which the fisheye image has the public visible area, and the mapping relationship, the pixel value corresponding to the pixel point in the bird's-eye view image specifically includes:
if the pixel value difference is not larger than the preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a distance weighted fusion method;
and if the pixel value difference is larger than a preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a constant value weighting fusion method.
Specifically, the pixel value fusion mode to be adopted is determined according to the difference, in the color space, between the corresponding pixel value in the fisheye image and the corresponding pixel value in the adjacent fisheye image.
When the pixel value difference is not greater than the preset threshold, the pixel value corresponding to the pixel point in the bird's-eye view is determined by the distance weighted fusion method. In this mode the weight is not fixed but depends on the position of the pixel point, so the image coordinates of the pixel point are effectively reflected in its fused value. This improves the situation in which the two fisheye images show a brightness difference during fusion, reduces the abrupt transition across the fusion area of the two adjacent fisheye images, buffers the exposure difference and the perspective difference, and gives objects on the ground a gradual transition at the stitching position.
When the pixel value difference is greater than the preset threshold, the pixel value corresponding to the pixel point in the bird's-eye view is determined by the constant-value weighted fusion method: an appropriate fixed weight is selected for the weighted average, and more of the pixel value from one fisheye image is retained according to the position of the pixel point. This alleviates the blurring in the picture and makes the image more realistic.
Based on the foregoing embodiment, optionally, in the panoramic all-around image stitching method, if it is determined that the pixel value difference is not greater than the preset threshold, determining the pixel value corresponding to the first pixel point in the bird's-eye view by the distance weighted fusion method includes:
obtaining the distances from the image coordinates of the first pixel point to the two projected boundaries of the common visible area;
and using the distances to the two projected boundaries of the common visible area as weights, performing weighted fusion of the pixel values of the first pixel point, and determining the pixel value corresponding to the first pixel point in the bird's-eye view.
Specifically, fig. 6 is a schematic diagram of stitching the forward bird's-eye view and the left bird's-eye view provided by an embodiment of the present invention. As shown in fig. 6, the solid lines in the figure are the two projected boundaries of the common visible area, the point P has image coordinates (x, y), and the dotted lines are the distances from the point P to the two projected boundaries.
After it is determined that the pixel value difference is not greater than the preset threshold, the distances from the image coordinates of the pixel point to the two projected boundaries of the common visible area are obtained; these distances are then used as weights to perform weighted fusion of the pixel values of the pixel point and determine the pixel value corresponding to the pixel point in the bird's-eye view.
For any point P(x, y) in the complete bird's-eye view, the perspective transformation relation defined by the extrinsic parameters of each camera is:

Pdst(x, y) = Psrc(mapx(x, y), mapy(x, y))

that is, the pixel value corresponding to P is found by mapping its bird's-eye-view coordinates back to the corresponding image coordinates (mapx(x, y), mapy(x, y)) in the original fisheye image Psrc.
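The inverse mapping Pdst(x, y) = Psrc(mapx(x, y), mapy(x, y)) can be sketched as a nearest-neighbor lookup. This is a hypothetical NumPy illustration with toy coordinate maps; a real implementation would use an optimized remap with interpolation and calibration-derived maps.

```python
import numpy as np

# Sketch of the inverse-mapping step: for every bird's-eye pixel, look up
# the source fisheye pixel through precomputed coordinate maps.

def remap_nearest(src, map_x, map_y):
    """Build dst by sampling src at (map_x, map_y), nearest-neighbor."""
    h, w = map_x.shape
    dst = np.zeros((h, w) + src.shape[2:], dtype=src.dtype)
    for y in range(h):
        for x in range(w):
            sx = int(round(map_x[y, x]))
            sy = int(round(map_y[y, x]))
            # Pixels whose source coordinates fall outside the image stay zero.
            if 0 <= sx < src.shape[1] and 0 <= sy < src.shape[0]:
                dst[y, x] = src[sy, sx]
    return dst
```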
Still taking the forward direction and the left side as an example, as shown in fig. 6, the upper boundary of the overlapped area is the top projection boundary l_left of the left camera; the distance from the point P to the straight line l_left is denoted d_left, and the pixel value of the point P in the left fisheye image is denoted Pleft(x, y). Similarly, the lower boundary of the overlapped area is the top projection boundary l_front of the front camera; the distance from P to the straight line l_front is denoted d_front, and the pixel value of the point P in the forward fisheye image is denoted Pfront(x, y).

The pixel value of the pixel point P on the final 360-degree surround-view top view is then the distance-weighted blend

P(x, y) = [d_left / (d_left + d_front)] · Pleft(x, y) + [d_front / (d_left + d_front)] · Pfront(x, y)

so that the weight of each camera's pixel value grows with the distance of P from that camera's projection boundary, and the blend reduces to a single camera's value on each boundary.
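The distance-weighted blend for one overlap pixel can be sketched as follows; the names d_left and d_front follow the description above, and this is a minimal illustration rather than the patented implementation.

```python
# Sketch of distance-weighted fusion for one pixel of the common visible
# area: the weight of each camera's pixel value is proportional to the
# pixel's distance from that camera's projection boundary.

def distance_weighted_blend(p_left, p_front, d_left, d_front):
    """Blend the left and forward fisheye pixel values of the same point."""
    w_left = d_left / (d_left + d_front)
    return tuple(w_left * l + (1.0 - w_left) * f
                 for l, f in zip(p_left, p_front))
```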
It should be noted that, in addition to using the distances from the pixel point to the two projected boundaries as the fusion weights, the distance mentioned in the embodiment of the present invention may also be the distance from the pixel point to the intersection of a horizontal or vertical line drawn through it with each projected boundary; this distance is then used as the weight of the weighted fusion.
Based on the foregoing embodiment, optionally, in the panoramic all-around image stitching method, if it is determined that the pixel value difference is greater than the preset threshold, determining the pixel value corresponding to the first pixel point in the bird's-eye view by the constant-value weighted fusion method includes:
and according to a preset weight value, performing weighted fusion of the pixel values of the first pixel point, and determining the pixel value corresponding to the first pixel point in the bird's-eye view.
Specifically, fig. 9 is a schematic diagram of the constant-value weighted fusion method provided in the embodiment of the present invention; as shown in fig. 9, the pixel point P in the figure has coordinates (x, y). After the pixel value difference is determined to be greater than the preset threshold, weighted fusion of the pixel values of the pixel point is performed according to a preset weight value, and the pixel value corresponding to the pixel point in the bird's-eye view is determined.
Because the weight is fixed, it is necessary to select, according to the coordinates of the pixel point, which fisheye image should contribute more of its pixel value.
For example, the slope of the line connecting the corner (m, n) of the fusion region and a boundary point is obtained from the coordinates of the corner, and the position of the pixel point relative to this oblique line is determined by comparing slopes.
The position relation of the point determined according to the slope is a comparison of the form

(y - n) / (x - m) < k_l

where k_l is the slope of the line connecting the corner (m, n) of the fusion region and the boundary point.
As can be seen from the above formula, if the relationship in the formula is satisfied, the pixel is located above the oblique line, and more of the pixel value of the point P in the forward fisheye image is retained. Correspondingly, for a pixel point below the oblique line, the formula only needs to be rewritten so that more of the pixel value of the point P in the left fisheye image is retained, i.e. the weight k is given to the pixel value of the left fisheye image.
For any point P(x, y), the pixel value in the final 360-degree surround view is

P(x, y) = k · Pfront(x, y) + (1 - k) · Pleft(x, y), if P is above the oblique line;

P(x, y) = k · Pleft(x, y) + (1 - k) · Pfront(x, y), if P is below the oblique line;

wherein Pleft(x, y) is the pixel value of the pixel point P in the left fisheye image, and Pfront(x, y) is the pixel value of the pixel point P in the forward fisheye image.
The weight k takes a value greater than 0.75 and less than 0.9. An image fused with a weight in this range is slightly blurred, which reduces the abrupt transition of a three-dimensional obstacle at the critical position and the ground reflection artifacts, and can effectively improve the image fusion effect.
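The constant-value weighting above can be sketched as follows. The side-of-diagonal test is passed in as a boolean for simplicity, and k = 0.8 is just one value in the stated range.

```python
# Sketch of constant-value weighted fusion with a fixed weight k in
# (0.75, 0.9). Which image is favored depends on the pixel's side of the
# overlap diagonal, as described above.

def constant_weighted_blend(p_left, p_front, above_diagonal, k=0.8):
    """Above the diagonal keep more of the forward image; below, more of the left."""
    if above_diagonal:
        return tuple(k * f + (1.0 - k) * l for l, f in zip(p_left, p_front))
    return tuple(k * l + (1.0 - k) * f for l, f in zip(p_left, p_front))
```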
It should be noted that this embodiment only describes the fusion of the forward fisheye image and the left fisheye image; the fusion of the other fisheye images follows the same basic method and only requires simple adjustment, so it is not described again here.
In addition, in the embodiment of the present invention, both the distance weighted fusion method and the constant-value weighted fusion method apply to pixels in the common visible area. For a pixel not in the common visible area, its pixel value in the corresponding fisheye image only needs to be converted into the corresponding pixel value in the bird's-eye view; the specific conversion method and corresponding formula are not limited in this embodiment.
Fig. 10 is a schematic structural diagram of a panoramic all-around image stitching system according to an embodiment of the present invention, and as shown in fig. 10, the panoramic all-around image stitching system includes:
an obtaining module 101, configured to obtain a fisheye image;
the determining module 102 is configured to determine a mapping relationship between pixel points in the fisheye image and an image in which a common visible region exists with the fisheye image;
and the conversion module 103 is configured to convert the fisheye image into a bird's-eye view according to the mapping relationship, and obtain a panoramic all-around view image according to the bird's-eye view.
Specifically, for convenience of explanation, the embodiment of the present invention takes the stitching and fusion of the fisheye images obtained by the left camera and the front camera as an example; the stitching and fusion of the fisheye images obtained by other adjacent cameras follow the same principle and are not described again here.
The obtaining module 101 is configured to obtain fisheye images through the fisheye cameras; of the four fisheye images, the forward and left fisheye images taken as examples are shown in fig. 2 and fig. 3.
The determining module 102 is configured to determine a mapping relationship of any one pixel point in the fisheye image on an adjacent fisheye image.
For each pixel point, if the pixel point is not within the overlapping area of the bird's-eye view, it corresponds to a mapping relation with only one fisheye image; if it is within the overlapping area, it corresponds to mapping relations with the two adjacent fisheye images.
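The per-pixel decision above can be sketched as a toy illustration. The rectangular overlap bounds and the single-camera fallback rule are placeholders; real overlap regions come from the calibrated common visible areas.

```python
# Toy sketch of the mapping decision described above. The overlap of the
# front and left cameras is modeled as a rectangle with placeholder bounds.

FRONT_LEFT_OVERLAP = (0, 0, 200, 200)  # x_min, y_min, x_max, y_max

def mappings_for_pixel(x, y, overlap=FRONT_LEFT_OVERLAP):
    """Return which fisheye images a bird's-eye pixel maps back to."""
    x_min, y_min, x_max, y_max = overlap
    if x_min <= x < x_max and y_min <= y < y_max:
        return ["front", "left"]       # inside the overlap: two mappings
    return ["front"] if y < y_max else ["left"]  # toy single-camera rule
```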
It should be noted that the size of the overlapping area range is related to the size of the common visible area of two adjacent fisheye cameras set in advance, and may be specifically adjusted according to the actual situation, which is not limited in the embodiment of the present invention.
The conversion module 103 is configured to convert the fisheye image into the bird's eye view according to the mapping relationship of all the pixel points in the fisheye image acquired by the determination module 102.
It should be noted that, in the embodiment of the present invention, fig. 4 to fig. 6 serve only as auxiliary illustrations of the solution; in practice the solution does not need to convert the four fisheye images into four separate bird's-eye views and then stitch them. The four fisheye images are directly converted into one complete bird's-eye view according to the mapping relation corresponding to each pixel point, as shown in fig. 7, which is the complete bird's-eye view provided by the embodiment of the invention.
According to the panoramic all-around image stitching system provided by the embodiment of the present invention, by determining the mapping relation of each pixel point in the image, the algorithm processing can be concentrated mainly in the common visible areas of the four fisheye cameras, and the integrated complete bird's-eye view can be obtained by traversing all the pixel points only once. While the video display frame rate is guaranteed, the problem of overly obvious stitching seams is further eliminated, and the exposure and perspective differences of adjacent cameras are buffered in the common image areas, so that objects on the ground transition gradually at the stitching positions.
It should be noted that, the panoramic all-around image stitching system provided in the embodiment of the present invention is used for executing the panoramic all-around image stitching method, and a specific implementation manner thereof is consistent with the method implementation manner, and is not described herein again.
Fig. 11 is a schematic entity structure diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 11, the electronic device may include: a processor (processor) 111, a communication interface (communication interface) 112, a memory (memory) 113 and a communication bus (bus) 114, wherein the processor 111, the communication interface 112 and the memory 113 communicate with each other through the communication bus 114. The processor 111 may call logic instructions in the memory 113 to execute the above panoramic all-around image stitching method, including: obtaining a fisheye image; determining the mapping relation of pixel points in the fisheye image on the image having the common visible area with the fisheye image; and converting the fisheye image into a bird's-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird's-eye view.
In addition, the logic instructions in the memory 113 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
In another aspect, an embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a computer, the computer is capable of executing the panoramic all-around image stitching method provided by the above-mentioned method embodiments, including: obtaining a fisheye image; determining the mapping relation of pixel points in the fisheye image on the image with the common visual area in the fisheye image; and converting the fisheye image into a bird's-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird's-eye view.
In still another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to perform the method for stitching a panoramic annular image provided in the foregoing embodiments when executed by a processor, and the method includes: obtaining a fisheye image; determining the mapping relation of pixel points in the fisheye image on the image with the common visual area in the fisheye image; and converting the fisheye image into a bird's-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird's-eye view.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A panoramic all-round view image splicing method is characterized by comprising the following steps:
obtaining a fisheye image;
determining the mapping relation of pixel points in the fisheye image on the image with the common visual area in the fisheye image;
and converting the fisheye image into a bird's-eye view according to the mapping relation, and obtaining a panoramic all-around view image according to the bird's-eye view.
2. The method of claim 1, wherein the determining the mapping relationship of pixel points in the fish-eye image on the image with the common visual area with the fish-eye image comprises:
determining whether the image coordinates of any pixel point in the fisheye image are in the public visible area;
if the pixel point belongs to the common visible area, obtaining the image coordinates of the pixel point and the image coordinates of the corresponding pixel point in the image having the common visible area with the fisheye image to form a mapping relation;
and if the pixel point does not belong to the common visual area, determining that the pixel point is not related to the image of the fisheye image with the common visual area.
3. The method of claim 2, wherein the converting the fisheye image into a bird's eye view according to the mapping relationship comprises:
for the pixel points belonging to the public visible area, determining the pixel values corresponding to the pixel points in the aerial view according to the pixel values corresponding to the image coordinates of the pixel points in the fisheye image, the pixel values corresponding to the image coordinates of the pixel points in the image of the public visible area existing in the fisheye image, and the mapping relation;
and for the pixel points which do not belong to the public visible area, determining the pixel values of the pixel points in the aerial view according to the pixel values of the fisheye images corresponding to the image coordinates of the pixel points.
4. The method according to claim 3, wherein before the step of determining pixel values corresponding to pixel points in the bird's eye view image according to pixel values corresponding to image coordinates of pixel points in the fish-eye image, pixel values corresponding to image coordinates of pixel points in the image in the common visual area with the fish-eye image, and the mapping relationship, the method further comprises:
determining a first pixel value and a second pixel value of a first pixel point;
comparing the difference in color space between the first pixel value and the second pixel value;
determining whether the pixel value difference is greater than a preset threshold;
the first pixel point is located in the common visible area; the first pixel value is the pixel value corresponding to the first pixel point in the fisheye image; and the second pixel value is the pixel value corresponding to the first pixel point in the image having the common visible area with the fisheye image.
5. The method according to claim 4, wherein the determining, for the pixel points belonging to the public visible region, the pixel values corresponding to the pixel points in the bird's eye view image according to the pixel values corresponding to the image coordinates of the pixel points in the fish-eye image, the pixel values corresponding to the image coordinates of the pixel points in the image in the public visible region corresponding to the fish-eye image, and the mapping relationship specifically comprises:
if the pixel value difference is not larger than the preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a distance weighted fusion method;
and if the pixel value difference is larger than a preset threshold value, determining the corresponding pixel value of the first pixel point in the aerial view by using a constant value weighting fusion method.
6. The method of claim 5,
if the pixel value difference is not larger than the preset threshold value, determining the pixel value corresponding to the first pixel point in the aerial view by using a distance weighted fusion method, including:
obtaining the distances from the image coordinates of the first pixel point to the two projected boundaries of the common visible area;
and using the distances to the two projected boundaries of the common visible area as weights, performing weighted fusion of the pixel values of the first pixel point, and determining the pixel value corresponding to the first pixel point in the bird's-eye view.
7. The method of claim 5, wherein if it is determined that the pixel value difference is greater than a predetermined threshold, determining the corresponding pixel value of the first pixel point in the bird's eye view by using a constant-value weighted fusion method comprises:
and according to a preset weight value, realizing the weighted fusion of the pixel values of the first pixel point, and determining the corresponding pixel value of the pixel point in the aerial view.
8. A panoramic surround view image stitching system, comprising:
the acquisition module is used for acquiring a fisheye image;
the determining module is used for determining the mapping relation of pixel points in the fisheye image on the image with the common visual area with the fisheye image;
and the conversion module is used for converting the fisheye image into a bird's-eye view according to the mapping relation and obtaining a panoramic all-around view image according to the bird's-eye view.
9. An electronic device, comprising a memory and a processor, wherein the processor and the memory communicate with each other via a bus; the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the panoramic annular view image splicing method according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the panoramic annular image stitching method according to any one of claims 1 to 7.
CN202011265428.0A 2020-11-12 2020-11-12 Panoramic all-around image splicing method and system, electronic equipment and storage medium Pending CN112435161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011265428.0A CN112435161A (en) 2020-11-12 2020-11-12 Panoramic all-around image splicing method and system, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112435161A true CN112435161A (en) 2021-03-02

Family

ID=74701052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011265428.0A Pending CN112435161A (en) 2020-11-12 2020-11-12 Panoramic all-around image splicing method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112435161A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140010411A1 (en) * 2012-07-03 2014-01-09 Li-You Hsu Automatic airview correction method
CN104851076A (en) * 2015-05-27 2015-08-19 武汉理工大学 Panoramic 360-degree-view parking auxiliary system for commercial vehicle and pick-up head installation method
CN106952311A (en) * 2017-03-02 2017-07-14 山东省科学院自动化研究所 Auxiliary parking system and method based on panoramic mosaic data mapping tables
CN107424120A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of image split-joint method in panoramic looking-around system
CN111369439A (en) * 2020-02-29 2020-07-03 华南理工大学 Panoramic view image real-time splicing method for automatic parking stall identification based on panoramic view


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390222A (en) * 2022-03-24 2022-04-22 北京唱吧科技股份有限公司 Switching method and device suitable for 180-degree panoramic video and storage medium
CN114390222B (en) * 2022-03-24 2022-07-08 北京唱吧科技股份有限公司 Switching method and device suitable for 180-degree panoramic video and storage medium
CN114863375A (en) * 2022-06-10 2022-08-05 无锡雪浪数制科技有限公司 Gas station vehicle multi-view positioning method based on 3D visual recognition
CN114863375B (en) * 2022-06-10 2023-06-30 无锡雪浪数制科技有限公司 Multi-view positioning method for gas station vehicles based on 3D visual recognition

Similar Documents

Publication Publication Date Title
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
CN104851076B (en) Panoramic looking-around parking assisting system and camera installation method for commercial car
US10007853B2 (en) Image generation device for monitoring surroundings of vehicle
US10187590B2 (en) Multi-camera vehicle vision system with image gap fill
CN111179168B (en) Vehicle-mounted 360-degree panoramic all-around monitoring system and method
CN108364263B (en) Vehicle-mounted image processing method for standard definition input and high definition output
CN111080557B (en) Luminance equalization processing method and related device
CN109754363B (en) Around-the-eye image synthesis method and device based on fish eye camera
JP2002359838A (en) Device for supporting driving
CN112070886B (en) Image monitoring method and related equipment for mining dump truck
CN112435161A (en) Panoramic all-around image splicing method and system, electronic equipment and storage medium
CN113421183B (en) Method, device and equipment for generating vehicle panoramic view and storage medium
JP2010018102A (en) Driving support device
CN113525234A (en) Auxiliary driving system device
KR101657673B1 (en) Apparatus and method for generating panorama view
KR102235951B1 (en) Imaging Apparatus and method for Automobile
CN111860632B (en) Multipath image consistency fusion method
CN112800989A (en) Method and device for detecting zebra crossing
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN114742726A (en) Blind area detection method and device, electronic equipment and storage medium
CN113516733B (en) Method and system for filling blind areas at bottom of vehicle
KR101241012B1 (en) Method for improving images of around view monitor system
CN204652529U (en) Vehicle Surround Video treatment system
EP4336453A1 (en) Method and apparatus for real-time image-based lighting of 3d surround view
JP2024056563A (en) Display processing device, display processing method, and operation program for display processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination