CN113240582A - Image splicing method and device

Info

Publication number
CN113240582A
Authority
CN
China
Prior art keywords
image
shooting
parameter
acquisition module
compensation parameter
Prior art date
Legal status
Granted
Application number
CN202110391998.2A
Other languages
Chinese (zh)
Other versions
CN113240582B (en)
Inventor
易荣刚
李俊英
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110391998.2A
Publication of CN113240582A
Application granted
Publication of CN113240582B
Active legal status
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The embodiments of the invention disclose an image splicing method and device for improving the overall visual effect of spliced images. The method comprises the following steps: determining a compensation parameter according to a first image and a second image, wherein the first image is acquired by a first image acquisition module of an image splicing device, the second image is acquired by a second image acquisition module of the device, and the two images are obtained by shooting the same object or the same shooting scene from different angles; determining a first shooting parameter of the first image acquisition module; and adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter. As a result, the image parameters of a third image, shot by the first image acquisition module with the first shooting parameter, and of a fourth image, shot by the second image acquisition module with the adjusted second shooting parameter, are the same or close to each other.

Description

Image splicing method and device
Technical Field
The invention relates to the technical field of image videos, in particular to an image splicing method and device.
Background
With the development of image and video technology, image splicing has found increasingly wide application across many fields. Acquiring and stitching a panoramic image is one of the most important techniques in this field: multiple images captured in the same scene by different image acquisition devices are stitched seamlessly into one panoramic image. Stitching, however, depends on how well the images match in parameters such as spatial position, brightness, and color; if these image parameters differ significantly between images, the quality of the resulting panoramic image suffers.
Disclosure of Invention
The embodiments of the invention provide an image splicing method and device that solve the problem of inconsistent brightness and color in the non-transition areas of images captured by different devices in the same scene, so that the spliced image has a better visual effect.
In a first aspect, an embodiment of the present invention provides an image stitching method, which is applied to an image stitching device, where the image stitching device includes a first image acquisition module and a second image acquisition module, and includes:
determining a compensation parameter according to a first image and a second image, wherein the first image is acquired by the first image acquisition module, the second image is acquired by the second image acquisition module, and the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles;
determining a first shooting parameter of the first image acquisition module;
adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter;
the first image acquisition module shoots a third image using the first shooting parameter, and the second image acquisition module shoots a fourth image using the adjusted second shooting parameter, wherein the image parameters of the third image and the fourth image are the same or close to each other.
Optionally, the compensation parameter includes a first compensation parameter and/or a second compensation parameter, where the first compensation parameter is a luminance ratio of the first image and the second image; the second compensation parameter is a difference in white balance gain of the first image and the second image.
Optionally, when the compensation parameter includes the first compensation parameter, adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the first compensation parameter, where brightness of a third image obtained by the first image acquisition module through shooting with the first shooting parameter is the same as or close to brightness of a fourth image obtained by the second image acquisition module through shooting with the adjusted second shooting parameter;
and when the compensation parameters comprise the second compensation parameters, adjusting second shooting parameters of the second image acquisition module according to the first shooting parameters and the second compensation parameters, wherein the colors of a third image shot by the first image acquisition module by using the first shooting parameters and a fourth image shot by the second image acquisition module by using the adjusted second shooting parameters are the same or close to each other.
Optionally, when the compensation parameter includes a first compensation parameter, the first shooting parameter includes at least one of exposure, gain, and aperture value, and the second shooting parameter includes at least one of exposure, gain, and aperture value;
when the compensation parameter includes a second compensation parameter, the first photographing parameter includes a white balance gain, and the second photographing parameter includes a white balance gain.
Optionally, the first compensation parameter is a luminance ratio of the first image and the second image, and includes: the first compensation parameter is a luminance ratio of an overlapping area on the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image, including: the second compensation parameter is a difference in white balance gain of an overlapping area on the first image and the second image.
Optionally, the second compensation parameter includes a white balance gain compensation parameter of the R component and/or a white balance gain compensation parameter of the B component; wherein the white balance gain compensation parameter of the R component satisfies:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
wherein delta_Rgain indicates the white balance gain compensation parameter of the R component, ROI_Rgain1 indicates the white balance gain of the overlapping area on the first image, and ROI_Rgain2 indicates the white balance gain of the overlapping area on the second image;
the white balance gain compensation parameter of the B component satisfies:
delta_Bgain=ROI_Bgain2-ROI_Bgain1
wherein delta_Bgain indicates the white balance gain compensation parameter of the B component, ROI_Bgain1 indicates the white balance gain of the overlapping area on the first image, and ROI_Bgain2 indicates the white balance gain of the overlapping area on the second image.
In a second aspect, an embodiment of the present invention provides an image stitching apparatus, including:
the first image acquisition module is used for acquiring a first image;
the second image acquisition module is used for acquiring a second image; the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles;
the processing module is used for determining a compensation parameter according to the first image and the second image;
the processing module is further used for determining a first shooting parameter of the first image acquisition module, and for adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter;
the first image acquisition module uses the first shooting parameters to obtain a third image, and the second image acquisition module uses the adjusted second shooting parameters to obtain a fourth image.
Optionally, the compensation parameter includes a first compensation parameter and/or a second compensation parameter, where the first compensation parameter is a luminance ratio of the first image and the second image; the second compensation parameter is a difference in white balance gain of the first image and the second image.
Optionally, when the compensation parameter includes the first compensation parameter, the processing module is specifically configured to: adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the first compensation parameter, wherein the brightness of a third image obtained by the first image acquisition module through shooting by using the first shooting parameter is the same as or close to the brightness of a fourth image obtained by the second image acquisition module through shooting by using the adjusted second shooting parameter;
when the compensation parameter includes the second compensation parameter, the processing module is specifically configured to: and adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the second compensation parameter, wherein the color of a third image obtained by shooting by using the first shooting parameter by the first image acquisition module is the same as or close to that of a fourth image obtained by shooting by using the adjusted second shooting parameter by the second image acquisition module.
Optionally, when the compensation parameter includes a first compensation parameter, the first shooting parameter includes at least one of exposure, gain, and aperture value, and the second shooting parameter includes at least one of exposure, gain, and aperture value;
when the compensation parameter includes a second compensation parameter, the first photographing parameter includes a white balance gain, and the second photographing parameter includes a white balance gain.
Optionally, the first compensation parameter is a luminance ratio of the first image and the second image, and includes: the first compensation parameter is a luminance ratio of an overlapping area on the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image, including: the second compensation parameter is a difference in white balance gain of an overlapping area on the first image and the second image.
Optionally, the second compensation parameter includes a white balance gain compensation parameter of an R component and/or a white balance gain compensation parameter of a B component, where the white balance gain compensation parameter of the R component satisfies:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
wherein delta_Rgain indicates the white balance gain compensation parameter of the R component, ROI_Rgain1 indicates the white balance gain of the overlapping area on the first image, and ROI_Rgain2 indicates the white balance gain of the overlapping area on the second image;
the white balance gain compensation parameter of the B component satisfies:
delta_Bgain=ROI_Bgain2-ROI_Bgain1
wherein delta_Bgain indicates the white balance gain compensation parameter of the B component, ROI_Bgain1 indicates the white balance gain of the overlapping area on the first image, and ROI_Bgain2 indicates the white balance gain of the overlapping area on the second image.
In a third aspect, an embodiment of the present invention provides an image stitching apparatus, where the image stitching apparatus includes a memory and a processor, where the memory stores computer instructions, and when the computer instructions are executed on the processor, the processor is caused to execute the method provided in the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the method as provided in the first aspect above.
In a fifth aspect, embodiments of the present invention provide a computer program product, which when run on a computer causes the computer to perform the method as provided in the first aspect above.
In the embodiment of the application, the image stitching device comprises a first image acquisition module and a second image acquisition module. The image splicing device determines a compensation parameter according to a first image and a second image, wherein the first image is acquired by the first image acquisition module, the second image is acquired by the second image acquisition module, and the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles; determining a first shooting parameter of the first image acquisition module; adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter; the first image acquisition module uses the first shooting parameters to obtain a third image, and the second image acquisition module uses the adjusted second shooting parameters to obtain a fourth image. In this way, the image parameters of the images (i.e. the third image and the fourth image) acquired by the two image acquisition modules are the same or close to each other, so that the quality of the image obtained by splicing the images acquired by the two image acquisition modules is high.
Drawings
Fig. 1 is a scene schematic diagram of an image stitching method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image stitching method according to an embodiment of the present invention;
fig. 3 is a diagram illustrating an embodiment of an image stitching method according to the present invention;
fig. 4 is a diagram illustrating an embodiment of an image stitching method according to the present invention;
fig. 5 is a detailed schematic diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another image stitching device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the technical solutions are described below completely with reference to the accompanying drawings. The described embodiments are some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art, without inventive work, based on the embodiments described in the present application fall within the scope of protection of the invention.
As described above, acquiring a panoramic image requires stitching multiple images captured by different image acquisition devices. Current image stitching technology does not account for the differences in image parameters between the images being stitched, which degrades the visual effect of the stitched result.
In view of this, the embodiment of the present invention provides an image stitching method, which may be applied to an image stitching device including more than one image acquisition module, and specifically, the image stitching device includes a first image acquisition module and a second image acquisition module. The image splicing device determines a compensation parameter according to a first image and a second image, wherein the first image is acquired by the first image acquisition module, the second image is acquired by the second image acquisition module, and the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles; determining a first shooting parameter of the first image acquisition module; adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter; the first image acquisition module uses the first shooting parameters to obtain a third image, and the second image acquisition module uses the adjusted second shooting parameters to obtain a fourth image. In this way, the image parameters of the images (i.e. the third image and the fourth image) acquired by the two image acquisition modules are the same or close to each other, so that the quality of the image obtained by splicing the images acquired by the two image acquisition modules is high.
The technical scheme provided by the embodiment of the invention is described in the following with the accompanying drawings of the specification.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. As shown in fig. 1, the image stitching device includes a first image capturing module and a second image capturing module, and the two image capturing modules respectively capture the same object or the same scene from different angles to obtain a first image and a second image. Optionally, there may be an overlapping region of the first image and the second image.
Compared with either the first image or the second image alone, the image spliced from the two contains more of the scene, or a larger scene; by stitching the images acquired by multiple image acquisition modules, the shooting angle of view is expanded to obtain a panoramic image.
In the embodiment of the application, the image stitching device determines a compensation parameter according to the first image and the second image; and adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter. And enabling the image parameters of a third image obtained by shooting by the first image acquisition module by using the first shooting parameters and the image parameters of a fourth image obtained by shooting by the second image acquisition module by using the adjusted second shooting parameters to be the same or close to each other. In this way, the image parameters of the images acquired by the two image acquisition modules (i.e., the third image and the fourth image) are the same or close to each other, and the quality of the image obtained by splicing the images acquired by the two image acquisition modules is higher.
It can be understood that, in addition to the above application scenarios, the image stitching device of the present application may further include a greater number of image acquisition modules, and accordingly, the image stitching device may stitch a greater number of images to obtain a panoramic image. It should be noted that the above-mentioned application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the present invention is not limited in any way in this respect. Rather, embodiments of the present invention may be applied in any scenario where applicable.
Referring to fig. 2, a schematic flowchart of an image stitching method provided by an embodiment of the present invention, which may be applied to the scene shown in fig. 1, specifically, the method includes the following steps:
step 201: the first image acquisition module acquires a first image, and the second image acquisition module acquires a second image. As shown in fig. 1, the first image acquisition module and the second image acquisition module are located on the same image stitching device, and the two image acquisition modules perform image acquisition simultaneously, where the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles;
for example, the first image capturing module may be a camera, a lens group, a camera group, or the like, and the second image capturing module may be a camera, a lens group, a camera group, or the like. The first image acquisition module and the second image acquisition module are arranged at different positions on the same image splicing device. The first image acquisition module and the second image acquisition module may be the same or different.
Step 202: and determining a compensation parameter according to the first image and the second image.
In the embodiment of the present application, the compensation parameter includes a first compensation parameter that is a luminance ratio of the first image and the second image and/or a second compensation parameter that is a difference in white balance gain of the first image and the second image.
The process of determining the first compensation parameter and the second compensation parameter from the first image and the second image, respectively, is described below.
In the first case, the compensation parameter includes the first compensation parameter, i.e., the luminance ratio, and includes the following steps 1 to 3.
Step 1, determining an overlapping area of the first image and the second image.
The method comprises the following steps. First, with the first image as the base, perform image matching between the first image and the second image to obtain a geometric correction parameter, the tform matrix. The matching may use techniques such as feature points, optical flow, or CNNs, but is not limited to these. The correction parameter is a 3×3 matrix; its inverse is the tform_inv matrix.
Second, use the forward mapping from the first image to the second image, i.e. the tform matrix, to compute the exact coordinate range of the first image within the second image. The four original endpoint coordinates of the first image, (0,0), (w1-1, 0), (0, h1-1), (w1-1, h1-1), are mapped forward to obtain the mapped endpoint coordinates, and the result is intersected with the extent of the second image (endpoints (0,0), (w2-1, 0), (0, h2-1), (w2-1, h2-1)); the intersection is overlapping area 2 of the first image and the second image. The forward mapping is calculated as follows:
With the tform matrix written as

tform = [ t11 t12 t13 ; t21 t22 t23 ; t31 t32 t33 ]

the forward mapping of a point (x, y) is:

w = x·t11 + y·t21 + t31
u = x·t12 + y·t22 + t32
z = x·t13 + y·t23 + t33
(tx, ty) = (w/z, u/z)

wherein w1 is the width of the first image, h1 its height, w2 the width of the second image, and h2 its height; x and y are taken from the four original endpoints; the t matrix is the tform matrix; and (tx, ty) are the mapped endpoint coordinates (bx0, by0), (bx1, by1), (bx2, by2), (bx3, by3).
Third, use the inverse mapping from the second image to the first image, i.e. the tform_inv matrix, to compute the exact coordinate range of the second image within the corrected first image. The four original endpoint coordinates of the second image, (0,0), (w2-1, 0), (0, h2-1), (w2-1, h2-1), are mapped inversely to obtain the mapped endpoint coordinates (bx0', by0'), (bx1', by1'), (bx2', by2'), (bx3', by3'), and the result is intersected with the extent of the first image (endpoints (0,0), (w1-1, 0), (0, h1-1), (w1-1, h1-1)); the intersection is overlapping area 1, the overlap of the first image with the second image. The calculation is consistent with the forward mapping and is not repeated here.
It should be noted here that the overlap region 2 and the overlap region 1 refer to the coordinate ranges of the overlap region on the respective images of the first image and the second image, respectively, that is, the overlap region 1 and the overlap region 2 are different coordinate ranges of the same overlap region on different images.
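The endpoint mapping and intersection described in steps two and three can be sketched as follows. This is a minimal NumPy illustration; the function names and the row-vector convention [x, y, 1]·tform are assumptions inferred from the formulas in the text, not code from the patent:

```python
import numpy as np

def map_corners(T, w, h):
    """Map the four image corners through a 3x3 tform matrix.

    Row-vector convention from the text:
    [w, u, z] = [x, y, 1] @ T, then (tx, ty) = (w/z, u/z).
    """
    corners = np.array([[0, 0, 1],
                        [w - 1, 0, 1],
                        [0, h - 1, 1],
                        [w - 1, h - 1, 1]], dtype=float)
    m = corners @ T
    return m[:, 0] / m[:, 2], m[:, 1] / m[:, 2]  # tx, ty

def overlap_rect(tx, ty, w, h):
    """Intersect the mapped corners' bounding box with a w x h image extent.

    Returns (x0, y0, x1, y1), or None when the images do not overlap.
    """
    x0, x1 = max(0, tx.min()), min(w - 1, tx.max())
    y0, y1 = max(0, ty.min()), min(h - 1, ty.max())
    if x0 > x1 or y0 > y1:
        return None
    return x0, y0, x1, y1
```

Running `map_corners` with tform gives overlapping area 2 (on the second image); running it with the inverse matrix gives overlapping area 1 (on the first image).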
Step 2: separately collect the RAW (original image file) domain statistical information of the overlapping region on the first image and on the second image. The RAW domain statistical information comprises the brightness value of each pixel in the overlapping region and/or the sum of the brightness values of all pixels in the overlapping region.
and 3, determining the brightness ratio of the overlapping area of the first image and the second image according to the RAW domain statistical information of the overlapping area of the first image and the second image, wherein the brightness ratio is the first compensation parameter.
Assume the sum of the pixel brightness values of the first image in the overlapping region is sum_Y1, and the corresponding sum for the second image is sum_Y2. The luminance ratio of the first image to the second image in the overlapping area is then sum_Y1/sum_Y2, i.e., the first compensation parameter is sum_Y1/sum_Y2.
Note that step 1 above is optional; if it is not executed, the luminance ratio is taken over the whole first image and the whole second image.
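Steps 2 and 3 of this first case can be sketched as below; this is a hypothetical helper rather than the patent's code, with the ROI tuple format matching the overlap rectangles from step 1:

```python
import numpy as np

def luminance_ratio(img1, img2, roi1=None, roi2=None):
    """First compensation parameter: sum_Y1 / sum_Y2 over the overlap ROIs.

    img1, img2: 2-D luminance (Y) arrays from the RAW domain statistics.
    roi: (x0, y0, x1, y1), inclusive; None falls back to the whole image,
    as the text allows when the overlap region is not computed.
    """
    def region_sum(img, roi):
        if roi is None:
            return float(img.sum())
        x0, y0, x1, y1 = roi
        return float(img[y0:y1 + 1, x0:x1 + 1].sum())
    return region_sum(img1, roi1) / region_sum(img2, roi2)
```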
In the second case, the compensation parameter includes the second compensation parameter, i.e., the white balance difference, and the determination process of the second compensation parameter includes the following steps 1 to 3.
Step 1, determining an overlapping area of the first image and the second image.
For step 1, please refer to the implementation manner of step 1 in the first case, which is not described herein again.
Step 2: collect the RAW domain statistical information of the overlapping region on the first image and the second image, comprising the white balance gain of each pixel in the overlapping region and/or the sum of the white balance gains of all pixels in the overlapping region.
Step 3: determine the difference of the white balance gains of the two images in the overlapping area, i.e. the second compensation parameter, according to the RAW domain statistical information of the overlapping region of the first image and the second image.
Assume the white balance gain of the overlapping region on the first image comprises ROI_Rgain1 and/or ROI_Bgain1, wherein ROI_Rgain1 is the white balance gain of the R component of the overlapping region on the first image, and ROI_Bgain1 is the white balance gain of the B component of the overlapping region on the first image.
Similarly, assume the white balance gain of the overlapping region on the second image comprises ROI_Rgain2 and ROI_Bgain2, wherein ROI_Rgain2 is the white balance gain of the R component of the overlapping region on the second image, and ROI_Bgain2 is the white balance gain of the B component of the overlapping region on the second image.
The white balance gain difference of the overlapping region includes delta_Rgain and delta_Bgain, where delta_Rgain indicates the white balance gain difference of the R component and delta_Bgain indicates the white balance gain difference of the B component, given by:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
delta_Bgain=ROI_Bgain2-ROI_Bgain1
and the delta _ Rgain and the delta _ Bgain are the second compensation parameters.
After the first and second compensation parameters are determined, step 203 may be performed.
Step 203: and determining a first shooting parameter of the first image acquisition module. The first photographing parameter includes at least one of exposure, gain, aperture value, or white balance gain.
Before step 203, the method further comprises: judging, from the collected RAW domain information of the first image, whether the brightness of the first image is suitable (for example, within a preset brightness range). If it is suitable, the exposure, gain and aperture value in the first shooting parameter are the current exposure, gain and aperture value of the first image acquisition module. If it is not suitable, the RAW domain statistical information is input into an Automatic Exposure (AE) algorithm module, which adaptively adjusts the image brightness to a suitable level and outputs the corresponding exposure, gain and aperture value; the exposure, gain and aperture value in the first shooting parameter are then those output by the AE algorithm.
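The brightness check described above might look like the following sketch; the target range, the parameter dictionary, and the ae_adjust callable are all assumptions standing in for the patent's AE algorithm module:

```python
def first_shooting_params(mean_luma, current, ae_adjust, lo=100, hi=140):
    """Pick the first shooting parameters (exposure, gain, aperture).

    mean_luma: brightness statistic from the first image's RAW domain info.
    current:   dict of the module's current exposure/gain/aperture values.
    ae_adjust: callable standing in for the AE algorithm module; returns new
               parameters when brightness is outside the target range.
    lo, hi:    ASSUMED preset brightness range (the patent does not give one).
    """
    if lo <= mean_luma <= hi:  # brightness already suitable: keep current values
        return current
    return ae_adjust(mean_luma, current)  # otherwise let the AE module decide
```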
Step 204: adjust a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter.
When the compensation parameter includes the first compensation parameter, the second shooting parameter of the second image acquisition module is adjusted according to the first shooting parameter and the first compensation parameter, so that the brightness of a third image shot by the first image acquisition module using the first shooting parameter is the same as or close to the brightness of a fourth image shot by the second image acquisition module using the adjusted second shooting parameter.
When the compensation parameter includes the second compensation parameter, the second shooting parameter of the second image acquisition module is adjusted according to the first shooting parameter and the second compensation parameter, so that the color of a third image shot by the first image acquisition module using the first shooting parameter is the same as or close to the color of a fourth image shot by the second image acquisition module using the adjusted second shooting parameter.
Illustratively, the second shooting parameters include exposure, gain, aperture value, and white balance gain.
The exposure, gain, and aperture value are obtained according to the first compensation parameter and the first shooting parameter; specifically, they are dynamically adjusted by an AE adjusting module in the image splicing device.
For example, assume that the first image acquisition module is referred to as Sensor1 and the second image acquisition module is referred to as Sensor2. The AE adjusting module converts the exposure, gain, and aperture value to be configured for Sensor2 from the first compensation parameter and the exposure, gain, and aperture value currently configured for Sensor1 (i.e., the first shooting parameter), and configures the converted values to Sensor2, thereby completing the adjustment of the exposure, gain, and aperture value in the second shooting parameter.
The white balance gain is adjusted according to the second compensation parameter and the first shooting parameter; specifically, the adjustment is performed by an Automatic White Balance (AWB) adjusting module in the image splicing device.
For example, again let the first image acquisition module be Sensor1 and the second image acquisition module be Sensor2. The AWB adjusting module sums the second compensation parameter obtained in step 202 with the current white balance gain of the first image (i.e., the first shooting parameter) to obtain the white balance gain to be configured for Sensor2, and configures it to Sensor2, thereby completing the adjustment of the white balance gain in the second shooting parameter.
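Step 204 can be sketched as follows. This is an illustrative sketch under assumptions: scaling only the exposure by the luminance ratio is one plausible AE conversion, not necessarily the patented one; the parameter names are hypothetical.

```python
def adjust_second_shooting_params(first_params, luminance_ratio,
                                  delta_rgain, delta_bgain):
    """Derive Sensor2's shooting parameters from Sensor1's.

    first_params: Sensor1's {exposure, gain, aperture, rgain, bgain}.
    luminance_ratio: first compensation parameter (sum_Y1 / sum_Y2).
    delta_rgain, delta_bgain: second compensation parameter.
    """
    return {
        # AE adjusting module: convert exposure/gain/aperture. Scaling only
        # the exposure by the luminance ratio is an assumed conversion.
        "exposure": first_params["exposure"] * luminance_ratio,
        "gain": first_params["gain"],
        "aperture": first_params["aperture"],
        # AWB adjusting module: sum the second compensation parameter with
        # Sensor1's current white balance gains.
        "rgain": first_params["rgain"] + delta_rgain,
        "bgain": first_params["bgain"] + delta_bgain,
    }
```

With hypothetical values (exposure 8.0, ratio 1.25, gains 1.8/1.5, deltas 0.15/-0.08), Sensor2 would receive an exposure of 10.0 and gains of roughly 1.95 and 1.42.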
Two examples are presented below.
Example 1: the compensation parameter is the luminance ratio of the first image to the second image.
Referring to fig. 3, the process includes:
Step 301: input the first image and the second image, and acquire the RAW domain statistical information of both images.
Step 302: judge whether the static calibration function is enabled; if so, proceed to step 303; otherwise, proceed to step 310.
The static calibration function is the function of the image stitching device for determining the first compensation parameter. The user may turn the static calibration function on or off; therefore, after the first image and the second image are input, it can be determined whether the static calibration function is on (i.e., enabled).
Step 303: judge whether the calibration preparation work is completed; if so, proceed to step 307; otherwise, proceed to step 304. The calibration preparation work refers to saving the coordinates of the overlapping area of the first image and the second image in the image splicing device: if those coordinates are saved in the device, the calibration preparation work is judged to be completed; if not, it is judged to be incomplete. Alternatively, the calibration preparation work may also refer to starting a related operation module (such as the AE adjustment module and/or the AWB adjustment module) in the image splicing device.
Step 304: judge whether the brightness of the first image is appropriate; if so, proceed to step 305; otherwise, proceed to step 310.
Step 305: judge whether an overlapping area exists; if so, proceed to step 306; otherwise, end. The overlapping area is obtained by the feature matching module; if no overlapping area exists, the first image and the second image have no mutually matched overlapping area, i.e., the two images cannot be stitched.
Step 306: read the coordinates of the overlapping area, completing the calibration preparation work.
Step 307: separately count the RAW domain statistical information of the first image and the second image in the overlapping area.
Step 308: calculate sum_Y1 and sum_Y2, the sums of the luminance values of all pixels in the overlapping area of the two images: sum_Y1 is the sum of the luminance values of all pixels in the overlapping area on the first image, and sum_Y2 is the sum of the luminance values of all pixels in the overlapping area on the second image.
Step 309: determine and save the luminance ratio, i.e., the first compensation parameter, and turn off the static calibration function. The luminance ratio is sum_Y1/sum_Y2.
The above are the steps executed when the brightness of the first image is appropriate. If the brightness of the first image is not appropriate, step 310 is executed directly after step 304.
Step 310: input the RAW domain statistical information of the first image into the AE algorithm module. The AE algorithm module outputs the exposure, gain, and aperture value corresponding to the first image at an appropriate brightness after adaptive adjustment.
Step 311: configure Sensor1 with the current exposure, gain, and aperture value. Sensor1 is the first image acquisition module.
Step 312: calculate the exposure, gain, and aperture value to be configured for Sensor2 from the luminance ratio and the current exposure, gain, and aperture value of Sensor1.
Step 313: configure Sensor2 with the exposure, gain, and aperture value calculated in step 312.
Example 2: the compensation parameter is the white balance gain difference between the first image and the second image. Referring to fig. 4, the process includes:
Step 401: input the first image and the second image, and acquire the RAW domain statistical information of both images.
Step 402: input the RAW domain statistical information of the first image in the overlapping area into the AWB algorithm module. The AWB algorithm module outputs the current white balance gain of the first image, which includes Rgain1 and Bgain1.
Step 403: configure Sensor1 with the white balance gains Rgain1 and Bgain1.
Step 404: judge whether the coordinates of the overlapping area need to be reacquired; if so, proceed to step 405; otherwise, proceed to step 407. This judgment is made according to a flag bit: each time the image splicing device is started, the flag bit is set to 1, indicating that the overlapping area coordinates need to be reacquired; after the device has acquired the corresponding overlapping area coordinates, the flag bit is set to 0, indicating that reacquisition is not required.
Step 405: read the coordinates of the overlapping area of the first image and the second image.
Step 406: set the flag bit for reacquiring the overlapping area coordinates to 0.
Step 407: judge whether the overlapping area coordinates exist; if so, proceed to step 408; otherwise, end. If the coordinates do not exist, the first image and the second image have no mutually matched overlapping area, i.e., the two images cannot be stitched.
Step 408: acquire the white balance gains of the first image in the overlapping area: ROI_Rgain1 and ROI_Bgain1.
Step 409: acquire the white balance gains of the second image in the overlapping area: ROI_Rgain2 and ROI_Bgain2.
Step 410: calculate the white balance gain difference of the first image and the second image in the overlapping area:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
delta_Bgain=ROI_Bgain2-ROI_Bgain1
Step 411: calculate the final white balance gain of the second image:
Rgain2=Rgain1+delta_Rgain
Bgain2=Bgain1+delta_Bgain
Step 412: configure Sensor2 with the white balance gains Rgain2 and Bgain2, i.e., the final white balance gains.
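Steps 408-412 of Example 2 can be sketched as follows (an illustrative sketch with hypothetical gain values, not the patented implementation):

```python
def sensor2_final_white_balance(rgain1, bgain1,
                                roi_rgain1, roi_bgain1,
                                roi_rgain2, roi_bgain2):
    """Derive the white balance gains to configure on Sensor2."""
    # Step 410: white balance gain difference of the overlapping area
    delta_rgain = roi_rgain2 - roi_rgain1
    delta_bgain = roi_bgain2 - roi_bgain1
    # Step 411: final gains = Sensor1's current gains plus the differences
    rgain2 = rgain1 + delta_rgain
    bgain2 = bgain1 + delta_bgain
    return rgain2, bgain2   # step 412: configure these on Sensor2

# Hypothetical values: Sensor1 gains 1.8/1.5, overlap gains 1.7/1.4 and 1.9/1.3
r2, b2 = sensor2_final_white_balance(1.8, 1.5, 1.7, 1.4, 1.9, 1.3)
```

Here r2 is approximately 2.0 and b2 approximately 1.4, the final gains configured to Sensor2.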
Fig. 5 illustrates an embodiment of the present invention and is used below to describe the image stitching method in detail. Referring to fig. 5, the two camera lenses of a binocular camera (i.e., the first image acquisition module and the second image acquisition module of the image splicing device) take pictures to obtain two images, namely the first image and the second image in fig. 5. The RAW domain statistical information of the two images is input into the feature matching module to obtain the overlapping region of the first image and the second image and its coordinates; for the specific method, reference may be made to the corresponding description of the embodiment shown in fig. 2.
In one possible implementation, the AE adjustment module and the AWB adjustment module are in parallel, as shown in fig. 5. In the embodiment of the present invention, the two modules may also be in series, with the AE adjustment module either before or after the AWB adjustment module.
The RAW domain statistical information of the first image and the second image is input into the AE adjusting module, so that the brightness shooting parameters (i.e., exposure, gain, and aperture value) corresponding to the second image can be adjusted according to the image parameters of the two images; this ensures that the two images shot by the two lenses of the binocular camera at the next shooting time have the same brightness.
Likewise, the RAW domain statistical information of the two images is input into the AWB adjustment module, so that the color shooting parameter (i.e., white balance gain) corresponding to the second image can be adjusted according to the image parameters of the two images; this ensures that the two images shot at the next shooting time have the same color.
The shooting parameters adjusted by the AE adjusting module and the AWB adjusting module are configured to the sensors of the two camera lenses (i.e., Sensor1 and Sensor2), so that the images subsequently shot by the two lenses of the binocular camera have the same brightness and color; this facilitates the subsequent stitching of a panoramic image and gives the stitched panoramic image a better overall visual effect.
Based on the same inventive concept, an embodiment of the invention provides an image splicing device that can realize the functions of the above image splicing method. The image splicing device may be a hardware structure, a software module, or a combination of the two. The image splicing device can be realized by a chip system, which may consist of a chip alone or comprise a chip and other discrete devices. Referring to fig. 6, the device includes an acquisition module 601 and a processing module 602, wherein:
the acquisition module 601 comprises a first image acquisition module and a second image acquisition module, wherein the first image acquisition module is used for acquiring the first image, and the second image acquisition module is used for acquiring the second image.
the processing module 602 is configured to determine a compensation parameter according to the first image and the second image; further configured to determine a first shooting parameter of the first image acquisition module; and to adjust a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter.
the first image acquisition module uses the first shooting parameter to shoot a third image and the second image acquisition module uses the adjusted second shooting parameter to shoot a fourth image, wherein the image parameters of the third image obtained by shooting by the first image acquisition module and the fourth image obtained by shooting by the second image acquisition module are the same or close to each other.
In a possible implementation, the compensation parameter determined by the processing module includes a first compensation parameter and/or a second compensation parameter, wherein:
the first compensation parameter is a luminance ratio of the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image.
In a possible implementation manner, when the compensation parameter includes the first compensation parameter, a second shooting parameter of the second image acquisition module is adjusted according to the first shooting parameter and the first compensation parameter, and brightness of a third image obtained by the first image acquisition module through shooting with the first shooting parameter is the same as or close to brightness of a fourth image obtained by the second image acquisition module through shooting with the adjusted second shooting parameter;
and when the compensation parameters comprise the second compensation parameters, adjusting second shooting parameters of the second image acquisition module according to the first shooting parameters and the second compensation parameters, wherein the colors of a third image shot by the first image acquisition module by using the first shooting parameters and a fourth image shot by the second image acquisition module by using the adjusted second shooting parameters are the same or close to each other.
In a possible embodiment, when the compensation parameter includes a first compensation parameter, the first shooting parameter includes at least one of an exposure, a gain, and an aperture value, and the second shooting parameter includes at least one of an exposure, a gain, and an aperture value;
when the compensation parameter includes a second compensation parameter, the first photographing parameter includes a white balance gain, and the second photographing parameter includes a white balance gain.
In a possible implementation, the first compensation parameter is a luminance ratio of the first image and the second image, including: the first compensation parameter is a luminance ratio of an overlapping area on the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image, including: the second compensation parameter is a difference in white balance gain of the first image and the second image in an overlapping region.
In one possible embodiment, the second compensation parameter includes a white balance gain compensation parameter of the R component and a white balance gain compensation parameter of the B component;
the white balance gain compensation parameter of the R component satisfies:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
wherein delta_Rgain indicates the white balance gain compensation parameter of the R component; ROI_Rgain1 indicates the white balance gain of the area on the first image overlapping with the second image; and ROI_Rgain2 indicates the white balance gain of the area on the second image overlapping with the first image;
the white balance gain compensation parameter of the B component satisfies:
delta_Bgain=ROI_Bgain2-ROI_Bgain1
wherein delta_Bgain indicates the white balance gain compensation parameter of the B component; ROI_Bgain1 indicates the white balance gain of the area on the first image overlapping with the second image; and ROI_Bgain2 indicates the white balance gain of the area on the second image overlapping with the first image.
For all relevant details of the steps of the above image stitching method embodiment, reference may be made to the functional description of the corresponding functional modules of the image stitching device in this application, and details are not repeated here.
The division of modules in the embodiments of the present invention is schematic and represents only one way of dividing logical functions; in actual implementation, other divisions are possible. In addition, the functional modules in the embodiments of the present application may each be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module. An integrated module can be realized in the form of hardware or in the form of a software functional module.
Referring to fig. 7, based on the same inventive concept, an embodiment of the present invention provides an image stitching apparatus, which includes at least one processor 701, where the processor 701 is configured to execute a computer program stored in a memory, and implement the steps of the image stitching method shown in fig. 3 provided by the embodiment of the present invention.
Alternatively, the processor 701 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, that may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the image stitching method disclosed by the embodiment of the invention can be directly implemented by a hardware processor, or implemented by combining hardware and software modules in the processor.
Optionally, the image stitching apparatus may further include a memory 702 connected to the at least one processor 701, the memory 702 stores instructions executable by the at least one processor 701, and the at least one processor 701 may execute the steps included in the foregoing image stitching method by executing the instructions stored in the memory 702.
In this embodiment of the present invention, the specific connection medium between the processor 701 and the memory 702 is not limited. The memory 702 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 702 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 702 in embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 701, the code corresponding to the image stitching method described in the foregoing embodiment can be solidified into the chip, so that the chip can execute the steps of the image stitching method at run time; how to program the processor 701 is a technique well known to those skilled in the art and is not described here. The physical device corresponding to the processing module 602 may be the processor 701. The image stitching device can be used to execute the method provided by the embodiment shown in fig. 2; therefore, for the functions realizable by its functional modules, reference may be made to the corresponding description of that embodiment, which is not repeated here.
Optionally, the image splicing apparatus in fig. 7 may further include an image capturing module (including the first image acquisition module and the second image acquisition module), such as cameras. The first and second image acquisition modules can be different cameras, for example, cameras arranged at different positions on the image splicing device. Each camera can be a common camera, a wide-angle camera, or the like, which is not limited in this embodiment of the application.
Based on the same inventive concept, embodiments of the present invention further provide a computer-readable storage medium, which stores computer instructions, and when the computer instructions are executed on a computer, the computer is caused to execute the steps of the image stitching method as described above.
In some possible embodiments, the aspects of the image stitching method provided in the present application may also be implemented in the form of a program product, which includes program code for causing the detection apparatus to perform the steps in the image stitching method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on an electronic device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (13)

1. An image stitching method is applied to an image stitching device, wherein the image stitching device comprises a first image acquisition module and a second image acquisition module, and is characterized by comprising the following steps:
determining a compensation parameter according to a first image and a second image, wherein the first image is acquired by the first image acquisition module, the second image is acquired by the second image acquisition module, and the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles;
determining a first shooting parameter of the first image acquisition module;
adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter; the first image acquisition module uses the first shooting parameter to shoot a third image and the second image acquisition module uses the adjusted second shooting parameter to shoot a fourth image, wherein the image parameters of the third image obtained by shooting by the first image acquisition module and the fourth image obtained by shooting by the second image acquisition module are the same or close to each other.
2. The method of claim 1, wherein the compensation parameter comprises a first compensation parameter and/or a second compensation parameter, wherein the first compensation parameter is a luminance ratio of the first image and the second image; the second compensation parameter is a difference in white balance gain of the first image and the second image.
3. The method of claim 2,
when the compensation parameters comprise the first compensation parameters, adjusting second shooting parameters of the second image acquisition module according to the first shooting parameters and the first compensation parameters, wherein the brightness of a third image obtained by shooting by the first image acquisition module by using the first shooting parameters is the same as or close to that of a fourth image obtained by shooting by the second image acquisition module by using the adjusted second shooting parameters;
and when the compensation parameters comprise the second compensation parameters, adjusting second shooting parameters of the second image acquisition module according to the first shooting parameters and the second compensation parameters, wherein the colors of a third image shot by the first image acquisition module by using the first shooting parameters and a fourth image shot by the second image acquisition module by using the adjusted second shooting parameters are the same or close to each other.
4. The method of claim 3,
when the compensation parameter comprises a first compensation parameter, the first shooting parameter comprises at least one of exposure, gain and aperture value, and the second shooting parameter comprises at least one of exposure, gain and aperture value;
when the compensation parameter includes a second compensation parameter, the first photographing parameter includes a white balance gain, and the second photographing parameter includes a white balance gain.
5. The method of claim 2,
the first compensation parameter is a luminance ratio of the first image and the second image, including: the first compensation parameter is a luminance ratio of an overlapping area on the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image, including: the second compensation parameter is a difference in white balance gain of an overlapping area on the first image and the second image.
6. The method of claim 2,
the second compensation parameter includes a white balance gain compensation parameter of an R component and/or a white balance gain compensation parameter of a B component, wherein the white balance gain compensation parameter of the R component satisfies:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
wherein delta_Rgain is used to indicate the white balance gain compensation parameter of the R component, ROI_Rgain1 is used to indicate the white balance gain of the area on the first image overlapping with the second image, and ROI_Rgain2 is used to indicate the white balance gain of the area on the second image overlapping with the first image;
the white balance gain compensation parameter of the B component satisfies:
delta_Bgain=ROI_Bgain2-ROI_Bgain1
wherein delta_Bgain is used to indicate the white balance gain compensation parameter of the B component, ROI_Bgain1 is used to indicate the white balance gain of the area on the first image overlapping with the second image, and ROI_Bgain2 is used to indicate the white balance gain of the area on the second image overlapping with the first image.
7. An image stitching device, comprising:
the first image acquisition module is used for acquiring a first image;
the second image acquisition module is used for acquiring a second image; the first image and the second image are images obtained by shooting the same object or the same shooting scene from different angles;
the processing module is used for determining a compensation parameter according to the first image and the second image;
the processing module is further used for determining a first shooting parameter of the first image acquisition module; adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the compensation parameter;
the first image acquisition module uses the first shooting parameter to shoot a third image and the second image acquisition module uses the adjusted second shooting parameter to shoot a fourth image, wherein the image parameters of the third image obtained by shooting by the first image acquisition module and the fourth image obtained by shooting by the second image acquisition module are the same or close to each other.
8. The apparatus of claim 7, wherein the compensation parameter comprises a first compensation parameter and/or a second compensation parameter, wherein the first compensation parameter is a luminance ratio of the first image and the second image; the second compensation parameter is a difference in white balance gain of the first image and the second image.
9. The apparatus of claim 8,
when the compensation parameter includes the first compensation parameter, the processing module is specifically configured to: adjusting a second shooting parameter of the second image acquisition module according to the first shooting parameter and the first compensation parameter, wherein the brightness of a third image obtained by the first image acquisition module through shooting by using the first shooting parameter is the same as or close to the brightness of a fourth image obtained by the second image acquisition module through shooting by using the adjusted second shooting parameter;
when the compensation parameter includes the second compensation parameter, the processing module is specifically configured to: adjust a second shooting parameter of the second image acquisition module according to the first shooting parameter and the second compensation parameter, wherein the color of a third image obtained by the first image acquisition module through shooting by using the first shooting parameter is the same as or close to the color of a fourth image obtained by the second image acquisition module through shooting by using the adjusted second shooting parameter.
10. The apparatus of claim 9,
when the compensation parameter comprises a first compensation parameter, the first shooting parameter comprises at least one of exposure, gain and aperture value, and the second shooting parameter comprises at least one of exposure, gain and aperture value;
when the compensation parameter includes a second compensation parameter, the first photographing parameter includes a white balance gain, and the second photographing parameter includes a white balance gain.
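As a worked example of claim 10's parameter types, a sketch of deriving the second module's shooting parameters from the first module's parameters plus the compensation parameters. The scaling and offset rules are assumptions for illustration; the claims only name which parameters are involved:

```python
def adjust_second_params(first, luminance_ratio, delta_rgain, delta_bgain):
    """Derive the second module's shooting parameters from the first
    module's parameters and the compensation parameters (assumed rules)."""
    return {
        # Brightness path: scale exposure by the luminance ratio so the
        # second image's brightness approaches the first image's.
        "exposure": first["exposure"] * luminance_ratio,
        "gain": first["gain"],
        # Color path: offset the white-balance gains by delta_Rgain /
        # delta_Bgain, removing the measured gain difference.
        "rgain": first["rgain"] - delta_rgain,
        "bgain": first["bgain"] - delta_bgain,
    }
```

In practice an implementation would clamp the results to the sensor's valid exposure/gain range, which this sketch omits.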
11. The apparatus of claim 8,
the first compensation parameter is a luminance ratio of the first image and the second image, including: the first compensation parameter is a luminance ratio of an overlapping area on the first image and the second image;
the second compensation parameter is a difference in white balance gain of the first image and the second image, including: the second compensation parameter is a difference in white balance gain of an overlapping area on the first image and the second image.
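The first compensation parameter of claim 11 can be sketched as a ratio of mean luminance over the overlapping regions; the Rec.601 luma weights are an assumed luminance measure, not something the claims prescribe:

```python
import numpy as np

def mean_luma(rgb):
    """Mean luminance of an RGB array (Rec.601 weights, assumed)."""
    return float((0.299 * rgb[..., 0]
                  + 0.587 * rgb[..., 1]
                  + 0.114 * rgb[..., 2]).mean())

def luminance_ratio(overlap1, overlap2):
    """First compensation parameter: luminance ratio of the overlapping
    area on the first image to that on the second image."""
    return mean_luma(overlap1) / mean_luma(overlap2)
```

Restricting the computation to the overlap makes the two measurements come from the same scene content, so the ratio reflects the modules' exposure difference rather than a difference in what they see.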
12. The apparatus of claim 8,
the second compensation parameter includes a white balance gain compensation parameter of an R component and/or a white balance gain compensation parameter of a B component, wherein the white balance gain compensation parameter of the R component satisfies:
delta_Rgain=ROI_Rgain2-ROI_Rgain1
wherein delta_Rgain indicates the white balance gain compensation parameter of the R component, ROI_Rgain1 indicates the white balance gain of the overlapping area on the first image with the second image, and ROI_Rgain2 indicates the white balance gain of the overlapping area on the second image with the first image;
the white balance gain compensation parameter of the B component satisfies:
delta_Bgain=ROI_Bgain2-ROI_Bgain1
wherein delta_Bgain indicates the white balance gain compensation parameter of the B component, ROI_Bgain1 indicates the white balance gain of the overlapping area on the first image with the second image, and ROI_Bgain2 indicates the white balance gain of the overlapping area on the second image with the first image.
13. A computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 6.
CN202110391998.2A 2021-04-13 2021-04-13 Image stitching method and device Active CN113240582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110391998.2A CN113240582B (en) 2021-04-13 2021-04-13 Image stitching method and device


Publications (2)

Publication Number Publication Date
CN113240582A true CN113240582A (en) 2021-08-10
CN113240582B CN113240582B (en) 2023-12-12

Family

ID=77128083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110391998.2A Active CN113240582B (en) 2021-04-13 2021-04-13 Image stitching method and device

Country Status (1)

Country Link
CN (1) CN113240582B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107659769A (en) * 2017-09-07 2018-02-02 维沃移动通信有限公司 A kind of image pickup method, first terminal and second terminal
KR101831429B1 (en) * 2017-11-10 2018-02-22 (주)대지이엔지 Apparatus for air shooting able to get the image of blind spot and to control resolution automatically
CN109598673A (en) * 2017-09-30 2019-04-09 深圳超多维科技有限公司 Image split-joint method, device, terminal and computer readable storage medium
CN110012209A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Panorama image generation method, device, storage medium and electronic equipment
WO2020042858A1 (en) * 2018-08-29 2020-03-05 上海商汤智能科技有限公司 Image stitching method and device, on-board image processing device, electronic apparatus, and storage medium
WO2020093651A1 (en) * 2018-11-09 2020-05-14 浙江宇视科技有限公司 Method and apparatus for automatically detecting and suppressing fringes, electronic device and computer-readable storage medium
CN111182217A (en) * 2020-01-07 2020-05-19 徐梦影 Image white balance processing method and device
WO2021026822A1 (en) * 2019-08-14 2021-02-18 深圳市大疆创新科技有限公司 Image processing method and apparatus, image photographing device, and mobile terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIAO W S: "Real-time spherical panorama image stitching using OpenCL", Proceedings of International Conference on Computer Graphics and Virtual Reality, Las Vegas, pages 113 - 119 *
LI Jiguo; WANG Yue; ZHANG Xinfeng; MA Siwei: "A luminance compensation algorithm for fisheye video panorama stitching", SCIENTIA SINICA Informationis, no. 03, pages 33 - 45 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040179A (en) * 2021-10-20 2022-02-11 重庆紫光华山智安科技有限公司 Image processing method and device
CN114040179B (en) * 2021-10-20 2023-06-06 重庆紫光华山智安科技有限公司 Image processing method and device
CN115460354A (en) * 2021-11-22 2022-12-09 北京罗克维尔斯科技有限公司 Image brightness processing method and device, electronic equipment, vehicle and storage medium

Also Published As

Publication number Publication date
CN113240582B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
US11430103B2 (en) Method for image processing, non-transitory computer readable storage medium, and electronic device
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
US9325899B1 (en) Image capturing device and digital zooming method thereof
WO2018176925A1 (en) Hdr image generation method and apparatus
US20180160046A1 (en) Depth-based zoom function using multiple cameras
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113240582B (en) Image stitching method and device
WO2019056527A1 (en) Capturing method and device
US10699377B2 (en) Method, device, and camera for blending a first and a second image having overlapping fields of view
CN109951639A (en) Camera stabilization system, method, electronic equipment and computer readable storage medium
JP7123736B2 (en) Image processing device, image processing method, and program
CN112930677B (en) Method for switching between first lens and second lens and electronic device
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
CN110288511B (en) Minimum error splicing method and device based on double camera images and electronic equipment
JP6172935B2 (en) Image processing apparatus, image processing method, and image processing program
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN108416333B (en) Image processing method and device
CN108198189B (en) Picture definition obtaining method and device, storage medium and electronic equipment
WO2022267939A1 (en) Image processing method and apparatus, and computer-readable storage medium
TW201931303A (en) Method of providing image and electronic device for supporting the method
CN112233189A (en) Multi-depth camera external parameter calibration method and device and storage medium
US20180063426A1 (en) Method, apparatus and computer program product for indicating a seam of an image in a corresponding area of a scene
CN106454140B (en) A kind of information processing method and electronic equipment
CN113592739A (en) Method and device for correcting lens shadow and storage medium
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant