CN108305235B - Method and device for fusing multiple pictures - Google Patents

Method and device for fusing multiple pictures

Info

Publication number
CN108305235B
CN108305235B (application CN201710021329.XA)
Authority
CN
China
Prior art keywords
picture
target
pixel
illumination
points
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710021329.XA
Other languages
Chinese (zh)
Other versions
CN108305235A (en)
Inventor
杨帅
夏思烽
刘家瑛
郭宗明
Current Assignee
Peking University
Original Assignee
Peking University
Peking University Founder Group Co Ltd
Beijing Founder Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Peking University, Peking University Founder Group Co Ltd and Beijing Founder Electronics Co Ltd
Priority to CN201710021329.XA
Publication of CN108305235A
Application granted
Publication of CN108305235B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a device for fusing multiple pictures. The method comprises: determining the illumination attributes of a first picture containing a person image and of at least one second picture containing at least part of the background image in the first picture; determining a target picture from the first picture and the at least one second picture according to the illumination attributes; converting the illumination attributes of the non-target pictures to be consistent with that of the target picture; marking the positions of the same feature points in the first picture and the at least one second picture; overlapping the first picture and the at least one second picture based on those positions; and screening pixel points in the overlapping regions to form a fused picture. The invention solves the problem that, in a selfie obtained with the prior art, the proportion of the portrait is far larger than that of the background, so that the background image in the picture cannot show the complete scene in which the person taking the selfie is located.

Description

Method and device for fusing multiple pictures
Technical Field
The invention relates to the technical field of image synthesis, in particular to a method and a device for fusing multiple pictures.
Background
With the development of multimedia technologies such as pictures, more and more people use portable devices to take selfies.
At present, a selfie is usually taken by holding a portable device and shooting with its front camera. However, because of the limited shooting distance and the front camera itself, the proportion of the portrait in a selfie obtained in this way is often far greater than that of the background, so the background image in the picture cannot show the complete scene in which the person taking the selfie is located.
Disclosure of Invention
The invention provides a method and a device for fusing multiple pictures, which are used to solve the problem that the proportion of the portrait in a selfie obtained in the prior art is far larger than that of the background, so that the background image in the picture cannot show the complete scene in which the person taking the selfie is located.
In a first aspect, the present invention provides a method for fusing multiple pictures, including:
determining illumination attributes of a first picture and at least one second picture; the first picture is a picture containing a person image; the second picture contains no person image but contains at least part of the background image in the first picture;
ranking the first picture and the at least one second picture by their illumination attributes, and selecting the picture ranked first as the target picture;
converting, according to the illumination attribute of the target picture, the illumination attribute of each non-target picture to be consistent with that of the target picture;
determining feature points in the first picture and the at least one second picture, and marking the positions of the same feature points in the first picture and the at least one second picture;
overlapping the first picture and the at least one second picture based on the positions of the same feature points;
determining all pixel points of all pictures contained in the overlapping regions; if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; screening pixel points in the overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points;
and fusing the first pixel points, the second pixel points and the pixel points of the non-overlapping regions to form the fused picture.
Optionally, the illumination attribute is a brightness parameter;
correspondingly, the converting the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture includes:
converting the target picture and the non-target picture from an RGB color space to a luminance and chrominance YUV color space;
acquiring brightness parameters of the target picture and the non-target picture;
converting, according to the brightness parameter of the target picture, the brightness parameter of the non-target picture to be consistent with it, so that the brightness of the non-target picture matches that of the target picture;
and converting the target picture and the non-target picture from a YUV color space to an RGB color space.
Optionally, before the converting, according to the illumination attribute of the target picture, the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture, the method further includes:
acquiring a white balance color distribution parameter of a red, green and blue (RGB) color space of each of the first picture and the at least one second picture;
determining the RGB pixel value of each pixel point in each picture, and dividing the RGB pixel value of each pixel point by the white balance color distribution parameter of the RGB color space of each picture so as to enable the color attributes of each picture in the first picture and the at least one second picture to be consistent.
Optionally, after the converting, according to the illumination attribute of the target picture, the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture, the method further includes:
and multiplying the RGB pixel value of each pixel point in the first picture and the at least one second picture by the white balance color distribution parameter of the RGB color space of the respective picture to recover the color attribute of the first picture and the at least one second picture.
Optionally, after the fusing the first pixel point, the second pixel point, and the pixel point in the non-overlapping region to form the fused picture, the method further includes:
and smoothing the splicing boundary of the picture fused by the first picture and the at least one second picture by adopting a Poisson fusion method.
Optionally, the determining feature points in the first picture and the at least one second picture, and marking the positions of the same feature points in the first picture and the at least one second picture includes:
adopting a Scale Invariant Feature Transform (SIFT) algorithm to determine SIFT feature points of the first picture and the at least one second picture;
and marking the same SIFT feature points in the first picture and the at least one second picture to obtain the corresponding relation of the same SIFT feature points in the first picture and the at least one second picture.
Optionally, the overlapping the first picture and the at least one second picture based on the position of the same feature point includes:
obtaining an affine transformation matrix from the at least one second picture to the first picture by adopting a random sample consensus (RANSAC) algorithm, based on the same feature points marked in the first picture and the at least one second picture;
and according to the affine transformation matrix from the at least one second picture to the first picture, overlapping the same feature points in the first picture and the at least one second picture in pairs to obtain the overlapped first picture and the overlapped at least one second picture.
In a second aspect, the present invention provides a device for fusing multiple pictures, including:
the first determining module is used for determining the illumination attributes of the first picture and the at least one second picture; the first picture is a picture containing a person image; the second picture contains no person image but contains at least part of the background image in the first picture;
the selection module is used for ranking the first picture and the at least one second picture by their illumination attributes and selecting the picture ranked first as the target picture;
the conversion module is used for converting the illumination attribute of the non-target picture into the illumination attribute consistent with that of the target picture according to the illumination attribute of the target picture;
a marking module, configured to determine feature points in the first picture and the at least one second picture, and mark positions of the same feature points in the first picture and the at least one second picture;
the overlapping module is used for overlapping the first picture and the at least one second picture based on the position of the same characteristic point;
the second determining module is used for determining all pixel points of all pictures contained in the overlapping regions and, if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; and for screening pixel points in the overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points;
and the fusion module is used for fusing the first pixel points, the second pixel points and the pixel points of the non-overlapping regions to form the fused picture.
Optionally, the illumination attribute is a brightness parameter;
the conversion module is specifically configured, in converting the illumination attribute of the non-target picture to be consistent with that of the target picture according to the illumination attribute of the target picture, to:
convert the target picture and the non-target picture from the RGB color space to the luminance-chrominance YUV color space; acquire the brightness parameters of the target picture and the non-target picture; convert, according to the brightness parameter of the target picture, the brightness parameter of the non-target picture to be consistent with it, so that the brightness of the non-target picture matches that of the target picture; and convert the target picture and the non-target picture from the YUV color space back to the RGB color space.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring white balance color distribution parameters of red, green and blue (RGB) color spaces of each of the first picture and the at least one second picture before the conversion module converts the illumination attribute of the non-target picture into the illumination attribute consistent with that of the target picture according to the illumination attribute of the target picture;
and the third determining module is used for determining the RGB pixel value of each pixel point in each picture and dividing the RGB pixel value of each pixel point by the white balance color distribution parameter of the RGB color space of each picture so as to enable the color attributes of each picture in the first picture and the at least one second picture to be consistent.
Optionally, the apparatus further comprises:
and the restoring module is used for, after the conversion module converts the illumination attribute of the non-target picture to be consistent with that of the target picture according to the illumination attribute of the target picture, multiplying the RGB pixel value of each pixel point in the first picture and the at least one second picture by the white balance color distribution parameter of the RGB color space of the respective picture, so as to restore the color attributes of the first picture and the at least one second picture.
Optionally, the apparatus further comprises:
and the smoothing processing module is used for smoothing the splicing boundary of the picture fused by the first picture and the at least one second picture by adopting a Poisson fusion method after the fusion module fuses the first pixel points, the second pixel points and the pixel points in the non-overlapped region to form the fused picture.
Optionally, the marking module is specifically configured, in determining feature points in the first picture and the at least one second picture and marking the positions of the same feature points, to:
adopting a Scale Invariant Feature Transform (SIFT) algorithm to determine SIFT feature points of the first picture and the at least one second picture; and marking the same SIFT feature points in the first picture and the at least one second picture to obtain the corresponding relation of the same SIFT feature points in the first picture and the at least one second picture.
Optionally, the overlapping module is specifically configured, in overlapping the first picture and the at least one second picture based on the positions of the same feature points, to:
obtain an affine transformation matrix from the at least one second picture to the first picture with a random sample consensus (RANSAC) algorithm, based on the same feature points marked in the first picture and the at least one second picture; and, according to this affine transformation matrix, overlap the same feature points of the first picture and the at least one second picture pairwise, to obtain the overlapped first picture and the overlapped at least one second picture.
According to the embodiments of the invention, the illumination attributes of the first picture and the at least one second picture are determined; the pictures are ranked by illumination attribute and the picture ranked first is selected as the target picture; the illumination attribute of each non-target picture is converted to be consistent with that of the target picture; feature points are determined in the first picture and the at least one second picture and the positions of the same feature points are marked; the pictures are then overlapped based on those positions, and pixel values are selected in the overlapping regions of the overlapped pictures. Because the first picture contains a person image while the second picture contains no person image but at least part of the background image of the first picture, the picture obtained by fusing them contains the person image and background of the first picture together with the background image of the second picture. This solves the problem that, in a selfie obtained in the prior art, the proportion of the portrait is far larger than that of the background, so that the background image cannot show the complete scene in which the person taking the selfie is located.
Drawings
FIG. 1 is a flow chart illustrating a method for multi-picture fusion in an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method for multi-picture fusion in accordance with another exemplary embodiment;
fig. 3 is a schematic structural diagram illustrating a multi-picture fusion apparatus according to an exemplary embodiment;
fig. 4 is a schematic structural diagram of a multi-picture fusion apparatus according to another exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a method for fusing multiple pictures in an exemplary embodiment, where the embodiment includes:
step 101: determining illumination attributes of the first picture and the at least one second picture; the first picture is a picture containing a figure image; the second picture does not contain a person image, but contains at least part of the background image in the first picture.
The illumination attribute may be an attribute parameter representing the brightness of the picture.
The first picture may be a selfie or another picture containing a person image. The second picture may be a picture, taken with a normal (non-selfie) shot, that contains part of the background of the selfie, or any other picture containing at least part of the background image of the first picture; the invention does not limit this. For example, the first picture may be an image containing a person obtained by self-shooting with the front camera of a mobile device, and the second picture may be a picture taken forward with the rear camera of the same device that contains at least part of the background image of the first picture. In general, the second picture also contains background that is not shown in the first picture, for example scenery beyond the two sides of the first picture's background.
Step 102: ranking the first picture and the at least one second picture by their illumination attributes, and selecting the picture ranked first as the target picture.
Specifically, the first picture and the at least one second picture are ranked according to their illumination attributes, and the picture with the best illumination-attribute index is placed first. For example, the brightest picture may be ranked first and selected as the target picture, or the picture with the best combined brightness and color average index may be ranked first and selected as the target picture.
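As a concrete illustration of this step, the sketch below ranks pictures by mean luminance of the Y channel and returns the brightest one as the target picture. Mean luminance is only one possible illumination-attribute index, and the choice of index, as noted above, is left open by this embodiment.

```python
import cv2
import numpy as np

def select_target_index(pictures):
    """Rank pictures by an illumination-attribute index (here: mean
    luminance of the YUV Y channel) and return the index of the
    picture ranked first, i.e. the target picture."""
    def mean_luminance(bgr):
        yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
        return float(yuv[:, :, 0].mean())

    scores = [mean_luminance(p) for p in pictures]
    return int(np.argmax(scores))

# pictures[0] is the first picture, the rest are second pictures:
# target = pictures[select_target_index(pictures)]
```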
Step 103: converting, according to the illumination attribute of the target picture, the illumination attribute of the non-target picture to be consistent with it.
Specifically, the brightness attribute parameter of the target picture is obtained, and the brightness attribute parameter of each non-target picture is converted to be consistent with it.
Step 104: determining feature points in the first picture and the at least one second picture, and marking the positions of the same feature points in the first picture and the at least one second picture.
Feature points of a picture are points that reflect its essential characteristics and can mark a target object in it; they are points that are easy to identify and have a unique position, such as corners and intersections. For example, SIFT feature points are one kind of feature point. They are local features of a picture, invariant to rotation, scaling and brightness changes, and stable to some degree under viewpoint changes, affine transformation and noise.
Step 105: overlapping the first picture and the at least one second picture based on the positions of the same feature points.
Specifically, after the positions of the same feature points in the first picture and the at least one second picture are marked, the same feature points of the pictures are matched, establishing a pairwise correspondence between them; according to this correspondence, the same feature points of the first picture and the at least one second picture are overlapped pairwise, yielding the overlapped first picture and the overlapped at least one second picture.
Step 106: determining all pixel points of all pictures contained in the overlapping regions; if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; and screening pixel points in the overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points.
Specifically, when the pixel points of an overlapping region include pixel points belonging to the first picture, the first picture's pixel points are selected and retained as the first pixel points of the fused picture, so that a person image from the first picture is never covered where it appears in an overlapping region.
For the other overlapping regions, which contain no pixel points of the first picture and only pixel points of second pictures, the pixel points are screened according to a preset method and those of high quality are retained, yielding the screened second pixel points; the image quality retained in these regions is thus higher.
Step 107: fusing the first pixel points, the second pixel points and the pixel points of the non-overlapping regions to form the fused picture.
The first pixel points are the pixel points from the first picture in the overlapping regions; the second pixel points are the higher-quality pixel points from the second pictures in the overlapping regions that contain no first-picture pixel points. The first pixel points, the second pixel points and the pixel points of the non-overlapping regions are fused to form the fused picture.
As can be seen, this embodiment determines the illumination attributes of a first picture containing a person image and of at least one second picture containing at least part of the background image in the first picture; ranks the pictures by illumination attribute and selects the picture ranked first as the target picture; converts the illumination attribute of each non-target picture to be consistent with that of the target picture; determines feature points and marks the positions of the same feature points in the pictures; overlaps the pictures based on those positions; retains first-picture pixel points in the overlapping regions as the first pixel points and screens the remaining overlapping regions to obtain the second pixel points; and fuses the first pixel points, the second pixel points and the pixel points of the non-overlapping regions into a fused picture that contains the person image and background of the first picture together with the whole background image of the second pictures.
Fig. 2 is a flowchart illustrating a method for fusing multiple pictures according to another exemplary embodiment, where on the basis of the foregoing embodiment, the present embodiment includes:
step 201: determining illumination attributes of the first picture and the at least one second picture; the first picture is a picture containing a figure image; the second picture does not contain the image of the person, but contains at least part of the background image in the first picture.
Step 202: ranking the first picture and the at least one second picture by their illumination attributes, and selecting the picture ranked first as the target picture.
Step 203: and acquiring a white balance color distribution parameter of a red, green and blue (RGB) color space of each of the first picture and the at least one second picture.
The white balance color distribution parameter of a picture's RGB color space is an index describing how accurately white is reproduced when the red, green and blue primaries are mixed in the displayed picture.
Step 204: determining the RGB pixel value of each pixel point in each picture in the first picture and the at least one second picture, and dividing the RGB pixel value of each pixel point by the white balance color distribution parameter of the RGB color space of each picture so as to enable the color attributes of each picture in the first picture and the at least one second picture to be consistent.
The color attribute of a picture is related to its shooting parameters, for example the type of illumination at the time of shooting. Suppose the first picture is taken under fluorescent light and the second under incandescent light. An object looks white to the human eye under both, but a digital camera renders it greenish under fluorescent light and reddish under incandescent light; that is, both the first picture and the at least one second picture show a color deviation from the real object. Dividing the RGB pixel value of each pixel point of each picture by the white balance color distribution parameter of that picture's RGB color space removes these deviations, making the color attributes of the first picture and the at least one second picture consistent.
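A minimal numpy sketch of this normalization and its later inverse (step 206), assuming each picture's white balance color distribution parameter is available as three per-channel gains. How those gains are obtained (camera metadata, a gray-world estimate, etc.) is not fixed by the text, so the gray-world helper below is only an illustrative assumption.

```python
import numpy as np

def normalize_color(img, wb_gains):
    """Divide each pixel's RGB value by the picture's white balance
    color distribution parameters, cancelling the colour cast.
    img: H x W x 3 float array; wb_gains: length-3 per-channel gains."""
    return img / np.asarray(wb_gains, dtype=img.dtype)

def restore_color(img, wb_gains):
    """Inverse of normalize_color: multiply the gains back in to
    restore the picture's original colour attribute (step 206)."""
    return img * np.asarray(wb_gains, dtype=img.dtype)

def gray_world_gains(img):
    """Illustrative gray-world estimate of the white balance gains:
    assume the average scene colour is neutral gray."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means / means.mean()
```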
Step 205: converting the target picture and the non-target picture from the RGB color space to the YUV color space, and acquiring the brightness parameters of the target picture and the non-target picture; converting, according to the brightness parameter of the target picture, the brightness parameter of the non-target picture to be consistent with it, so that the brightness of the non-target picture matches that of the target picture; and converting the target picture and the non-target picture from the YUV color space back to the RGB color space.
Wherein the illumination attribute is a brightness parameter.
Specifically, for the process of converting the target picture and the non-target picture from the RGB color space to the YUV color space: the RGB color space of a picture contains its pixel parameters, written (r, g, b); the YUV color space of a picture contains its luminance and chrominance parameters, written (Y, U, V); the target picture and the non-target picture are converted from the RGB color space to the YUV color space through the conversion formula below.
Wherein, the conversion formula has the form:

[Y, U, V]^T = M [r, g, b]^T

where [r, g, b]^T is the pixel parameter of the picture's RGB color space, [Y, U, V]^T is the luminance-chrominance parameter of the picture's YUV color space (Y being the luminance parameter and U and V the chrominance parameters), and M is the RGB-to-YUV conversion matrix.
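A sketch of the round trip, using the conventional BT.601 RGB-to-YUV coefficients for M; the patent's own coefficients are not reproduced in this text, so the matrix below is an assumption.

```python
import numpy as np

# Conventional BT.601 RGB -> YUV matrix, assumed here for M;
# the patent's own coefficients are not given in this text.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img):
    """img: H x W x 3 float RGB in [0, 1]; returns the Y, U, V planes."""
    yuv = img @ RGB2YUV.T
    return yuv[..., 0], yuv[..., 1], yuv[..., 2]

def yuv_to_rgb(y, u, v):
    """Inverse conversion from YUV planes back to RGB."""
    return np.stack([y, u, v], axis=-1) @ YUV2RGB.T
```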
Specifically, the process of obtaining the brightness parameters of the target picture and the non-target picture, and of converting the brightness parameter of the non-target picture to be consistent with that of the target picture, is illustrated by a specific application example as follows.

Suppose the brightness parameter of the target picture is Y_t and the brightness parameter of any one non-target picture is Y_s.
A brightness conversion matrix Y_m is obtained by adopting a histogram matching method, as follows:

Y_m = C_t^{-1}(C_s(Y_s))

in the formula, C_s and C_t are the cumulative histograms of Y_s and Y_t, C(·) denotes histogram equalization, and C^{-1}(·) is the inverse of the histogram equalization.
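For 8-bit luminance planes, the mapping C_t^{-1}(C_s(·)) can be realized as a 256-entry lookup table built from cumulative histograms; applying it to Y_s yields the brightness conversion matrix Y_m. The sketch below is a direct, simplified reading of the formula above.

```python
import numpy as np

def brightness_conversion_lut(y_s, y_t):
    """Build the histogram-matching map C_t^{-1}(C_s(.)) as a lookup
    table for uint8 luminance planes y_s (non-target) and y_t (target).
    C_s and C_t are the normalized cumulative histograms."""
    hist_s, _ = np.histogram(y_s, bins=256, range=(0, 256))
    hist_t, _ = np.histogram(y_t, bins=256, range=(0, 256))
    c_s = np.cumsum(hist_s) / y_s.size   # C_s
    c_t = np.cumsum(hist_t) / y_t.size   # C_t
    # For each source level, pick the target level whose CDF value
    # first reaches the source CDF value (the inverse C_t^{-1}).
    lut = np.searchsorted(c_t, c_s).clip(0, 255).astype(np.uint8)
    return lut

# y_m = brightness_conversion_lut(y_s, y_t)[y_s]  # matched luminance plane
```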
The brightness conversion matrix Y_m is then substituted into the corresponding gradient-hold equation, which is solved as follows:

Y_s' = (I + λ(D_X^T D_X + D_Y^T D_Y))^{-1} (Y_m + λ(D_X^T D_X + D_Y^T D_Y) Y_s)

wherein I is an identity matrix; D_X and D_Y are the matrices that extract gradients in the X and Y directions; λ is a parameter controlling the weight, taking the value 1; and Y_s' is the converted luminance parameter of the non-target picture.

After the gradient-hold equation is solved, the converted brightness parameter Y_s' of the non-target picture is obtained. Because Y_s' is derived from the brightness parameter Y_t of the target picture, the brightness of the converted non-target picture is consistent with that of the target picture.
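A sparse linear-algebra sketch of this solve: it keeps the result close to the histogram-matched luminance while holding the gradients of the original non-target luminance. It follows the definitions above (I, D_X, D_Y, λ = 1); treat it as one plausible realization of the gradient-hold solve rather than the patent's exact implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def _forward_diff(m):
    """(m-1) x m forward-difference operator."""
    return sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1],
                    shape=(m - 1, m))

def gradient_hold_transfer(y_s, y_m, lam=1.0):
    """Solve (I + lam*(Dx'Dx + Dy'Dy)) y = y_m + lam*(Dx'Dx + Dy'Dy) y_s
    for the converted luminance Y_s': close to the histogram-matched
    plane y_m, with the gradients of the original plane y_s held.
    y_s, y_m: H x W float arrays."""
    h, w = y_s.shape
    dx = sp.kron(sp.eye(h), _forward_diff(w))  # D_X: gradients along x
    dy = sp.kron(_forward_diff(h), sp.eye(w))  # D_Y: gradients along y
    g = lam * (dx.T @ dx + dy.T @ dy)
    a = (sp.eye(h * w) + g).tocsc()
    b = y_m.ravel() + g @ y_s.ravel()
    return spsolve(a, b).reshape(h, w)
```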
Specifically, the conversion formula required to convert the target picture and the non-target picture from the YUV color space back to the RGB color space is the inverse of the conversion formula given above, and is not repeated here.
Step 206: multiplying the RGB pixel value of each pixel point in the first picture and the at least one second picture by the white balance color distribution parameter of the RGB color space of the respective picture, to restore the color attributes of the first picture and the at least one second picture.
Step 207: determining feature points in the first picture and the at least one second picture, and marking the positions of the same feature points in the first picture and the at least one second picture.
Specifically, SIFT feature points of the first picture and the at least one second picture are determined using the scale-invariant feature transform (SIFT) algorithm; the same SIFT feature points in the first picture and the at least one second picture are then marked, yielding the correspondence of the same SIFT feature points across the pictures.
Step 208: overlapping the first picture and the at least one second picture based on the positions of the same feature points.
Specifically, based on the marked same feature points in the first picture and the at least one second picture, an affine transformation matrix from the at least one second picture to the first picture is obtained with a random sample consensus (RANSAC) algorithm; according to this affine transformation matrix, the same feature points of the first picture and the at least one second picture are overlapped pairwise, yielding the overlapped first picture and the overlapped at least one second picture.
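An OpenCV sketch of this step (assuming an OpenCV build with SIFT available; the "sampling consistency algorithm" is read here as RANSAC): it matches SIFT feature points with a ratio test, estimates the affine transformation with RANSAC, and warps the second picture into the first picture's frame. Warping into the first picture's canvas is a simplification; the embodiment keeps the second picture's extra background, which would require a larger output canvas.

```python
import cv2
import numpy as np

def overlap_second_on_first(first, second):
    """Mark the same SIFT feature points in both pictures, estimate the
    affine transformation matrix from the second picture to the first
    with RANSAC, and warp the second picture accordingly."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(first, None)
    kp2, des2 = sift.detectAndCompute(second, None)
    # Ratio-test matching: keep only clearly-corresponding feature points.
    raw = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    matches = [m for m, n in raw if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in matches])
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])
    # Affine transformation matrix from the second picture to the first.
    M, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = first.shape[:2]
    return cv2.warpAffine(second, M, (w, h))
```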
Step 209: determining all pixel points of all pictures contained in the overlapping regions; if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; and screening pixel points in the overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points.
Among these, when the pixel points of an overlapping region include pixel points belonging to the first picture, the first picture's pixel points are selected and retained; for overlapping regions containing no pixel points of the first picture, high-quality pixel points are selected from the second pictures.
The preset method for screening the pixel points of an overlapping region uses a Markov random field energy equation and selects, for each pixel point, the pixel value with the minimum energy function value.
The process of obtaining the energy function value of a pixel point from the Markov random field energy equation is described in detail as follows.

The Markov random field energy equation is established as follows:

E(L) = Σ_{p∈Ω} E_d(L(p)) + α Σ_{(p,q)∈N_4} E_s(L(p), L(q))

wherein E(L) is the energy function of the Markov random field energy equation, and the smaller its value, the better the pixel assignment; α = 2 is used to balance the weights of the smoothing term E_s and the data term E_d; L denotes an assignment, with L(p) = i designating from which of the second pictures the pixel value of pixel point p is selected; N_4 denotes the 4-neighborhood relation, pixel point q being a neighboring pixel of pixel point p; and Ω denotes the overlapping region.

In the above formula, E_s(L(p), L(q)) is the smoothing term, representing the structural continuity of two adjacent pixel values. Let L(p) = i and L(q) = j; then E_s(L(p), L(q)) is defined as:

E_s(L(p), L(q)) = ||I_i''(p) - I_j''(p)||_1 + ||I_i''(q) - I_j''(q)||_1

E_d(L(p)) is the data term, representing the consistency of picture boundaries. It is defined in terms of the blocks P_i(p) and P_j(p) centered at pixel point p in I_i'' and I_j'' respectively, the block difference value between them, and δΩ, the boundary of the overlapping region.
A graph cut algorithm is then used to solve the Markov random field energy equation, obtaining the assignment with the optimal energy function value E(L) for the pixel points.
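The sketch below only evaluates the smoothing term E_s for a candidate assignment over aligned pictures, following the definitions above. The actual minimization (data term included) is done with a graph cut solver, e.g. via a library such as PyMaxflow, which is not reimplemented here.

```python
import numpy as np

def smoothing_term(imgs, labels):
    """Evaluate the smoothing term of the Markov random field energy
    for a candidate assignment L over the overlapping region: for each
    4-neighbour pair (p, q) with i = L(p), j = L(q), add
        ||I_i(p) - I_j(p)||_1 + ||I_i(q) - I_j(q)||_1.
    imgs: list of aligned H x W x 3 float arrays; labels: H x W ints."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in imgs])
    h, w = labels.shape
    yy, xx = np.mgrid[0:h, 0:w]

    def pair_cost(py, px, qy, qx):
        i, j = labels[py, px], labels[qy, qx]
        cost = (np.abs(stack[i, py, px] - stack[j, py, px]).sum(-1)
                + np.abs(stack[i, qy, qx] - stack[j, qy, qx]).sum(-1))
        # Identical labels on both sides contribute no seam cost.
        return np.where(i == j, 0.0, cost).sum()

    horizontal = pair_cost(yy[:, :-1], xx[:, :-1], yy[:, 1:], xx[:, 1:])
    vertical = pair_cost(yy[:-1, :], xx[:-1, :], yy[1:, :], xx[1:, :])
    return horizontal + vertical
```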
Step 210: fusing the first pixel points, the second pixel points and the pixel points of the non-overlapping regions to form the fused picture.
Step 211: smoothing, with a Poisson fusion method, the stitching boundary of the picture fused from the first picture and the at least one second picture.
The stitching boundary of the picture fused from the first picture and the at least one second picture is the boundary along which the first pixel points, the second pixel points and the pixel points of the non-overlapping regions meet.
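OpenCV's seamlessClone implements Poisson blending and can serve as the smoothing primitive here. The sketch below assumes a binary mask marking the region that came from one source picture and blends that region back into the fused result so the stitching boundary becomes smooth; the patent does not name a particular library, so this is an illustrative stand-in.

```python
import cv2
import numpy as np

def smooth_stitching_boundary(fused, source, mask):
    """Poisson-blend the masked region of `source` into `fused` so the
    stitching boundary is smoothed.
    fused, source: H x W x 3 uint8; mask: H x W uint8, 255 inside the
    region taken from `source`."""
    ys, xs = np.nonzero(mask)
    center = (int(xs.mean()), int(ys.mean()))  # (x, y) anchor for the clone
    return cv2.seamlessClone(source, fused, mask, center, cv2.NORMAL_CLONE)
```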
As can be seen from this embodiment, the invention determines the illumination attributes of a first picture containing a person image and of at least one second picture containing at least part of the background image in the first picture, determines a target picture according to the illumination attributes, converts the illumination attributes of the non-target pictures to be consistent with that of the target picture, marks the positions of the same feature points in the first picture and the at least one second picture, overlaps the pictures based on those positions, and screens pixel points in the overlapping regions to form a fused picture. This solves the problem that, in a selfie obtained with the prior art, the proportion of the portrait is far larger than that of the background, so that the background image cannot show the complete scene in which the person taking the selfie is located. Meanwhile, the RGB pixel values of the first picture and the at least one second picture are divided by their respective white balance color distribution parameters so that the color attributes of the pictures are consistent, and the stitching boundary of the image overlapping regions is smoothed with a Poisson fusion method.
Fig. 3 is a schematic structural diagram of a multi-picture fusion apparatus according to an exemplary embodiment, where the embodiment includes: a first determination module 301, a selection module 302, a conversion module 303, a marking module 304, an overlapping module 305, a second determination module 306, and a fusion module 307;
a first determining module 301, configured to determine illumination attributes of the first picture and the at least one second picture; the first picture is a picture containing a person image; the second picture contains no person image but contains at least part of the background image in the first picture;
a selecting module 302, configured to rank the first picture and the at least one second picture by their illumination attributes, and to select the picture ranked first as the target picture;
a conversion module 303, configured to convert, according to the illumination attribute of the target picture, the illumination attribute of the non-target picture to be consistent with it;
a marking module 304, configured to determine feature points in the first picture and the at least one second picture, and to mark the positions of the same feature points in the first picture and the at least one second picture;
an overlapping module 305, configured to overlap the first picture and the at least one second picture based on the positions of the same feature points;
a second determining module 306, configured to determine all pixel points of all pictures contained in the overlapping regions and, if a pixel point belongs to the first picture, retain it to form the first pixel points of the fused picture; and to screen pixel points in the overlapping regions that contain no pixel points of the first picture, obtaining the screened second pixel points;
and a fusion module 307, configured to fuse the first pixel point, the second pixel point, and the pixel point in the non-overlapping region to form a fused picture.
According to this embodiment, the illumination attributes of a first picture containing a person image and of at least one second picture containing at least part of the background image in the first picture are determined; the pictures are ranked by illumination attribute and the picture ranked first is selected as the target picture; the illumination attribute of each non-target picture is converted to be consistent with that of the target picture; feature points are determined and the positions of the same feature points are marked in the first picture and the at least one second picture; the pictures are overlapped based on those positions; pixel points belonging to the first picture are retained in the overlapping regions as the first pixel points, and the remaining overlapping regions are screened to obtain the second pixel points; and the first pixel points, the second pixel points and the pixel points of the non-overlapping regions are fused into a picture that contains the person image and background of the first picture together with the whole background image of the second pictures.
Fig. 4 is a schematic structural diagram of a device for fusing multiple pictures, according to another exemplary embodiment, on the basis of the foregoing embodiment, the present embodiment further includes: an obtaining module 308, a third determining module 309, a recovering module 310, and a smoothing module 311;
an obtaining module 308, configured to obtain the white balance color distribution parameter of the red-green-blue (RGB) color space of each of the first picture and the at least one second picture;
a third determining module 309, configured to determine the RGB pixel value of each pixel point in each picture and divide it by the white balance color distribution parameter of the RGB color space of the respective picture, so that the color attributes of the first picture and the at least one second picture are consistent.
The restoring module 310 is configured to, after the conversion module 303 converts the illumination attribute of the non-target picture into a value consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture, multiply the RGB pixel values of each pixel point in the first picture and the at least one second picture by the corresponding white balance color distribution parameter, so as to restore the color attributes of the first picture and the at least one second picture.
And a smoothing module 311, configured to smooth, by using a Poisson fusion method, the stitching boundary of the image overlapping regions of the first picture and the at least one second picture after the fusion module 307 forms the fused picture.
As can be seen from this embodiment, the invention determines the illumination attributes of a first picture containing a person image and of at least one second picture containing at least part of the background image in the first picture, determines a target picture according to the illumination attributes, converts the illumination attributes of the non-target pictures to be consistent with that of the target picture, marks the positions of the same feature points in the pictures, overlaps the pictures based on those positions, and screens pixel points in the overlapping regions to form a fused picture. This solves the problem that, in a selfie obtained with the prior art, the proportion of the portrait is far larger than that of the background, so that the background image cannot show the complete scene in which the person taking the selfie is located. Meanwhile, the RGB pixel values of the first picture and the at least one second picture are divided by their respective white balance color distribution parameters so that the color attributes are consistent, and the stitching boundary of the image overlapping regions is smoothed with a Poisson fusion method.
On the basis of the above-described embodiment, with reference to fig. 4,
Optionally, the illumination attribute comprises a brightness parameter. The conversion module 303 is specifically configured to convert the target picture and the non-target picture from the red-green-blue (RGB) color space to the luminance-chrominance YUV color space; acquire the brightness parameters of the target picture and the non-target picture; convert, according to the brightness parameter of the target picture, the brightness parameter of the non-target picture to be consistent with it, so that the brightness of the non-target picture matches that of the target picture; convert the target picture and the non-target picture from the YUV color space back to the RGB color space; and multiply the RGB pixel value of each pixel point of the non-target picture by the white balance color distribution parameter of the target picture, so that the color distribution of the non-target picture is consistent with that of the target picture.
Optionally, the marking module 304 is specifically configured to determine SIFT feature points of the first picture and the at least one second picture by using a Scale Invariant Feature Transform (SIFT) algorithm; and marking the same SIFT feature points in the first picture and the at least one second picture to obtain the corresponding relation of the same SIFT feature points in the first picture and the at least one second picture.
Optionally, the overlapping module 305 is specifically configured to obtain an affine transformation matrix from the at least one second picture to the first picture with a random sample consensus (RANSAC) algorithm, based on the same feature points marked in the first picture and the at least one second picture; and, according to this affine transformation matrix, to overlap the same feature points of the first picture and the at least one second picture pairwise, obtaining the overlapped first picture and the overlapped at least one second picture.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be carried out by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A method for fusing multiple pictures, comprising:
determining illumination attributes of a first picture and at least one second picture; the first picture is a picture containing a person image; the second picture contains no person image but contains at least part of the background image in the first picture;
ranking the first picture and the at least one second picture by their illumination attributes, and selecting the picture ranked first as the target picture;
converting, according to the illumination attribute of the target picture, the illumination attribute of each non-target picture to be consistent with that of the target picture;
determining feature points in the first picture and the at least one second picture, and marking the positions of the same feature points in the first picture and the at least one second picture;
overlapping the first picture and the at least one second picture based on the positions of the same feature points;
determining all pixel points of all pictures contained in the overlapping regions; if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; screening pixel points in other overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points;
and fusing the first pixel points, the second pixel points and the pixel points of the non-overlapping regions to form a fused picture.
2. The method of claim 1,
the illumination attribute is a brightness parameter;
correspondingly, the converting the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture includes:
converting the target picture and the non-target picture from an RGB color space to a luminance and chrominance YUV color space;
acquiring brightness parameters of the target picture and the non-target picture;
converting, according to the brightness parameter of the target picture, the brightness parameter of the non-target picture to be consistent with it, so that the brightness of the non-target picture matches that of the target picture;
and converting the target picture and the non-target picture from a YUV color space to an RGB color space.
3. The method according to claim 1 or 2, wherein before converting the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture, the method further comprises:
acquiring a white balance color distribution parameter of a red, green and blue (RGB) color space of each of the first picture and the at least one second picture;
determining the RGB pixel value of each pixel point in each picture, and dividing the RGB pixel value of each pixel point by the white balance color distribution parameter of the RGB color space of each picture so as to enable the color attributes of each picture in the first picture and the at least one second picture to be consistent.
4. The method according to claim 3, wherein after converting the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture, the method further comprises:
and multiplying the RGB pixel value of each pixel point in the first picture and the at least one second picture by the white balance color distribution parameter of the RGB color space of the respective picture to recover the color attribute of the first picture and the at least one second picture.
5. The method according to claim 1, wherein after fusing the first pixel point, the second pixel point, and the pixel point in the non-overlapped region to form a fused picture, further comprising:
and smoothing the splicing boundary of the picture fused by the first picture and the at least one second picture by adopting a Poisson fusion method.
6. The method according to claim 1, wherein the determining feature points in the first picture and the at least one second picture and marking the positions of the same feature points in the first picture and the at least one second picture comprises:
adopting a Scale Invariant Feature Transform (SIFT) algorithm to determine SIFT feature points of the first picture and the at least one second picture;
and marking the same SIFT feature points in the first picture and the at least one second picture to obtain the corresponding relation of the same SIFT feature points in the first picture and the at least one second picture.
7. The method according to claim 1, wherein the overlapping of the first picture and the at least one second picture based on the positions of the same feature points comprises:
obtaining an affine transformation matrix from the at least one second picture to the first picture by adopting a random sample consensus (RANSAC) algorithm, based on the same feature points marked in the first picture and the at least one second picture;
and according to the affine transformation matrix from the at least one second picture to the first picture, overlapping the same feature points in the first picture and the at least one second picture in pairs to obtain the overlapped first picture and the overlapped at least one second picture.
8. A device for fusing multiple pictures, comprising:
the first determining module is used for determining the illumination attributes of the first picture and the at least one second picture; the first picture is a picture containing a person image; the second picture contains no person image but contains at least part of the background image in the first picture;
the selection module is used for ranking the first picture and the at least one second picture by their illumination attributes and selecting the picture ranked first as the target picture;
the conversion module is used for converting the illumination attribute of the non-target picture into the illumination attribute consistent with that of the target picture according to the illumination attribute of the target picture;
a marking module, configured to determine feature points in the first picture and the at least one second picture, and mark positions of the same feature points in the first picture and the at least one second picture;
the overlapping module is used for overlapping the first picture and the at least one second picture based on the position of the same characteristic point;
the second determining module is used for determining all pixel points of all pictures contained in the overlapping regions and, if a pixel point belongs to the first picture, retaining it to form the first pixel points of the fused picture; and for screening pixel points in other overlapping regions that contain no pixel points of the first picture, to obtain the screened second pixel points;
and the fusion module is used for fusing the first pixel points, the second pixel points and the pixel points in the non-overlapped area to form a fused picture.
9. The apparatus of claim 8,
wherein the illumination attribute is a brightness parameter;
and the conversion module is specifically configured to, in the process of converting the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture according to the illumination attribute of the target picture:
convert the target picture and the non-target picture from an RGB color space to a luminance-chrominance YUV color space; acquire the brightness parameters of the target picture and the non-target picture; convert the brightness parameter of the non-target picture to be consistent with that of the target picture according to the brightness parameter of the target picture, so that the brightness of the non-target picture is consistent with that of the target picture; and convert the target picture and the non-target picture from the YUV color space back to the RGB color space.
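A sketch of this conversion, assuming (as the claim leaves open) that the "brightness parameter" is the mean of the Y channel and that conversion means scaling the non-target Y channel to match it:

```python
import cv2
import numpy as np

def match_brightness(non_target, target):
    # RGB -> YUV, so luminance (Y) can be adjusted independently of
    # chrominance (U, V).
    nt = cv2.cvtColor(non_target, cv2.COLOR_BGR2YUV).astype(np.float32)
    tg = cv2.cvtColor(target, cv2.COLOR_BGR2YUV).astype(np.float32)
    # Brightness parameter assumed here: the mean of the Y channel.
    scale = tg[..., 0].mean() / max(nt[..., 0].mean(), 1e-6)
    nt[..., 0] = np.clip(nt[..., 0] * scale, 0, 255)
    # YUV -> RGB once the brightness is consistent with the target.
    return cv2.cvtColor(nt.astype(np.uint8), cv2.COLOR_YUV2BGR)
```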
10. The apparatus of claim 8 or 9, further comprising:
an acquisition module, configured to acquire the white balance color distribution parameters of the red, green and blue (RGB) color space of each of the first picture and the at least one second picture, before the conversion module converts the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture;
and a third determining module, configured to determine the RGB pixel value of each pixel point in each picture and divide the RGB pixel value of each pixel point by the white balance color distribution parameter of the RGB color space of that picture, so that the color attributes of the first picture and the at least one second picture are consistent.
11. The apparatus of claim 10, further comprising:
and a restoring module, configured to, after the conversion module converts the illumination attribute of the non-target picture to be consistent with the illumination attribute of the target picture, multiply the RGB pixel value of each pixel point in the first picture and the at least one second picture by the white balance color distribution parameter of the RGB color space of each picture, so as to restore the color attributes of the first picture and the at least one second picture.
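A sketch of the normalize/restore pair from claims 10 and 11; the claims do not fix how the white balance color distribution parameters are estimated, so a gray-world estimate is assumed here:

```python
import numpy as np

def white_balance_params(img):
    # Gray-world estimate: per-channel means, normalised so overall
    # brightness is preserved (one common choice of estimator).
    means = img.reshape(-1, 3).mean(axis=0)
    return means / means.mean()

def normalize_colors(img, params):
    # Divide each pixel's RGB value by the picture's parameters so all
    # pictures share consistent color attributes (claim 10).
    return np.clip(img.astype(np.float32) / params, 0, 255).astype(np.uint8)

def restore_colors(img, params):
    # Multiply back after the illumination conversion to restore the
    # original color attributes (claim 11).
    return np.clip(img.astype(np.float32) * params, 0, 255).astype(np.uint8)
```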
12. The apparatus of claim 8, further comprising:
and a smoothing module, configured to smooth the splicing boundary of the picture obtained by fusing the first picture and the at least one second picture by using a Poisson fusion method, after the fusion module fuses the first pixel points, the second pixel points and the pixel points in the non-overlapped area to form the fused picture.
13. The apparatus of claim 8,
wherein the marking module is specifically configured to, in the process of determining feature points in the first picture and the at least one second picture and marking the same feature points in the first picture and the at least one second picture:
determine SIFT feature points of the first picture and the at least one second picture by using a Scale-Invariant Feature Transform (SIFT) algorithm; and mark the same SIFT feature points in the first picture and the at least one second picture to obtain the correspondence of the same SIFT feature points between the first picture and the at least one second picture.
14. The apparatus of claim 8,
wherein the overlapping module is specifically configured to, in the process of overlapping the first picture and the at least one second picture based on the positions of the same feature points:
obtain an affine transformation matrix from the at least one second picture to the first picture by using a random sample consensus (RANSAC) algorithm, based on the same feature points marked in the first picture and the at least one second picture; and, according to the affine transformation matrix, overlap the same feature points in the first picture and the at least one second picture in pairs to obtain the overlapped first picture and the overlapped at least one second picture.
CN201710021329.XA 2017-01-11 2017-01-11 Method and device for fusing multiple pictures Expired - Fee Related CN108305235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710021329.XA CN108305235B (en) 2017-01-11 2017-01-11 Method and device for fusing multiple pictures

Publications (2)

Publication Number Publication Date
CN108305235A CN108305235A (en) 2018-07-20
CN108305235B true CN108305235B (en) 2022-02-18

Family

ID=62872200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710021329.XA Expired - Fee Related CN108305235B (en) 2017-01-11 2017-01-11 Method and device for fusing multiple pictures

Country Status (1)

Country Link
CN (1) CN108305235B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059641B (en) * 2019-04-23 2023-02-03 重庆工商大学 Depth bird recognition algorithm based on multiple preset points
CN110135442B (en) * 2019-05-20 2021-12-14 驭势科技(北京)有限公司 Evaluation system and method of feature point extraction algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009097552A1 (en) * 2008-02-01 2009-08-06 Omnivision Cdm Optics, Inc. Image data fusion systems and methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318517A (en) * 2014-11-19 2015-01-28 北京奇虎科技有限公司 Image splicing method and device and client terminal
CN106210501A (en) * 2015-04-08 2016-12-07 大同大学 Image synthesizing method and image processing apparatus
CN105096287A (en) * 2015-08-11 2015-11-25 电子科技大学 Improved multi-time Poisson image fusion method

Also Published As

Publication number Publication date
CN108305235A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
US11455516B2 (en) Image lighting methods and apparatuses, electronic devices, and storage media
CN111062378B (en) Image processing method, model training method, target detection method and related device
US10645268B2 (en) Image processing method and apparatus of terminal, and terminal
Hu et al. Exposure stacks of live scenes with hand-held cameras
CN111127318B (en) Panoramic image splicing method in airport environment
Mavridaki et al. A comprehensive aesthetic quality assessment method for natural images using basic rules of photography
CN104486552B (en) A kind of method and electronic equipment obtaining image
CN105578063A (en) Image processing method and terminal
CN109474780A (en) A kind of method and apparatus for image procossing
CN108154514A (en) Image processing method, device and equipment
CN109493283A (en) A kind of method that high dynamic range images ghost is eliminated
CN108876723A (en) A kind of construction method of the color background of gray scale target image
CN113395440A (en) Image processing method and electronic equipment
CN108305235B (en) Method and device for fusing multiple pictures
WO2022218082A1 (en) Image processing method and apparatus based on artificial intelligence, and electronic device, computer-readable storage medium and computer program product
CN108353133B (en) Apparatus and method for reducing exposure time set for high dynamic range video/imaging
CN108257086A (en) A kind of method and device of distant view photograph processing
KR101513931B1 (en) Auto-correction method of composition and image apparatus with the same technique
CN105893578A (en) Method and device for selecting photos
WO2023151210A1 (en) Image processing method, electronic device and computer-readable storage medium
CN112435173A (en) Image processing and live broadcasting method, device, equipment and storage medium
WO2023273111A1 (en) Image processing method and apparatus, and computer device and storage medium
CN115619636A (en) Image stitching method, electronic device and storage medium
CN105160329B (en) A kind of tooth recognition methods, system and camera terminal based on YUV color spaces
CN105894068B (en) FPAR card design and rapid identification and positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230324

Address after: 100871 No. 5, the Summer Palace Road, Beijing, Haidian District

Patentee after: Peking University

Address before: 100871 No. 5, the Summer Palace Road, Beijing, Haidian District

Patentee before: Peking University

Patentee before: PEKING UNIVERSITY FOUNDER GROUP Co.,Ltd.

Patentee before: BEIJING FOUNDER ELECTRONICS Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220218