CN117974501A - Image processing method and device - Google Patents


Info

Publication number
CN117974501A
Authority
CN
China
Prior art keywords
image
expression
target image
edge
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311556320.0A
Other languages
Chinese (zh)
Inventor
李志成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aiku Software Technology Shanghai Co ltd
Original Assignee
Aiku Software Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aiku Software Technology Shanghai Co ltd filed Critical Aiku Software Technology Shanghai Co ltd
Priority to CN202311556320.0A
Publication of CN117974501A
Legal status: Pending

Abstract

The application discloses an image processing method and device, belonging to the field of image processing. The method comprises the following steps: acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting; determining the expression similarity between each first image and a reference image to obtain a plurality of expression similarities; determining a first target image from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value among the plurality of expression similarities; performing brightness compensation processing on a static area in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image other than the first target image among the plurality of first images; and performing edge compensation processing on the second target image to obtain a third target image.

Description

Image processing method and device
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and an image processing device.
Background
To capture an image of a specific expression that meets the user's requirements, a photographer often needs to accurately grasp the shooting timing at the moment the user makes the specific expression, which places high demands on the photographer's shooting skills. Alternatively, an image of a specific expression that the user is satisfied with may be manually selected after a plurality of first images are continuously photographed. However, in the first images obtained by continuous shooting, ghosting caused by shooting delay often occurs, which affects the imaging effect.
Therefore, it is currently difficult to take an image of a specific expression that is satisfactory to the user.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and an image processing device, which can solve the problem that it is currently difficult to capture an image with a specific expression that is satisfactory to the user.
In a first aspect, an embodiment of the present application provides an image processing method, including:
Acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting;
Determining the expression similarity between each first image and the reference image to obtain a plurality of expression similarities;
Determining a first target image from a plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value of the plurality of expression similarities;
Performing brightness compensation processing on a static area in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image other than the first target image among the plurality of first images;
And performing edge compensation processing on the second target image to obtain a third target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting;
the first determining module is used for determining the expression similarity between each first image and the reference image to obtain a plurality of expression similarities;
The second determining module is used for determining a first target image from a plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value in the plurality of expression similarities;
The first compensation module is used for carrying out brightness compensation processing on the static area in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image other than the first target image among the plurality of first images;
And the second compensation module is used for carrying out edge compensation processing on the second target image to obtain a third target image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a plurality of first images are acquired, wherein the plurality of first images are images obtained by continuous shooting; the expression similarity between each first image and a reference image is determined; and a first target image is determined from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value among the plurality of expression similarities. The portrait expression in the first image corresponding to the maximum value is the closest to the portrait expression in the reference image, that is, it is the first image with the most natural and relaxed expression. By determining the first image corresponding to the maximum value as the first target image, the first target image with the most natural and relaxed expression can be rapidly determined from the plurality of continuously shot first images, helping the user rapidly identify a satisfactory image with the specific expression. Brightness compensation processing is then performed on the static area in the first target image according to at least one second image to obtain a second target image, where the second image is an image other than the first target image among the plurality of first images; this improves the exposure detail of the first target image and produces a higher-quality second target image. Finally, edge compensation processing is performed on the second target image to obtain a third target image, which balances the dependence of the pixel points near the boundary in the second target image on the at least one second image and the first target image, thereby eliminating the artifact problem and obtaining a third target image with a natural and relaxed expression, good exposure and natural edges.
Drawings
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a preset edge direction according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an expression class according to an embodiment of the present application;
fig. 4 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 6 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the accompanying drawings of the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the objects identified by "first," "second," etc. are generally of a type not limited to the number of objects, for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 1, the image processing method may include steps 110 to 150, and the method is applied to an image processing apparatus, specifically as follows:
Step 110, acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting;
In the embodiment of the application, the plurality of first images are images obtained by continuous shooting, the difference of shooting time between any two adjacent first images is smaller than a preset time length, and the preset time length can be 0.05 second, 0.1 second and other time lengths.
Step 120, determining the expression similarity between each first image and the reference image to obtain a plurality of expression similarities;
the first image comprises a portrait area, and the reference image also comprises a portrait area; the face area of the person object corresponding to the portrait area presents an expression, for example: happiness, sadness, excitement, etc.
For each first image, calculating the expression similarity between the portrait area in the first image and the portrait area of the reference image, wherein one first image corresponds to one expression similarity.
In one possible embodiment, before step 120, steps 210-220 may be further included:
Step 210, identifying expression categories corresponding to the plurality of first images;
Specifically, the expression category of the face is identified from a plurality of first images that are continuously photographed.
Expression categories include, but are not limited to, anger, happiness and sadness.
And 220, acquiring a reference image from the first gallery according to the expression category.
The first gallery may be a local gallery or a gallery on a network. A reference image corresponding to the expression category of the first images is acquired from the first gallery according to that expression category. For example, if the expression category corresponding to the plurality of first images is happiness, a reference image whose expression category is happiness is obtained from the first gallery.
It can be appreciated that the expression category can be automatically identified, or can be selected in advance by the user according to the desired shooting style, such as smiling, cool, quirky, etc.
Therefore, by acquiring the reference image from the first gallery according to the expression category, a reference image consistent with the expression category corresponding to the first images can be accurately determined. This facilitates the subsequent determination of the expression similarity between each first image and the reference image and improves the efficiency of that determination.
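A minimal sketch of this lookup, assuming the first gallery is available as a list of (category label, image) pairs; the helper name and gallery structure are illustrative assumptions, not part of the method itself:

```python
def get_reference_image(category: str, gallery: list) -> object:
    """Return the first gallery image whose expression label matches `category`."""
    for label, image in gallery:
        if label == category:
            return image
    raise ValueError(f"no reference image labeled '{category}' in the first gallery")

# e.g. reference = get_reference_image("happiness", first_gallery)
```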
In one possible embodiment, in step 120, steps 310-330 may be specifically included:
Step 310, extracting expression characteristic information of a plurality of first images respectively to obtain a plurality of first expression characteristic information;
The first expression feature information includes a plurality of feature description values and is used for describing the expression of an object in the first image. Each feature description value describes an edge feature of the first image in a preset edge direction, that is, it indicates the pixel point category in the first image, where the pixel point categories include: high-frequency region and flat region.
For example, for any pixel point, the first expression feature information of the first image may be (1,1,1,1,0,0,0,0).
Step 320, determining second expression characteristic information corresponding to the reference image;
The second expression characteristic information comprises a plurality of characteristic description values and is used for describing the expression of the object in the reference image.
For example, the second expression feature information corresponding to the reference image is (0,1,1,1,0,0,0,1);
Step 330, determining a plurality of expression similarities according to the plurality of first expression feature information and the plurality of second expression feature information.
And determining the expression similarity corresponding to the first image according to the first expression characteristic information (1,1,1,1,0,0,0,0) and the second expression characteristic information (0,1,1,1,0,0,0,1) corresponding to the reference image.
Therefore, the first expression characteristic information of the first image and the second expression characteristic information of the reference image are respectively extracted, so that the expression similarity between the first image and the reference image can be conveniently determined.
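A minimal sketch of computing one such similarity, assuming the two pieces of feature information are compared as equal-length binary vectors; the matching-rate metric below is an illustrative choice, since the text does not fix a specific metric at this point:

```python
import numpy as np

def expression_similarity(feat_a, feat_b) -> float:
    """Fraction of matching feature description values between two binary descriptors."""
    a = np.asarray(feat_a, dtype=np.uint8).ravel()
    b = np.asarray(feat_b, dtype=np.uint8).ravel()
    return float(np.mean(a == b))

# With the descriptors quoted above:
# expression_similarity([1,1,1,1,0,0,0,0], [0,1,1,1,0,0,0,1])  # -> 0.75
```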
Fig. 2 shows the structure and orientation of the Kirsch operator, an image processing filter commonly used for edge detection to find edges and contours in images. M0 and M4 detect vertical edges in the neighborhood, M2 and M6 detect horizontal edges in the neighborhood, and M1/M5 and M3/M7 detect the two diagonal edge directions, respectively.
The step of extracting the first expression feature information of the plurality of first images may be implemented based on the Kirsch operator. The convolution kernels of the Kirsch operator, and how they detect edges, are as follows:
1. M0 (0-degree direction) and M4 (180-degree direction):
M0 convolution kernel: [-3, -3, -3; 0, 0, 0; 3, 3, 3];
M4 convolution kernel: [3, 3, 3; 0, 0, 0; -3, -3, -3];
These two convolution kernels are used to detect edges in the vertical direction. They emphasize the brightness variation in the vertical direction by applying convolution operations to the image, thereby detecting vertical edges.
2. M2 (90-degree direction) and M6 (270-degree direction):
M2 convolution kernel: [-3, 0, 3; -3, 0, 3; -3, 0, 3];
M6 convolution kernel: [3, 0, -3; 3, 0, -3; 3, 0, -3];
These two convolution kernels are used to detect edges in the horizontal direction. They emphasize the brightness variation in the horizontal direction by applying convolution operations to the image, thereby detecting horizontal edges.
3. M1 (45-degree direction) and M5 (225-degree direction):
M1 convolution kernel: [-3, -3, 0; -3, 0, 3; 0, 3, 3];
M5 convolution kernel: [0, 3, 3; -3, 0, 3; -3, -3, 0];
These two convolution kernels are used to detect edges in the 45-degree and 225-degree directions. They emphasize the brightness variation in the diagonal direction by applying convolution operations to the image, thereby detecting diagonal edges.
The Kirsch operator can detect edge and contour information in various directions in an image by applying these different directions of convolution kernels to the image. Finally, the intensity and direction of the edges can be determined from these convolution results.
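A minimal sketch of computing the eight directional responses with such masks, assuming NumPy/SciPy; M0, M1 and M2 follow the listing above, the opposite directions are taken as their sign-inverted counterparts, and the form of the remaining diagonal pair (M3/M7), which is not spelled out here, is an assumption:

```python
import numpy as np
from scipy.signal import convolve2d

M0 = np.array([[-3, -3, -3], [0, 0, 0], [3, 3, 3]])   # 0-degree direction
M1 = np.array([[-3, -3, 0], [-3, 0, 3], [0, 3, 3]])   # 45-degree direction
M2 = np.array([[-3, 0, 3], [-3, 0, 3], [-3, 0, 3]])   # 90-degree direction
M3 = np.array([[0, 3, 3], [-3, 0, 3], [-3, -3, 0]])   # 135-degree direction (assumed form)
MASKS = [M0, M1, M2, M3, -M0, -M1, -M2, -M3]          # M0..M7, opposite pairs sign-inverted

def edge_responses(gray):
    """Return the eight directional edge response maps Ri(x, y)."""
    gray = np.asarray(gray, dtype=np.float64)
    return np.stack([convolve2d(gray, m, mode="same", boundary="symm") for m in MASKS])
```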
By modifying and encoding the Kirsch operator, the intensity values of the edge responses of the portrait expression in the horizontal, vertical and diagonal directions of each picture are superimposed, the intensity values are normalized, and the first two salient edge directions are selected. Superposition effectively reduces the interference caused by random noise on the variation of the edge response values, and also reduces redundant information. The pixel intensities in the selected salient directions are then encoded on the gray-scale image, so as to obtain the portrait expression feature information.
In a possible embodiment, step 310 may specifically include the following steps:
Step 410, for each first image, determining edge feature values of the first image in a plurality of preset edge directions;
Step 420, calculating an edge threshold according to the edge feature value, wherein the edge threshold is used for distinguishing pixel categories in the first image, and the pixel categories include: a high frequency region and a flat region;
And 430, obtaining first expression feature information according to the edge response value and the edge threshold value, wherein the first expression feature information comprises a plurality of feature description values, and the number of the feature description values in the first expression feature information is the number of preset edge directions.
The Kirsch operator uses 8 templates to convolve and differentiate each pixel point of the image. The 8 templates represent 8 directions and give the maximum response to 8 specific edge directions in the image, and the maximum value is taken as the edge output of the image in this operation.
The plurality of preset edge directions are 8 directions, and the number of feature description values in the first expression feature information is 8.
In step 410, the following steps may be specifically included:
First, the edge response value Ri(x, y) of the portrait region is calculated by formula (1):
Ri(x, y) = I * Mi; 0 ≤ i ≤ 7 (1)
where I is a 3×3 neighborhood of the original image, Mi represents the Kirsch masks in the different directions, and the edge response values Ri(x, y) in the different directions are obtained by convolving I with the eight directional masks Mi.
The salient direction Dj(x, y) is selected by formula (2), in which argmax is used to obtain the position index i of the j-th largest value among the edge feature values Rsi(x, y) defined in formula (3) below:
Dj(x, y) = argmax^(j){ Rsi(x, y) | 0 ≤ i ≤ 3 } (2)
The sum Rsi(x, y) of the absolute values of the edge response values of the two opposite directions of the same orientation, that is, the edge feature value, is calculated by formula (3):
Rsi(x, y) = |Ri(x, y)| + |Ri+4(x, y)|; 0 ≤ i ≤ 3 (3)
where Rsi(x, y) in formula (3) is the edge feature value.
The step of calculating the edge threshold according to the edge characteristic value may specifically include the following steps:
After the position index is obtained, Rsi(x, y) is normalized by formula (4): the original data are mapped to the [0, 1] interval through a linear transformation to obtain the normalized value Nori(x, y):
Nori(x, y) = (Rsi(x, y) - min(Rs)) / (max(Rs) - min(Rs)) (4)
The automatic threshold θ is then set. After the image is convolved with the Kirsch masks, the intensity of the response values is larger in the edge regions of the image and smaller in the flat regions.
The main information for determining the facial expression category is concentrated in edge-rich regions such as the eyes, nose and mouth, whereas the feature information in the flat regions of the face is not sufficiently discriminative and is therefore less important. The edge regions and the flat regions can be distinguished by setting an edge threshold θ according to the Rsi(x, y) values in the first salient direction by formula (5),
where the edge threshold is θ and C is the set of bin values of the histogram of oriented gradients (Histogram of Oriented Gradient, HOG) of Nori(x, y).
For any preset edge direction, the feature description value corresponding to any preset edge direction is 1 when the edge feature value is greater than the edge threshold value, and is 0 when the edge feature value is less than or equal to the edge threshold value.
That is, if Rs i (x, y) is greater than θ, the corresponding feature description value of the edge direction is 1;
If Rs i (x, y) is less than or equal to θ, the feature description value corresponding to the edge direction is 0; therefore, the feature description values corresponding to 8 preset edge directions are obtained, and first expression feature information is obtained based on the feature description values corresponding to the 8 preset edge directions, wherein the first expression feature information comprises 8 feature description values.
Here, the edge threshold is used to distinguish the pixel point categories in the first image, namely the high-frequency region and the flat region, so the high-frequency region and the flat region in the first image can be characterized according to the edge response values and the edge threshold. The high-frequency region can represent areas such as the eyes, eyebrows, nose and mouth, and can therefore also represent the expression of the portrait. In this way, the first expression feature information capable of representing the portrait expression can be obtained rapidly and accurately.
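A minimal per-pixel sketch of steps 410 to 430 under these definitions; because formula (5) is not reproduced here, the threshold θ is passed in as a parameter, and the mapping of the pairwise values Rsi onto the 8 feature description values is one possible reading:

```python
import numpy as np

def pixel_descriptor(responses, theta):
    """responses: the 8 directional edge response values R0..R7 at one pixel."""
    r = np.abs(np.asarray(responses, dtype=np.float64))
    rs = r[:4] + r[4:]                                   # Rsi = |Ri| + |Ri+4|, formula (3)
    span = rs.max() - rs.min()
    nor = (rs - rs.min()) / span if span > 0 else np.zeros_like(rs)   # formula (4)
    bits = (nor > theta).astype(np.uint8)                # 1 = high-frequency, 0 = flat
    return np.concatenate([bits, bits])                  # same value for both directions of a pair
```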
And 130, determining a first target image from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value of the plurality of expression similarities.
Optionally, each first image may be compared with a plurality of reference images to obtain a plurality of similarities, and the maximum value among these similarities is determined as the expression similarity of that first image. The expression categories corresponding to the plurality of reference images are consistent.
Then, the above steps are performed for each first image, so each first image corresponds to one expression similarity.
The facial expression feature information of the person may be identified through a target support vector machine, so as to obtain the first target image corresponding to the expression category of the person in the continuously shot pictures. The target support vector machine is a support vector machine trained on a reference-image annotation training set, that is, a training set in which portrait expression feature information has been extracted and annotated. The first expression feature information of the continuously shot pictures is compared with the second expression feature information corresponding to the reference image, and the first image with the highest similarity is selected as the first target image.
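A minimal sketch of such a target support vector machine, assuming scikit-learn's SVC as a stand-in (the text names neither a library nor a kernel), trained on annotated portrait expression feature vectors:

```python
import numpy as np
from sklearn.svm import SVC

def train_expression_svm(train_features: np.ndarray, train_labels: np.ndarray) -> SVC:
    """Train the target support vector machine on an annotated reference set."""
    clf = SVC(kernel="rbf")                # kernel choice is illustrative, not from the text
    clf.fit(train_features, train_labels)
    return clf

def classify_expression(clf: SVC, frame_feature: np.ndarray):
    """Predict the expression category of one burst frame from its feature vector."""
    return clf.predict(np.asarray(frame_feature).reshape(1, -1))[0]
```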
As shown in fig. 3, expression categories may include a smile category 310 and a sad category 320; and if the expression category of the first image is smile, the smile effect in the first target image is the best image in the plurality of first images.
The maximum value is determined from the plurality of expression similarities. The portrait expression in the first image corresponding to the maximum value is the closest to the portrait expression in the reference image, that is, it is the first image with the most natural and relaxed expression. By determining the first image corresponding to the maximum value as the first target image, the first target image with the most natural and relaxed expression can be rapidly determined from the plurality of continuously shot first images, helping the user rapidly identify a satisfactory image.
Step 140, performing brightness compensation processing on the static area in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image other than the first target image among the plurality of first images. Brightness compensation processing is performed on the static area in the first target image by using at least one second image with a different exposure to obtain the second target image; this can improve the exposure detail and dynamic range of the image to produce a higher-quality second target image.
In a possible embodiment, in step 140, the following steps may be specifically included:
weighting and fusing at least one second image to obtain a third image;
Identifying a static area in the first target image, wherein the static area is an area except a portrait area in the first target image;
and carrying out brightness compensation processing on the static area in the first target image according to the third image to obtain a second target image.
The step of weighting and fusing the at least one second image to obtain a third image may further include the following steps:
and performing image alignment processing on the first target image and the at least one second image to obtain the aligned first target image and at least one second image, so as to facilitate subsequent processing based on the aligned images.
Since there may be a minute movement or shake of the different images, image alignment processing needs to be performed on the first target image and at least one second image so that the subsequent processing can be accurately performed.
Image alignment is a critical step in ensuring that pixel correspondence between different exposure images is correct. Image alignment may be achieved using feature matching and image registration techniques. Through feature point matching, scale-invariant feature transform (SIFT) or SURF algorithms can be used to find the corresponding points between the two images, which are then aligned using transforms. This can be expressed as the following equation (6):
I_aligned = warp(I,H) (6)
where i_aligned is the aligned image, I is the original image, and H is the transform matrix.
The SIFT is used for detecting and describing local features in the image, searching extreme points in a spatial scale, and extracting position, scale and rotation invariants of the extreme points. The key points found by SIFT are some points which are very prominent and cannot be changed due to factors such as illumination, affine transformation and noise, such as corner points, edge points, bright spots of dark areas, dark spots of bright areas and the like, and the tolerance to light, noise and slight viewing angle changes is quite high.
The SIFT algorithm records the gradient directions near each extreme point and takes one main gradient direction as a reference so that the features are rotation-invariant. Gray-level images with different degrees of blur are subtracted from each other to find extreme values, the real key points are screened out and determined, the main direction of each key point is then obtained from the gradients and weights, and 8-direction vectors are determined for the 16 blocks near each key point. Through these feature descriptors, the corresponding feature points of the two images can be found, and the image transformation matrix is determined from the plurality of corresponding feature points. That is, SIFT is a scale-space-based local image feature description operator that remains invariant to image scaling, rotation and even affine transformation.
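A minimal sketch of this alignment, assuming OpenCV's SIFT implementation, a brute-force matcher with Lowe's ratio test and a RANSAC homography; these library and parameter choices are illustrative rather than requirements of the text:

```python
import cv2
import numpy as np

def align_to_target(target_gray: np.ndarray, source_gray: np.ndarray) -> np.ndarray:
    """Compute I_aligned = warp(I, H) of formula (6), aligning `source_gray` to `target_gray`."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(target_gray, None)
    kp_s, des_s = sift.detectAndCompute(source_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_s, des_t, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

    src_pts = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)       # transform matrix H

    h, w = target_gray.shape[:2]
    return cv2.warpPerspective(source_gray, H, (w, h))
```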
The step of obtaining a third image by weighting and fusing at least one second image may specifically include: and carrying out weighted fusion on the static areas in at least one second image to obtain a third image.
Therefore, the static areas of the second images with different exposure degrees are weighted and fused together, and a third image with better effect is generated.
The step of performing brightness compensation processing on the static area in the first target image according to the third image to obtain the second target image may specifically include:
The static region of the third image is merged with the person region of the first target image to generate an image having a wider dynamic range. A mask may be used to identify the person region and the static region, and the two are then combined. This can be expressed as:
Final(x, y) = F(x, y), if the pixel is in the static region; Original(x, y), if the pixel is in the person region;
where Final(x, y) is the pixel value of the second target image, F(x, y) is the pixel value of the third image, and Original(x, y) is the pixel value in the first target image.
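A minimal sketch of this mask-based merge, assuming a boolean person-region mask is already available:

```python
import numpy as np

def merge_static_and_person(fused, original, person_mask):
    """Final = Original inside the person region, F (the fused third image) elsewhere."""
    mask = np.asarray(person_mask, dtype=bool)
    if np.ndim(fused) == 3:                 # broadcast a 2-D mask over colour channels
        mask = mask[..., None]
    return np.where(mask, original, fused)
```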
The step of performing brightness compensation processing on the static area in the first target image according to the third image to obtain the second target image may further include the following steps:
and performing tone mapping processing on the second target image to obtain a tone-mapped second target image.
Due to display device limitations, it is often desirable to map high dynamic range imaging (High Dynamic Range Imaging, HDR) images into a standard display range for viewing on a terminal.
Tone mapping generally involves mapping high dynamic range pixel values into a limited range to accommodate the display capabilities of the screen. The mapping function may take the form of gamma correction:
Output(x,y)=Input(x,y)^γ
Output(x, y) is the mapped pixel value, Input(x, y) is the pixel value of the input (HDR) image, and γ is the gamma value, which controls the curve shape of the mapping. For pixel values normalized to [0, 1], a gamma less than 1 brightens the image and enhances dark detail, while a gamma greater than 1 darkens the image and suppresses dark detail while increasing contrast in bright regions.
The gamma correction mapping function is applied by exponentiating each pixel value of the input image and taking the result as the pixel value of the output image. Through this step, the brightness range of the image can be mapped into a range suitable for the display device, ensuring that the image has an appropriate visual effect on the screen.
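A minimal sketch of this gamma-based tone mapping, assuming the input is first normalized to [0, 1]; the default gamma is only illustrative:

```python
import numpy as np

def tone_map_gamma(hdr, gamma: float = 0.7):
    """Output = Input ** gamma on an image normalized to [0, 1]; gamma < 1 lifts dark detail."""
    hdr = np.clip(np.asarray(hdr, dtype=np.float64), 0.0, None)
    normalized = hdr / hdr.max() if hdr.max() > 0 else hdr
    return np.power(normalized, gamma)
```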
This step uses a plurality of second images with different exposures, combines them into a third image with a wider dynamic range, then combines the static region of the third image with the person region of the first target image, and finally tone-maps the generated second target image to adapt to the limitations of the display device. In this way, the exposure detail and dynamic range of the image can be improved to produce a higher-quality result.
The step of performing weighted fusion on at least one second image to obtain a third image may specifically include the following steps:
respectively determining a weight image corresponding to each second image; each pixel point in the weight image corresponds to a weight value, and the weight value is determined according to the brightness value of the corresponding pixel point in the second image;
and carrying out weighted fusion on at least one second image based on the weight image to obtain a third image.
For each region, the most appropriate image is selected among the differently exposed images to preserve detail. In general, higher brightness images may preserve dark detail and lower brightness images may preserve bright detail.
Exposure fusion is the merging of information in differently exposed images to achieve a wider dynamic range. For each pixel location (x, y), the following equation (7) can be used for exposure fusion:
F(x, y) = w1 * I1(x, y) + w2 * I2(x, y) + ... + wn * In(x, y) (7)
where F(x, y) is the pixel value in the third image, I1, I2, ..., In are the pixel values of the second images at different exposure levels, and w1, w2, ..., wn are the weight values, which are typically determined according to the brightness at each pixel location so as to preserve details in the image.
For example, for any pixel in one second image, the weight value is determined according to the brightness value of the pixel, so the weight values corresponding to all pixels in the second image form the weight image of the second image.
For the pixel points at the same position in the at least one second image, the pixel value F(x, y) of that position in the third image is obtained from the pixel value I1(x, y) and weight value w1 of the pixel at that position in the first second image, the pixel value I2(x, y) and weight value w2 of the pixel at that position in the second second image, ..., and the pixel value In(x, y) and weight value wn of the pixel at that position in the n-th second image. The pixel values at the other positions in the third image are obtained in the same way, thereby obtaining the third image.
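A minimal sketch of this weighted exposure fusion; the Gaussian well-exposedness weight is an illustrative choice, since the text only states that the weights are determined from the brightness at each pixel location:

```python
import numpy as np

def exposure_fuse(images):
    """Per-pixel weighted fusion F = w1*I1 + w2*I2 + ... + wn*In of formula (7)."""
    imgs = [np.asarray(img, dtype=np.float64) for img in images]
    weights = []
    for img in imgs:
        luma = img.mean(axis=-1) if img.ndim == 3 else img                # rough brightness
        weights.append(np.exp(-0.5 * ((luma / 255.0 - 0.5) / 0.2) ** 2))  # favour mid-exposure
    w_sum = np.sum(weights, axis=0) + 1e-12
    fused = np.zeros_like(imgs[0])
    for img, w in zip(imgs, weights):
        wn = w / w_sum
        fused += img * (wn[..., None] if img.ndim == 3 else wn)
    return fused
```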
And step 150, performing edge compensation processing on the second target image to obtain a third target image.
In a possible embodiment, in step 150, the following steps may be specifically included:
performing Poisson editing on the second target image to obtain a fourth image;
converting the fourth image to obtain a fifth image, wherein, for the same pixel point, if the corresponding pixel value in the fourth image is 1, the corresponding pixel value in the fifth image is 0, and if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1;
And carrying out weighted fusion processing on the fourth image and the fifth image to obtain a third target image.
Poisson editing is performed on the second target image to obtain the fourth image.
Specifically, Poisson editing is an image editing technique for seamlessly incorporating certain regions of a source image into a target image. It can be expressed as the following formula (8):
for each target pixel P (x, y):
P(x, y) = Laplacian(Source(x, y)) + Target(x, y) (8)
where Laplacian(Source(x, y)) represents the Laplacian of the second target image.
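A minimal sketch that mirrors the simplified per-pixel form of formula (8), assuming OpenCV's Laplacian; full Poisson editing solves a Poisson equation over the blended region (for example via cv2.seamlessClone), which this sketch does not do:

```python
import cv2
import numpy as np

def poisson_edit_simplified(source, target):
    """P(x, y) = Laplacian(Source(x, y)) + Target(x, y), as written in formula (8)."""
    lap = cv2.Laplacian(np.asarray(source, dtype=np.float64), cv2.CV_64F)
    edited = lap + np.asarray(target, dtype=np.float64)
    return np.clip(edited, 0, 255).astype(np.uint8)
```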
The fourth image is converted, that is, binary image inversion is performed, to obtain the fifth image.
The binary image inversion process changes the white areas to black and the black areas to white; thus, for the same pixel point, if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1.
And carrying out weighted fusion processing on the fourth image and the fifth image to obtain a third target image.
The fourth image and the fifth image are subjected to weighted fusion processing; specifically, the third target image is generated by using an alpha blending operation, thereby solving the problem of color leakage.
Alpha blending operation may be achieved by the following formula (9):
Blended(x, y) = (1 - alpha) * Image1(x, y) + alpha * Image2(x, y) (9)
where Blended(x, y) is the pixel value of the third target image, Image1(x, y) and Image2(x, y) are the pixel values of the fourth image and the fifth image, respectively, and alpha is the blending weight, which can be adjusted as needed to obtain the desired blending effect.
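A minimal sketch of this alpha blending for 8-bit images:

```python
import numpy as np

def alpha_blend(image1, image2, alpha: float = 0.5):
    """Blended = (1 - alpha) * Image1 + alpha * Image2, formula (9)."""
    blended = (1.0 - alpha) * np.asarray(image1, np.float64) + alpha * np.asarray(image2, np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)
```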
Weighted fusion processing is performed on the fourth image and the fifth image to obtain the third target image, which balances the dependence of the pixels near the boundary on the at least one second image and the first target image, thereby eliminating the artifact problem.
In the embodiment of the application, a plurality of first images are acquired, wherein the plurality of first images are images obtained by continuous shooting; the expression similarity between each first image and a reference image is determined; and a first target image is determined from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value among the plurality of expression similarities. The portrait expression in the first image corresponding to the maximum value is the closest to the portrait expression in the reference image, that is, it is the first image with the most natural and relaxed expression. By determining the first image corresponding to the maximum value as the first target image, the first target image with the most natural and relaxed expression can be rapidly determined from the plurality of continuously shot first images, helping the user rapidly identify a satisfactory image with the specific expression. Brightness compensation processing is then performed on the static area in the first target image according to at least one second image to obtain a second target image, where the second image is an image other than the first target image among the plurality of first images; this improves the exposure detail of the first target image and produces a higher-quality second target image. Finally, edge compensation processing is performed on the second target image to obtain a third target image, which balances the dependence of the pixel points near the boundary in the second target image on the at least one second image and the first target image, thereby eliminating the artifact problem and obtaining a third target image with a natural and relaxed expression, good exposure and natural edges.
According to the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, an image processing apparatus is described by taking an example of an image processing method performed by the image processing apparatus.
Fig. 4 is a block diagram of an image processing apparatus according to an embodiment of the present application, the apparatus 400 including:
An acquiring module 410, configured to acquire a plurality of first images, where the plurality of first images are images obtained by continuous shooting;
A first determining module 420, configured to determine expression similarities between each of the first images and the reference image, so as to obtain a plurality of expression similarities;
A second determining module 430, configured to determine a first target image from the plurality of first images according to a plurality of expression similarities, where the expression similarity corresponding to the first target image is a maximum value of the plurality of expression similarities;
The first compensation module 440 is configured to perform brightness compensation processing on the static area in the first target image according to at least one second image, so as to obtain a second target image, where the second image is an image other than the first target image in the plurality of first images;
and a second compensation module 450, configured to perform edge compensation processing on the second target image, so as to obtain a third target image.
In a possible embodiment, the first determining module 420 is specifically configured to:
Extracting expression characteristic information of the plurality of first images respectively to obtain a plurality of first expression characteristic information;
Determining second expression characteristic information corresponding to the reference image;
and determining the expression similarity according to the first expression characteristic information and the second expression characteristic information.
In a possible embodiment, the first determining module 420 is specifically configured to:
for each first image, determining edge characteristic values of the first image in a plurality of preset edge directions;
Calculating an edge threshold according to the edge characteristic value, wherein the edge threshold is used for distinguishing pixel point categories in the first image, and the pixel point categories comprise: a high frequency region and a flat region;
Obtaining first expression feature information according to the edge response value and the edge threshold value, wherein the first expression feature information comprises a plurality of feature description values, and the number of the feature description values in the first expression feature information is the number of the preset edge directions;
For any preset direction, the feature description value corresponding to any preset direction is 1 when the edge feature value is greater than the edge threshold value, and is 0 when the edge feature value is less than or equal to the edge threshold value.
In one possible embodiment, the first compensation module 440 is specifically configured to:
weighting and fusing the at least one second image to obtain a third image;
identifying a static area in the first target image, wherein the static area is an area except a portrait area in the first target image;
and carrying out brightness compensation processing on the static area in the first target image according to the third image to obtain the second target image.
In one possible embodiment, the second compensation module 450 is specifically configured to:
Performing Poisson editing on the second target image to obtain a fourth image;
Converting the fourth image to obtain a fifth image, wherein, for the same pixel point, if the corresponding pixel value in the fourth image is 1, the corresponding pixel value in the fifth image is 0, and if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1;
and carrying out weighted fusion processing on the fourth image and the fifth image to obtain the third target image.
In the embodiment of the application, a plurality of first images are acquired, wherein the plurality of first images are images obtained by continuous shooting; the expression similarity between each first image and a reference image is determined; and a first target image is determined from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value among the plurality of expression similarities. The portrait expression in the first image corresponding to the maximum value is the closest to the portrait expression in the reference image, that is, it is the first image with the most natural and relaxed expression. By determining the first image corresponding to the maximum value as the first target image, the first target image with the most natural and relaxed expression can be rapidly determined from the plurality of continuously shot first images, helping the user rapidly identify a satisfactory image with the specific expression. Brightness compensation processing is then performed on the static area in the first target image according to at least one second image to obtain a second target image, where the second image is an image other than the first target image among the plurality of first images; this improves the exposure detail of the first target image and produces a higher-quality second target image. Finally, edge compensation processing is performed on the second target image to obtain a third target image, which balances the dependence of the pixel points near the boundary in the second target image on the at least one second image and the first target image, thereby eliminating the artifact problem and obtaining a third target image with a natural and relaxed expression, good exposure and natural edges.
The image processing device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook or a personal digital assistant (Personal Digital Assistant, PDA), etc., and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (Personal Computer, PC), a television (TV), a teller machine, a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
The image processing apparatus according to the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the application.
The image processing device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 5, the embodiment of the present application further provides an electronic device 510, including a processor 511, a memory 512, and a program or an instruction stored in the memory 512 and capable of being executed on the processor 511, where the program or the instruction implements each step of any one of the above embodiments of the image processing method when executed by the processor 511, and the steps achieve the same technical effects, and for avoiding repetition, a description is omitted herein.
The electronic device of the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 610 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The network module 602 is configured to acquire a plurality of first images, where the plurality of first images are images obtained by continuous shooting;
a processor 610, configured to determine an expression similarity between each of the first images and the reference image, so as to obtain a plurality of expression similarities;
the processor 610 is further configured to determine a first target image from the plurality of first images according to a plurality of expression similarities, where the expression similarity corresponding to the first target image is a maximum value of the plurality of expression similarities;
The processor 610 is further configured to perform brightness compensation processing on the static area in the first target image according to at least one second image, so as to obtain a second target image, where the second image is an image other than the first target image in the plurality of first images;
the processor 610 is further configured to perform edge compensation processing on the second target image to obtain a third target image.
In a possible embodiment, the processor 610 is further configured to extract expression feature information of the plurality of first images, to obtain a plurality of first expression feature information;
the processor 610 is further configured to determine second expression feature information corresponding to the reference image;
The processor 610 is further configured to determine the plurality of expression similarities according to the plurality of first expression feature information and the second expression feature information.
In a possible embodiment, the processor 610 is further configured to determine, for each of the first images, an edge feature value of the first image in a plurality of preset edge directions;
The processor 610 is further configured to calculate an edge threshold according to the edge feature value, where the edge threshold is used to distinguish a pixel class in the first image, and the pixel class includes: a high frequency region and a flat region;
The processor 610 is further configured to obtain first expression feature information according to the edge response value and the edge threshold value, where the first expression feature information includes a plurality of feature description values, and the number of feature description values in the first expression feature information is the number of the preset edge directions;
For any preset direction, the feature description value corresponding to any preset direction is 1 when the edge feature value is greater than the edge threshold value, and is 0 when the edge feature value is less than or equal to the edge threshold value.
In a possible embodiment, the processor 610 is further configured to perform weighted fusion on the at least one second image to obtain a third image;
The processor 610 is further configured to identify a static area in the first target image, where the static area is an area in the first target image other than a portrait area;
The processor 610 is further configured to perform brightness compensation processing on a static area in the first target image according to the third image, so as to obtain the second target image.
In a possible embodiment, the processor 610 is further configured to perform Poisson editing on the second target image to obtain a fourth image;
The processor 610 is further configured to perform a conversion process on the fourth image to obtain a fifth image, wherein, for the same pixel point, if the corresponding pixel value in the fourth image is 1, the corresponding pixel value in the fifth image is 0, and if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1;
the processor 610 is further configured to perform weighted fusion processing on the fourth image and the fifth image, so as to obtain the third target image.
In the embodiment of the application, a plurality of first images are acquired, wherein the plurality of first images are images obtained by continuous shooting; the expression similarity between each first image and a reference image is determined; and a first target image is determined from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value among the plurality of expression similarities. The portrait expression in the first image corresponding to the maximum value is the closest to the portrait expression in the reference image, that is, it is the first image with the most natural and relaxed expression. By determining the first image corresponding to the maximum value as the first target image, the first target image with the most natural and relaxed expression can be rapidly determined from the plurality of continuously shot first images, helping the user rapidly identify a satisfactory image. Brightness compensation processing is then performed on the static area in the first target image according to at least one second image to obtain a second target image, where the second image is an image other than the first target image among the plurality of first images; this improves the exposure detail of the first target image and produces a higher-quality second target image. Finally, edge compensation processing is performed on the second target image to obtain a third target image, which balances the dependence of the pixel points near the boundary in the second target image on the at least one second image and the first target image, thereby eliminating the artifact problem and obtaining a third target image with a natural expression, good exposure and natural edges.
It should be appreciated that, in embodiments of the present application, the input unit 604 may include a graphics processing unit (Graphics Processing Unit, GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of still pictures or videos obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. The touch panel 6071 is also referred to as a touch screen. The touch panel 6071 may include two parts: a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 610 may integrate an application processor, which primarily handles the operating system, user interface, applications, etc., and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 609 may include volatile memory or nonvolatile memory, or the memory 609 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) or a direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
An embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored; when the program or instruction is executed by a processor, the processes of the above image processing method embodiments are implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the application further provides a chip, which comprises a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run programs or instructions to implement the processes of the above image processing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, a system-on-a-chip, or the like.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described image processing method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting;
determining an expression similarity between each first image and a reference image to obtain a plurality of expression similarities;
determining a first target image from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value of the plurality of expression similarities;
performing brightness compensation processing on the static region in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image except the first target image in the plurality of first images;
and performing edge compensation processing on the second target image to obtain a third target image.
2. The method of claim 1, wherein determining the expression similarity between each of the first images and the reference image to obtain a plurality of expression similarities comprises:
extracting expression characteristic information of the plurality of first images respectively to obtain a plurality of first expression characteristic information;
determining second expression characteristic information corresponding to the reference image;
and determining the expression similarity according to the first expression characteristic information and the second expression characteristic information.
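As a small illustrative sketch of the comparison in claim 2, assuming the expression characteristic information takes the binary form described in claim 3: the disclosure does not fix a similarity metric, so the normalized agreement between the two descriptors used below is purely an assumption.

```python
import numpy as np

def expression_similarity(first_feature_info, second_feature_info):
    """Fraction of matching feature description values between two descriptors."""
    a = np.asarray(first_feature_info, dtype=np.uint8)
    b = np.asarray(second_feature_info, dtype=np.uint8)
    # Higher value = portrait expression closer to the reference image
    return float(np.mean(a == b))
```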
3. The method according to claim 2, wherein extracting the expression feature information of the plurality of first images respectively to obtain a plurality of first expression feature information includes:
for each first image, determining edge characteristic values of the first image in a plurality of preset edge directions;
calculating an edge threshold according to the edge characteristic values, wherein the edge threshold is used for distinguishing pixel point categories in the first image, the pixel point categories comprising a high-frequency region and a flat region;
obtaining the first expression feature information according to the edge characteristic value and the edge threshold, wherein the first expression feature information comprises a plurality of feature description values, and the number of feature description values in the first expression feature information is the number of the preset edge directions;
wherein, for any preset edge direction, the corresponding feature description value is 1 when the edge characteristic value is greater than the edge threshold, and is 0 when the edge characteristic value is less than or equal to the edge threshold.
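A hedged sketch of claim 3 in Python/OpenCV follows. The directional kernels, the use of the mean absolute filter response as the edge characteristic value, and the mean-based edge threshold are all assumptions; the claim does not specify these operators.

```python
import cv2
import numpy as np

# Assumed kernels for four preset edge directions (horizontal, vertical, two diagonals)
KERNELS = [
    np.array([[-1, 0, 1]], dtype=np.float32),        # horizontal
    np.array([[-1], [0], [1]], dtype=np.float32),    # vertical
    np.array([[-1, 0], [0, 1]], dtype=np.float32),   # diagonal
    np.array([[0, -1], [1, 0]], dtype=np.float32),   # anti-diagonal
]

def first_expression_feature_info(face_gray):
    """One binary feature description value per preset edge direction."""
    img = face_gray.astype(np.float32)
    # Edge characteristic value per direction: mean absolute filter response (assumed)
    edge_values = np.array([np.abs(cv2.filter2D(img, -1, k)).mean() for k in KERNELS])
    # Edge threshold separating high-frequency from flat responses (assumed rule)
    edge_threshold = edge_values.mean()
    # Feature description value: 1 if above the threshold, 0 otherwise
    return (edge_values > edge_threshold).astype(np.uint8)
```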
4. The method according to claim 1, wherein the performing brightness compensation processing on the static area in the first target image according to at least one second image to obtain a second target image includes:
weighting and fusing the at least one second image to obtain a third image;
identifying a static area in the first target image, wherein the static area is an area except a portrait area in the first target image;
and carrying out brightness compensation processing on the static area in the first target image according to the third image to obtain the second target image.
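One possible reading of claim 4 as code is sketched below. The equal fusion weights, the blending factor `alpha`, and the `static_mask` input (1 in the static area outside the portrait, 0 inside) are assumptions not stated in the claim.

```python
import numpy as np

def brightness_compensate_static_area(first_target, second_images, static_mask, alpha=0.5):
    """Fuse the second images and lift the brightness of the static area."""
    # Third image: weighted (here equal-weight) fusion of the at least one second image
    third_image = np.mean(np.stack(second_images).astype(np.float32), axis=0)
    target = first_target.astype(np.float32)
    mask = static_mask.astype(np.float32)
    if mask.ndim == 2 and target.ndim == 3:
        mask = mask[..., None]                      # broadcast over color channels
    # Blend the static area toward the fused frame; the portrait area is untouched
    second_target = target * (1.0 - alpha * mask) + third_image * (alpha * mask)
    return np.clip(second_target, 0, 255).astype(np.uint8)
```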
5. The method of claim 1, wherein performing edge compensation processing on the second target image to obtain a third target image comprises:
performing Poisson editing on the second target image to obtain a fourth image;
converting the fourth image to obtain a fifth image, wherein, for a same pixel point, if the corresponding pixel value in the fourth image is 1, the corresponding pixel value in the fifth image is 0, and if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1;
and carrying out weighted fusion processing on the fourth image and the fifth image to obtain the third target image.
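The following is one hedged interpretation of claim 5: the Poisson editing step is read as solving a Laplace (homogeneous Poisson) problem for a blend-weight map (the "fourth image"), whose 0/1 inversion gives the "fifth image", after which the two complementary weights drive the final fusion. The inputs `third_image` (the fused second images) and `portrait_mask` are assumed, and this is only an illustrative reading, not the disclosed implementation.

```python
import numpy as np

def edge_compensate(second_target, third_image, portrait_mask, iters=200):
    """Soften the portrait/background boundary with complementary blend weights."""
    fourth = portrait_mask.astype(np.float32).copy()   # 1 inside the portrait area
    inside = portrait_mask > 0
    for _ in range(iters):
        # Jacobi step of the Laplace equation: each pixel moves toward its neighbours' mean
        avg = 0.25 * (np.roll(fourth, 1, 0) + np.roll(fourth, -1, 0) +
                      np.roll(fourth, 1, 1) + np.roll(fourth, -1, 1))
        fourth = np.where(inside, 1.0, avg)             # portrait pixels stay at 1
        fourth[0, :] = fourth[-1, :] = fourth[:, 0] = fourth[:, -1] = 0.0  # image border stays at 0
    fifth = 1.0 - fourth                                 # 1 <-> 0 inversion of the weight map
    a = second_target.astype(np.float32)
    b = third_image.astype(np.float32)
    # Weighted fusion: pixels near the boundary depend on both sources, reducing artifacts
    third_target = a * fourth[..., None] + b * fifth[..., None]
    return np.clip(third_target, 0, 255).astype(np.uint8)
```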
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a plurality of first images, wherein the plurality of first images are images obtained by continuous shooting;
the first determining module is used for determining the expression similarity between each first image and the reference image to obtain a plurality of expression similarities;
the second determining module is used for determining a first target image from the plurality of first images according to the plurality of expression similarities, wherein the expression similarity corresponding to the first target image is the maximum value of the plurality of expression similarities;
the first compensation module is used for carrying out brightness compensation processing on the static area in the first target image according to at least one second image to obtain a second target image, wherein the second image is an image except the first target image in the plurality of first images;
and the second compensation module is used for carrying out edge compensation processing on the second target image to obtain a third target image.
7. The apparatus of claim 6, wherein the first determining module is specifically configured to:
extracting expression characteristic information of the plurality of first images respectively to obtain a plurality of first expression characteristic information;
determining second expression characteristic information corresponding to the reference image;
and determining the expression similarity according to the first expression characteristic information and the second expression characteristic information.
8. The apparatus of claim 7, wherein the first determining module is specifically configured to:
for each first image, determining edge characteristic values of the first image in a plurality of preset edge directions;
calculating an edge threshold according to the edge characteristic values, wherein the edge threshold is used for distinguishing pixel point categories in the first image, the pixel point categories comprising a high-frequency region and a flat region;
obtaining the first expression feature information according to the edge characteristic value and the edge threshold, wherein the first expression feature information comprises a plurality of feature description values, and the number of feature description values in the first expression feature information is the number of the preset edge directions;
wherein, for any preset edge direction, the corresponding feature description value is 1 when the edge characteristic value is greater than the edge threshold, and is 0 when the edge characteristic value is less than or equal to the edge threshold.
9. The apparatus according to claim 6, wherein the first compensation module is specifically configured to:
weighting and fusing the at least one second image to obtain a third image;
identifying a static area in the first target image, wherein the static area is an area except a portrait area in the first target image;
and carrying out brightness compensation processing on the static area in the first target image according to the third image to obtain the second target image.
10. The apparatus according to claim 6, wherein the second compensation module is specifically configured to:
performing Poisson editing on the second target image to obtain a fourth image;
converting the fourth image to obtain a fifth image, wherein, for a same pixel point, if the corresponding pixel value in the fourth image is 1, the corresponding pixel value in the fifth image is 0, and if the corresponding pixel value in the fourth image is 0, the corresponding pixel value in the fifth image is 1;
and carrying out weighted fusion processing on the fourth image and the fifth image to obtain the third target image.
CN202311556320.0A 2023-11-20 2023-11-20 Image processing method and device Pending CN117974501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311556320.0A CN117974501A (en) 2023-11-20 2023-11-20 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311556320.0A CN117974501A (en) 2023-11-20 2023-11-20 Image processing method and device

Publications (1)

Publication Number Publication Date
CN117974501A true CN117974501A (en) 2024-05-03

Family

ID=90865124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311556320.0A Pending CN117974501A (en) 2023-11-20 2023-11-20 Image processing method and device

Country Status (1)

Country Link
CN (1) CN117974501A (en)


Legal Events

Date Code Title Description
PB01 Publication