CN112288665B - Image fusion method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112288665B
Authority
CN
China
Prior art keywords
pixel
processed
region
original image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011069359.6A
Other languages
Chinese (zh)
Other versions
CN112288665A (en)
Inventor
冯富森
闫嵩
李凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dami Technology Co Ltd
Original Assignee
Beijing Dami Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dami Technology Co Ltd filed Critical Beijing Dami Technology Co Ltd
Priority to CN202011069359.6A
Publication of CN112288665A
Application granted
Publication of CN112288665B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image fusion method and device, a storage medium, and electronic equipment, belonging to the technical field of image processing. The image fusion method comprises the following steps: performing face detection on an original image to obtain a first region to be processed; determining at least one key pixel point in the first region to be processed; determining and storing pixel position information of the at least one key pixel point in the original image; performing face transformation on the first region to be processed to obtain a second region to be processed; and fusing the second region to be processed with the original image based on the pixel position information to obtain a composite image. A pixel lookup table is obtained from the transformation relation between the first region to be processed and the face key point template; this lookup table makes the composite image more uniform, improves the accuracy and speed of image fusion, and reduces image jitter.

Description

Image fusion method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for image fusion, a storage medium, and an electronic device.
Background
At present, functions such as face beautification, stickers, hair changing, and face swapping in various photographing or image processing applications are popular with users. Face swapping, i.e., face image fusion, mainly fuses a user photo with a template photo, so that the fused image has the facial features of the user photo and the character style of the template photo (such as an ancient costume image, a military image, and the like). A face fusion algorithm must ensure that the picture obtained by cropping and rotating the face from the original picture, and then pasting the processed face back, remains consistent with the original picture. In the existing processing scheme, however, the image is first rotated and cropped and the cropped image is then aligned; the pipeline is long, and multiple rounding operations are performed in the process, so the restored image still jitters. The fused picture also easily produces uneven pixel values, which makes the face part of the composite image look abrupt and reduces the realism of the picture.
Disclosure of Invention
The embodiment of the application provides an image fusion method and device, a storage medium, and electronic equipment, which can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for image fusion, including:
Carrying out face detection on the original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
determining at least one key pixel point in the first region to be processed;
Determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
Performing face transformation processing on the first region to be processed to obtain a second region to be processed;
And fusing the second area to be processed and the original image based on the pixel position information to obtain a composite image.
In a second aspect, an embodiment of the present application provides an apparatus for image fusion, including:
the detection module is used for carrying out face detection on the original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
A determining module, configured to determine at least one key pixel point in the first area to be processed; determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
The transformation module is used for carrying out face transformation processing on the first region to be processed to obtain a second region to be processed;
and the fusion module is used for fusing the second area to be processed and the original image based on the pixel position information to obtain a synthetic image.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiments of the application has the beneficial effects that at least:
When the image fusion method, the device, the storage medium, and the electronic equipment of the embodiments work: face detection is performed on an original image to obtain a first region to be processed; at least one key pixel point is determined in the first region to be processed; pixel position information of the at least one key pixel point in the original image is determined and stored; face transformation is performed on the first region to be processed to obtain a second region to be processed; and the second region to be processed is fused with the original image based on the pixel position information to obtain a composite image. The embodiment of the application can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for image fusion according to an embodiment of the present application;
FIG. 2 is a schematic view of a first to-be-processed region location provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a face key point template according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a first region to be processed being converted into a second region to be processed according to an embodiment of the present application;
FIG. 5 is a schematic view of a composite image generation provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a pixel correspondence provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a nearest neighbor interpolation result according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a bilinear interpolation result according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a fusion process according to an embodiment of the present application;
FIG. 10 is another flow chart of a method for image fusion according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
Unless otherwise indicated, the same reference numbers in different drawings refer to the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the appended claims.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of these terms in the present application will be understood by those of ordinary skill in the art on a case-by-case basis. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The prior art requires rotating and cropping the image and then aligning the cropped image; the pipeline is long, the restored image jitters after multiple rounding operations, and the fused photo easily produces uneven pixel values, so the face part of the composite image looks abrupt and the realism of the photo is reduced. To solve these technical problems, the embodiment of the application provides an image fusion method that runs on a computer system. The computer system can be a computer device with a camera, such as a smart phone, a notebook computer, or a tablet computer.
In the following method embodiments, for convenience of description, the execution subject of each step is described simply as a computer.
The method for image fusion according to the embodiment of the present application will be described in detail with reference to fig. 1 to 10.
Referring to fig. 1, a flowchart of a method for image fusion is provided in an embodiment of the present application. The method may comprise the steps of:
S101, performing face detection on an original image to obtain a first area to be processed.
In general, an original image refers to an image to be processed that requires face transformation. Face detection means detecting feature pixel points on the image and judging, based on all the feature pixel points, whether some or all of them conform to face features; if so, a face exists in the original image, and if not, it is determined that no face exists in the original image. After performing face detection on the original image, the computer obtains a detection result file whose information includes the top-left vertex coordinates, length, and width of the face region, as well as a number of key point coordinates such as cheek, eyebrow, eye, mouth, and nose coordinates. The computer determines the first region to be processed according to the face region position coordinates; that is, as shown in fig. 2, after the face region is detected on the original image, the first region to be processed is determined based on the face region.
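As a concrete illustration of step S101 (a minimal sketch, not the patent's own implementation: the detector choice, padding ratio, and helper name are assumptions), the face region could be located with OpenCV's stock Haar cascade:

```python
import cv2

def detect_first_region(image_path, pad=0.2):
    """Locate a face box and expand it into a 'first region to be processed'."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Stock frontal-face Haar cascade shipped with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face exists in the original image
    x, y, w, h = faces[0]  # top-left vertex, width, height of the face region
    # Pad the box so the region also covers cheeks, eyebrows, chin, etc.
    dx, dy = int(w * pad), int(h * pad)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(img.shape[1], x + w + dx)
    y1 = min(img.shape[0], y + h + dy)
    return img[y0:y1, x0:x1], (x0, y0)
```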
S102, determining at least one key pixel point in the first area to be processed.
Generally, after determining the first region to be processed on the original image, the computer needs to determine at least one key pixel point, because the subsequent face transformation uses the position information of the key pixel points. For example, if the computer detects face key points numbered 1 to 68, it can determine the coordinates (100, 198) of key pixel point 46 on the left eye or the coordinates (173, 119) of key pixel point 52 on the mouth.
S103, determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information.
Generally, after the computer determines at least one key pixel point in the first region to be processed, it may create an array to store the position information of the key pixel points; this position information is needed in subsequent processing.
S104, performing face transformation processing on the first to-be-processed area to obtain a second to-be-processed area.
Generally, after determining the first region to be processed, the computer needs to acquire a face key point template. The face key point template only contains the coordinate information of edge feature pixel points such as the face contour and the eyes, without specific pixel values; as shown in fig. 3, the face template only includes feature point pixel coordinate information. The computer determines a transformation relation based on the face key point template and the first region to be processed, and obtains the second region to be processed based on the transformation relation and the original image. As shown in fig. 4, the pixel values of the first region to be processed are mapped into the second region to be processed by the face transformation.
S105, fusing the second to-be-processed area and the original image based on the pixel position information to obtain a composite image.
Generally, the composite image refers to an image in which the face in the original image has been processed to carry the features of the face key point template. As shown in fig. 5, the features of the upper two images are fused to obtain the lower composite image: the upper left is the original image and the upper right is the template image, and the composite image combines the background of the original image with the facial region features of the template image. After obtaining the second region to be processed, the computer determines the coordinate position of each pixel in the second region to be processed, determines the corresponding pixel on the original image based on the coordinate position and the stored pixel position information, takes the weighted average of the two pixel values, and fuses the second region to be processed with the original image based on the weighted average to obtain the composite image.
As can be seen from the above: face detection is performed on an original image to obtain a first region to be processed; at least one key pixel point is determined in the first region to be processed; pixel position information of the at least one key pixel point in the original image is determined and stored; face transformation is performed on the first region to be processed to obtain a second region to be processed; and the second region to be processed is fused with the original image based on the pixel position information to obtain a composite image. The embodiment of the application can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter.
Referring to fig. 10, another flow chart of a method for image fusion is provided in an embodiment of the present application. The image fusion method may include the steps of:
S1001, detecting the original image based on a face detection algorithm, obtaining a detection result file, analyzing the detection result file, and determining the first area to be processed.
Generally, a face detection algorithm is an algorithm for detecting face feature pixels, for example the Baidu face detection algorithm, the SenseTime face detection algorithm, and the like. The user can import the original image into the Baidu face detection and attribute analysis interface, wait for detection to finish, and obtain a detection result file. For example, the detection result file is a json file; parsing the json file yields the top-left vertex coordinates, length, and width of the face region, together with the key point coordinates of the cheeks, eyebrows, eyes, mouth, and nose contained in the file. The computer determines the first region to be processed according to the face region position information.
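As a sketch of parsing such a detection result file (the json field names below are illustrative assumptions; each detection API defines its own schema):

```python
import json

def parse_detection_result(path):
    """Read a detection result file and return the face box and key points."""
    with open(path, "r", encoding="utf-8") as f:
        result = json.load(f)
    rect = result["face_rect"]  # hypothetical key: top-left vertex + size
    box = (rect["left"], rect["top"], rect["width"], rect["height"])
    # Hypothetical key: the detected face key points as (x, y) pairs.
    landmarks = [(p["x"], p["y"]) for p in result["landmarks"]]
    return box, landmarks
```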
S1002, determining at least one key pixel point in the first area to be processed, and storing pixel position information.
Generally, after determining the first region to be processed on the original image, the computer needs to determine at least one key pixel point. For example, the computer detects face key points numbered 1 to 68, determines the coordinates (53, 76) of key pixel point 30, and creates an array to store the position information of the key pixel points; this position information is needed in subsequent processing.
S1003, acquiring a face key point template, and determining a preset number of key alignment points based on the face key point template and the first area to be processed.
In general, the face key point template only contains the coordinate information of edge feature pixel points such as the face contour and the eyes, without specific pixel values. The detection result file obtained by the computer contains multiple pieces of face key point information, and a preset number of key alignment points are determined based on this information and the face key point template. For example, parsing the detection result file shows that key points 20 to 60 are present among the 68 key points of the face key point template; points 25, 30, 35, 40, 45, and 50 can then be taken as 6 pairs of key alignment points.
S1004, carrying out affine transformation on the preset number of key alignment points to obtain an affine transformation matrix.
In general, an affine transformation matrix refers to the parameter matrix of a transformation that converts coordinates in coordinate system A into coordinates in coordinate system B. After determining the key alignment points, the computer needs to calculate the affine transformation matrix. For example, substituting the 6 determined pairs of key alignment points into the formula

$$\begin{pmatrix} m' \\ n' \end{pmatrix} = M \begin{pmatrix} m \\ n \\ 1 \end{pmatrix}$$

yields the affine transformation matrix

$$M = \begin{pmatrix} M_{00} & M_{01} & M_{02} \\ M_{10} & M_{11} & M_{12} \end{pmatrix},$$

where M is a 2×3 matrix, the entries $M_{00}, M_{01}, M_{02}, M_{10}, M_{11}, M_{12}$ are real numbers, $(m, n)$ denotes a key point coordinate of the face template, $(m', n')$ denotes the corresponding key point coordinate of the original image, and $m, n, m', n'$ are real numbers greater than 0.
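In code, such a 2×3 matrix could be fitted to the alignment point pairs by ordinary least squares; a minimal numpy sketch (the function name is illustrative, and the patent does not prescribe a particular solver):

```python
import numpy as np

def estimate_affine(template_pts, original_pts):
    """Fit M (2x3) so that original ≈ M @ [m, n, 1]^T for each pair.

    template_pts, original_pts: arrays of shape (k, 2) with k >= 3 pairs.
    """
    template_pts = np.asarray(template_pts, dtype=np.float64)
    original_pts = np.asarray(original_pts, dtype=np.float64)
    # Homogeneous design matrix: one row (m, n, 1) per alignment point.
    A = np.hstack([template_pts, np.ones((len(template_pts), 1))])
    # Least-squares solve of A @ X ≈ original_pts; X has shape (3, 2).
    X, *_ = np.linalg.lstsq(A, original_pts, rcond=None)
    return X.T  # the 2x3 affine transformation matrix M
```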
S1005, setting the affine transformation matrix as a homogeneous matrix, calculating an inverse matrix of the homogeneous matrix, and determining a pseudo-inverse matrix based on the inverse matrix.
In general, after obtaining the affine transformation matrix, the computer appends the row [0 0 1] as the third row of the 2×3 affine transformation matrix to form the 3×3 homogeneous matrix

$$M' = \begin{pmatrix} M_{00} & M_{01} & M_{02} \\ M_{10} & M_{11} & M_{12} \\ 0 & 0 & 1 \end{pmatrix}.$$

The computer then calculates the 3×3 inverse matrix $(M')^{-1}$ of the homogeneous matrix and takes its first two rows to obtain the 2×3 pseudo-inverse matrix $M^{+}$.
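The same construction expressed in numpy (a direct transcription of this step; variable names are illustrative):

```python
import numpy as np

def pseudo_inverse(M):
    """Append [0 0 1] to the 2x3 affine matrix, invert, keep the top 2 rows."""
    M_h = np.vstack([M, [0.0, 0.0, 1.0]])  # 3x3 homogeneous matrix M'
    M_inv = np.linalg.inv(M_h)             # inverse of the homogeneous matrix
    return M_inv[:2, :]                    # 2x3 pseudo-inverse matrix M+
```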
S1006, determining the corresponding relation between the second area to be processed and the pixel point coordinates of the original image based on the pseudo-inverse matrix.
Typically, after the computer determines the pseudo-inverse matrix, each coordinate of the second region to be processed is substituted into

$$\begin{pmatrix} x'_i \\ y'_i \end{pmatrix} = M^{+} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix}$$

to determine the pixel point coordinate correspondence between the second region to be processed and the original image, where $(x_i, y_i)$ denotes a pixel point coordinate of the second region to be processed and $(x'_i, y'_i)$ denotes the corresponding pixel point coordinate of the original image. As shown in fig. 6, the pixel values of the second region to be processed on the right can then be filled in from the pixel values at the corresponding positions of the original image on the left.
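A vectorized sketch of building the pixel lookup table by pushing every output coordinate through M+ at once (numpy only; assumes the pseudo_inverse helper above):

```python
import numpy as np

def build_lookup_table(M_plus, out_h, out_w):
    """For each pixel (x, y) of the second region to be processed, compute
    the source coordinate (x', y') in the original image."""
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ones = np.ones_like(xs)
    coords = np.stack([xs, ys, ones]).reshape(3, -1)  # columns (x, y, 1)
    src = M_plus @ coords                             # columns (x', y')
    map_x = src[0].reshape(out_h, out_w).astype(np.float32)
    map_y = src[1].reshape(out_h, out_w).astype(np.float32)
    return map_x, map_y  # fractional; rounded or interpolated in the next step
```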
S1007, converting each coordinate value in the pixel lookup table into an integer, and obtaining the second area to be processed based on the corresponding relation of the pixel point coordinates.
Generally, the computer obtains the pixel point coordinate correspondence, which is represented by a pixel lookup table containing pixel point coordinates and pixel values. Since the coordinates produced by the transformation are generally fractional, the computer needs to convert each coordinate value in the lookup table into an integer, for example by nearest-neighbor interpolation or by bilinear interpolation. As shown in fig. 7, nearest-neighbor interpolation simply rounds each coordinate; as shown in fig. 8, bilinear interpolation first interpolates linearly in the horizontal direction and then interpolates linearly in the vertical direction on that result, which gives a visibly better effect than nearest-neighbor interpolation.
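Both rounding policies are available through OpenCV's remap, which samples the source image through exactly such a lookup table; a brief sketch (cv2.remap is a real API, the wrapper around it is an assumption):

```python
import cv2

def sample_second_region(original, map_x, map_y, bilinear=True):
    """Fill the second region to be processed from the original image using
    the pixel lookup table (map_x, map_y) from build_lookup_table."""
    interp = cv2.INTER_LINEAR if bilinear else cv2.INTER_NEAREST
    return cv2.remap(original, map_x, map_y, interpolation=interp)
```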
S1008, determining the coordinate position of each pixel in the second to-be-processed area.
In general, the second region to be processed can be represented by a matrix, with each element corresponding to a position. For example, the element in row 65, column 126 of the second region to be processed can be assigned the coordinate position (65, 126).
S1009, determining that the pixel corresponds to a corresponding pixel on the original image based on the coordinate position and the pixel position information.
In general, after determining the coordinate position, the computer can determine the pixel on the original image that corresponds to the pixel. For example, suppose the pixel at coordinate position (65, 126) has the pixel value (73, 150, 80); if coordinate position (65, 126) corresponds to position (0, 0) of the original image, then the pixel value (73, 150, 80) corresponds to the pixel at (0, 0) of the original image. As shown in fig. 9, the upper left is the original image and the upper right is the second region to be processed; the composite image below is obtained based on this coordinate correspondence.
S1010, taking a pixel weighted average value of the corresponding pixel and the pixel, and fusing the second to-be-processed area and the original image based on the pixel weighted average value.
Generally, after determining the corresponding pixels on the original image, the computer can perform the fusion to obtain the composite image. For example, take the point at coordinate position (65, 126) of the second region to be processed; if (65, 126) of the second region to be processed corresponds to (73, 150) of the original image, take the weighted average of the pixel value at (65, 126) of the second region to be processed and the pixel value at (73, 150) of the original image, use the weighted average as the fused value of the pixel at (73, 150), and traverse all points of the second region to be processed to obtain the fused image. The weighted average is calculated as:
$$I_{\text{fuse}} = \alpha \cdot I_{\text{orig}} + (1 - \alpha) \cdot I_{\text{second}},$$

where $I_{\text{orig}}$ denotes a pixel value in the original image, $I_{\text{second}}$ denotes the corresponding pixel value in the second region to be processed, $I_{\text{fuse}}$ denotes the resulting pixel value in the fused image, and $\alpha$ is a weight parameter; when $\alpha$ is 0.5, the result is the plain average of the two pixel values.
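A literal sketch of this fusion step (the correspondence structure is an illustrative stand-in for the stored pixel position information; alpha = 0.5 reproduces the averaging example above):

```python
import numpy as np

def fuse_pixelwise(original, second_region, correspondence, alpha=0.5):
    """Write the weighted average back at each corresponding original pixel.

    correspondence: iterable of ((x2, y2), (x1, y1)) integer pairs mapping a
    second-region pixel to its original-image pixel.
    """
    fused = original.astype(np.float32).copy()
    src = second_region.astype(np.float32)
    for (x2, y2), (x1, y1) in correspondence:
        fused[y1, x1] = alpha * fused[y1, x1] + (1.0 - alpha) * src[y2, x2]
    return fused.astype(original.dtype)
```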
When the scheme of the embodiment of the application is executed: the original image is detected with a face detection algorithm and a detection result file is obtained; the detection result file is parsed and the first region to be processed is determined; at least one key pixel point is determined in the first region to be processed and its pixel position information is stored; a face key point template is acquired and a preset number of key alignment points is determined based on the face key point template and the first region to be processed; affine transformation over the preset number of key alignment points yields an affine transformation matrix; the affine transformation matrix is extended to a homogeneous matrix, the inverse of the homogeneous matrix is calculated, and a pseudo-inverse matrix is determined from the inverse; the pixel point coordinate correspondence between the second region to be processed and the original image is determined from the pseudo-inverse matrix; each coordinate value in the pixel lookup table is converted into an integer, and the second region to be processed is obtained from the coordinate correspondence; the coordinate position of each pixel in the second region to be processed is determined, the corresponding pixel on the original image is found from the coordinate position and the stored pixel position information, the weighted average of the two pixel values is taken, and the second region to be processed is fused with the original image based on the weighted average. The embodiment of the application can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 11, a schematic structural diagram of an apparatus for image fusion according to an exemplary embodiment of the present application is shown; the apparatus is hereinafter referred to as the control apparatus 11. The control apparatus 11 may be implemented as all or part of a terminal by software, hardware, or a combination of both. It includes:
The detection module 1101 is configured to perform face detection on an original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
A determining module 1102, configured to determine at least one key pixel point in the first area to be processed; determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
a transforming module 1103, configured to perform face transformation on the first to-be-processed area to obtain a second to-be-processed area;
and a fusion module 1104, configured to fuse the second to-be-processed area and the original image based on the pixel position information, so as to obtain a composite image.
Optionally, the detection module 1101 further includes:
the analysis unit is used for detecting the original image based on a face detection algorithm and obtaining a detection result file; analyzing the detection result file to determine the first area to be processed; the information in the detection result file comprises a top left vertex coordinate, a length and a width of the face area, and a plurality of key point coordinates of cheek coordinates, eyebrow coordinates, eye coordinates, mouth coordinates and nose coordinates.
Optionally, the transforming module 1103 further includes:
the acquisition unit is used for acquiring the key point template of the face; determining a transformation relation based on the face key point template and the first region to be processed; obtaining the second area to be processed based on the transformation relation and the original image; determining a preset number of key alignment points based on the face key point template and the first area to be processed; carrying out affine transformation on the preset number of key alignment points to obtain an affine transformation matrix; determining the transformation relation according to the affine transformation matrix; determining a pseudo-inverse matrix based on the affine transformation matrix; determining a pixel point coordinate corresponding relation between the second region to be processed and the original image based on the pseudo-inverse matrix; the pixel point coordinate corresponding relation is represented by using a pixel lookup table, and the pixel lookup table comprises pixel point coordinates and pixel point values; obtaining the second region to be processed based on the pixel point coordinate correspondence; setting the affine transformation matrix as a homogeneous matrix; wherein the affine transformation matrix is a non-homogeneous matrix; calculating an inverse matrix of the homogeneous matrix; determining a pseudo-inverse based on the inverse; converting each coordinate value in the pixel lookup table into an integer by using a nearest neighbor interpolation method; obtaining the second region to be processed based on the converted pixel point coordinate correspondence; or converting each coordinate value in the pixel lookup table into an integer by using a bilinear interpolation method; and obtaining the second region to be processed based on the converted pixel point coordinate correspondence.
Optionally, the fusion module 1104 further includes:
An updating unit, configured to determine a coordinate position of each pixel in the second area to be processed; determining that the pixel corresponds to a corresponding pixel on the original image based on the coordinate location and the pixel location information; taking a pixel weighted average of the corresponding pixel and the pixel; and fusing the second to-be-processed area and the original image based on the pixel weighted average.
The embodiments of the present apparatus and the method embodiments of fig. 1 or fig. 10 are based on the same concept and produce the same technical effects; for the specific process, refer to the description of the method embodiments of fig. 1 or fig. 10, which is not repeated here.
The device 11 may be a field-programmable gate array (FPGA) that implements the relevant functions, an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit, a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
When the scheme of the embodiment of the application is executed: the original image is detected with a face detection algorithm and a detection result file is obtained; the detection result file is parsed and the first region to be processed is determined; at least one key pixel point is determined in the first region to be processed and its pixel position information is stored; a face key point template is acquired and a preset number of key alignment points is determined based on the face key point template and the first region to be processed; affine transformation over the preset number of key alignment points yields an affine transformation matrix; the affine transformation matrix is extended to a homogeneous matrix, the inverse of the homogeneous matrix is calculated, and a pseudo-inverse matrix is determined from the inverse; the pixel point coordinate correspondence between the second region to be processed and the original image is determined from the pseudo-inverse matrix; each coordinate value in the pixel lookup table is converted into an integer, and the second region to be processed is obtained from the coordinate correspondence; the coordinate position of each pixel in the second region to be processed is determined, the corresponding pixel on the original image is found from the coordinate position and the stored pixel position information, the weighted average of the two pixel values is taken, and the second region to be processed is fused with the original image based on the weighted average. The embodiment of the application can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executed as described above, and the specific implementation process may refer to the specific description of the embodiment shown in fig. 1 or fig. 10, which is not repeated herein.
The present application also provides a computer program product storing at least one instruction that is loaded and executed by the processor to implement the image fusion method according to the above embodiments.
Referring to fig. 12, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 12, the electronic device 12 may include: at least one processor 1201, at least one network interface 1204, a user interface 1203, a memory 1205, at least one communication bus 1202.
Wherein a communication bus 1202 is used to enable connected communications between these components.
The user interface 1203 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1203 may further include a standard wired interface and a standard wireless interface.
The network interface 1204 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 1201 may include one or more processing cores. Using various interfaces and lines connecting the various parts of the terminal 1200, the processor 1201 performs various functions of the terminal 1200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1205 and by invoking data stored in the memory 1205. Alternatively, the processor 1201 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 1201 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the display screen; the modem handles wireless communication. It will be appreciated that the modem may not be integrated into the processor 1201 and may instead be implemented by a separate chip.
The memory 1205 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 1205 includes a non-transitory computer-readable storage medium. The memory 1205 may be used to store instructions, programs, code sets, or instruction sets. The memory 1205 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 1205 may also be at least one storage device located remotely from the processor 1201. As shown in fig. 12, the memory 1205, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an image fusion application program.
In the electronic device 1200 shown in fig. 12, the user interface 1203 is mainly used as an interface for providing input for a user, and obtains data input by the user; and the processor 1201 may be configured to invoke the image fusion application stored in the memory 1205 and specifically perform the following operations:
Carrying out face detection on the original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
determining at least one key pixel point in the first region to be processed;
Determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
Performing face transformation processing on the first region to be processed to obtain a second region to be processed;
And fusing the second area to be processed and the original image based on the pixel position information to obtain a composite image.
In one embodiment, the processor 1201 performs the face detection on the original image to obtain a first to-be-processed area, including:
Detecting the original image based on a face detection algorithm, and obtaining a detection result file;
analyzing the detection result file to determine the first area to be processed;
the information in the detection result file comprises a top left vertex coordinate, a length and a width of the face area, and a plurality of key point coordinates of cheek coordinates, eyebrow coordinates, eye coordinates, mouth coordinates and nose coordinates.
In one embodiment, the processor 1201 performs the face transform processing on the first to-be-processed area to obtain a second to-be-processed area, including:
acquiring a key point template of a human face;
determining a transformation relation based on the face key point template and the first region to be processed;
and obtaining the second area to be processed based on the transformation relation and the original image.
In one embodiment, the determining, by the processor 1201, the transformation relationship based on the face keypoint template and the first area to be processed includes:
determining a preset number of key alignment points based on the face key point template and the first area to be processed;
Carrying out affine transformation on the preset number of key alignment points to obtain an affine transformation matrix;
and determining the transformation relation according to the affine transformation matrix.
In one embodiment, the processor 1201 performs the obtaining the second area to be processed based on the transformation relation and the original image, including:
determining a pseudo-inverse matrix based on the affine transformation matrix;
Determining a pixel point coordinate corresponding relation between the second region to be processed and the original image based on the pseudo-inverse matrix; the pixel point coordinate corresponding relation is represented by using a pixel lookup table, and the pixel lookup table comprises pixel point coordinates and pixel point values;
and obtaining the second area to be processed based on the pixel point coordinate corresponding relation.
In one embodiment, the processor 1201 performs the determining a pseudo-inverse matrix based on the affine transformation matrix, including:
setting the affine transformation matrix as a homogeneous matrix; wherein the affine transformation matrix is a non-homogeneous matrix;
Calculating an inverse matrix of the homogeneous matrix;
a pseudo-inverse matrix is determined based on the inverse matrix.
In one embodiment, the processor 1201 performs the obtaining the second area to be processed based on the pixel coordinate correspondence, including:
converting each coordinate value in the pixel lookup table into an integer by using a nearest neighbor interpolation method;
obtaining the second region to be processed based on the converted pixel point coordinate correspondence; or
Converting each coordinate value in the pixel lookup table into an integer by using a bilinear interpolation method;
and obtaining the second region to be processed based on the converted pixel point coordinate correspondence.
In one embodiment, the fusing the second to-be-processed region and the original image based on the pixel position information is performed by the processor 1201, including:
Determining the coordinate position of each pixel in the second to-be-processed area;
determining that the pixel corresponds to a corresponding pixel on the original image based on the coordinate location and the pixel location information;
Taking a pixel weighted average of the corresponding pixel and the pixel;
And fusing the second to-be-processed area and the original image based on the pixel weighted average.
The technical concept of the embodiment of the present application is the same as that of fig. 1 or fig. 10, and the specific process may refer to the method embodiment of fig. 1 or fig. 10, which is not repeated here.
In the embodiment of the application: the original image is detected with a face detection algorithm and a detection result file is obtained; the detection result file is parsed and the first region to be processed is determined; at least one key pixel point is determined in the first region to be processed and its pixel position information is stored; a face key point template is acquired and a preset number of key alignment points is determined based on the face key point template and the first region to be processed; affine transformation over the preset number of key alignment points yields an affine transformation matrix; the affine transformation matrix is extended to a homogeneous matrix, the inverse of the homogeneous matrix is calculated, and a pseudo-inverse matrix is determined from the inverse; the pixel point coordinate correspondence between the second region to be processed and the original image is determined from the pseudo-inverse matrix; each coordinate value in the pixel lookup table is converted into an integer, and the second region to be processed is obtained from the coordinate correspondence; the coordinate position of each pixel in the second region to be processed is determined, the corresponding pixel on the original image is found from the coordinate position and the stored pixel position information, the weighted average of the two pixel values is taken, and the second region to be processed is fused with the original image based on the weighted average. The embodiment of the application can make the composite image more uniform, improve the accuracy and speed of image fusion, and reduce image jitter.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a computer-readable storage medium, which when executed, may comprise the steps of the above-described embodiments of the methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (7)

1. A method of image fusion, the method comprising:
Carrying out face detection on the original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
determining at least one key pixel point in the first region to be processed;
Determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
Performing face transformation processing on the first region to be processed to obtain a second region to be processed;
Fusing the second region to be processed and the original image based on the pixel position information to obtain a synthetic image;
The step of performing face transformation processing on the first to-be-processed region to obtain a second to-be-processed region includes:
acquiring a key point template of a human face;
determining a transformation relation based on the face key point template and the first region to be processed;
Obtaining the second area to be processed based on the transformation relation and the original image;
The fusing the second to-be-processed area and the original image based on the pixel position information includes:
Determining the coordinate position of each pixel in the second to-be-processed area;
determining that the pixel corresponds to a corresponding pixel on the original image based on the coordinate location and the pixel location information;
Taking a pixel weighted average of the corresponding pixel and the pixel;
fusing the second to-be-processed area and the original image based on the pixel weighted average;
the determining a transformation relationship based on the face key point template and the first region to be processed includes:
determining a preset number of key alignment points based on the face key point template and the first area to be processed;
Carrying out affine transformation on the preset number of key alignment points to obtain an affine transformation matrix;
determining the transformation relation according to the affine transformation matrix;
the obtaining the second area to be processed based on the transformation relation and the original image includes:
determining a pseudo-inverse matrix based on the affine transformation matrix;
Determining a pixel point coordinate corresponding relation between the second region to be processed and the original image based on the pseudo-inverse matrix; the pixel point coordinate corresponding relation is represented by using a pixel lookup table, and the pixel lookup table comprises pixel point coordinates and pixel point values;
and obtaining the second area to be processed based on the pixel point coordinate corresponding relation.
2. The method of claim 1, wherein performing face detection on the original image to obtain a first to-be-processed region comprises:
Detecting the original image based on a face detection algorithm, and obtaining a detection result file;
analyzing the detection result file to determine the first area to be processed;
The information in the detection result file comprises the coordinates of the top left vertex, the length and the width of the first area to be processed, and a plurality of key point coordinates of cheek coordinates, eyebrow coordinates, eye coordinates, mouth coordinates and nose coordinates.
3. The method of claim 1, wherein the determining a pseudo-inverse based on the affine transformation matrix comprises:
setting the affine transformation matrix as a homogeneous matrix; wherein the affine transformation matrix is a non-homogeneous matrix;
Calculating an inverse matrix of the homogeneous matrix;
a pseudo-inverse matrix is determined based on the inverse matrix.
4. The method according to claim 1, wherein the obtaining the second area to be processed based on the pixel coordinate correspondence includes:
converting each coordinate value in the pixel lookup table into an integer by using a nearest neighbor interpolation method;
obtaining the second region to be processed based on the converted pixel point coordinate correspondence; or
Converting each coordinate value in the pixel lookup table into an integer by using a bilinear interpolation method;
and obtaining the second region to be processed based on the converted pixel point coordinate correspondence.
5. An apparatus for image fusion, the apparatus comprising:
the detection module is used for carrying out face detection on the original image to obtain a first area to be processed; wherein the first region to be processed comprises a face region;
A determining module, configured to determine at least one key pixel point in the first area to be processed; determining pixel position information of the at least one key pixel point in the original image, and storing the pixel position information;
The transformation module is used for carrying out face transformation processing on the first region to be processed to obtain a second region to be processed;
the fusion module is used for fusing the second region to be processed and the original image based on the pixel position information to obtain a synthetic image;
The transformation module further comprises an acquisition unit for acquiring a face key point template; determining a transformation relation based on the face key point template and the first region to be processed; obtaining the second area to be processed based on the transformation relation and the original image; determining a preset number of key alignment points based on the face key point template and the first area to be processed; carrying out affine transformation on the preset number of key alignment points to obtain an affine transformation matrix; determining the transformation relation according to the affine transformation matrix; determining a pseudo-inverse matrix based on the affine transformation matrix; determining a pixel point coordinate corresponding relation between the second region to be processed and the original image based on the pseudo-inverse matrix; the pixel point coordinate corresponding relation is represented by using a pixel lookup table, and the pixel lookup table comprises pixel point coordinates and pixel point values; obtaining the second region to be processed based on the pixel point coordinate correspondence;
The fusion module further comprises an updating unit, a processing unit and a processing unit, wherein the updating unit is used for determining the coordinate position of each pixel in the second area to be processed; determining that the pixel corresponds to a corresponding pixel on the original image based on the coordinate location and the pixel location information; taking a pixel weighted average of the corresponding pixel and the pixel; and fusing the second to-be-processed area and the original image based on the pixel weighted average.
6. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of claims 1 to 4.
7. An electronic device, comprising: a memory and a processor; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-4.
CN202011069359.6A 2020-09-30 2020-09-30 Image fusion method and device, storage medium and electronic equipment Active CN112288665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011069359.6A CN112288665B (en) 2020-09-30 2020-09-30 Image fusion method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011069359.6A CN112288665B (en) 2020-09-30 2020-09-30 Image fusion method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112288665A CN112288665A (en) 2021-01-29
CN112288665B true CN112288665B (en) 2024-05-07

Family

ID=74422353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011069359.6A Active CN112288665B (en) 2020-09-30 2020-09-30 Image fusion method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112288665B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284041B (en) * 2021-05-14 2023-04-18 北京市商汤科技开发有限公司 Image processing method, device and equipment and computer storage medium
CN113592720B (en) * 2021-09-26 2022-02-18 腾讯科技(深圳)有限公司 Image scaling processing method, device, equipment and storage medium
CN114065144A (en) * 2021-11-11 2022-02-18 北京达佳互联信息技术有限公司 Image area conversion method, device, electronic equipment and storage medium
CN114066425A (en) * 2021-11-25 2022-02-18 中国建设银行股份有限公司 Electronic approval method, device, equipment and medium
CN114565507A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Hair processing method and device, electronic equipment and storage medium
CN114821717B (en) * 2022-04-20 2024-03-12 北京百度网讯科技有限公司 Target object fusion method and device, electronic equipment and storage medium
CN115049698B (en) * 2022-08-17 2022-11-04 杭州兆华电子股份有限公司 Cloud picture display method and device of handheld acoustic imaging equipment
CN116363031B (en) * 2023-02-28 2023-11-17 锋睿领创(珠海)科技有限公司 Imaging method, device, equipment and medium based on multidimensional optical information fusion
CN116228607B (en) * 2023-05-09 2023-09-29 荣耀终端有限公司 Image processing method and electronic device


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028136A (en) * 2016-05-30 2016-10-12 北京奇艺世纪科技有限公司 Image processing method and device
CN108876718A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of image co-registration
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109829930A (en) * 2019-01-15 2019-05-31 深圳市云之梦科技有限公司 Face image processing process, device, computer equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN112288665A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN112288665B (en) Image fusion method and device, storage medium and electronic equipment
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
CN112785674B (en) Texture map generation method, rendering device, equipment and storage medium
US9639914B2 (en) Portrait deformation method and apparatus
CN114373056B (en) Three-dimensional reconstruction method, device, terminal equipment and storage medium
CN107452049B (en) Three-dimensional head modeling method and device
CN111369428B (en) Virtual head portrait generation method and device
CN111161392B (en) Video generation method and device and computer system
WO2013177457A1 (en) Systems and methods for generating a 3-d model of a user for a virtual try-on product
CN110322571B (en) Page processing method, device and medium
CN111047509A (en) Image special effect processing method and device and terminal
CN108985132B (en) Face image processing method and device, computing equipment and storage medium
CN112734633A (en) Virtual hair style replacing method, electronic equipment and storage medium
CN107203962B (en) Method for making pseudo-3D image by using 2D picture and electronic equipment
CN115810101A (en) Three-dimensional model stylizing method and device, electronic equipment and storage medium
JP4468631B2 (en) Texture generation method and apparatus for 3D face model
CN107203961B (en) Expression migration method and electronic equipment
CN107644455B (en) Face image synthesis method and device
CN109598672B (en) Map road rendering method and device
KR101888837B1 (en) Preprocessing apparatus in stereo matching system
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112348069B (en) Data enhancement method, device, computer readable storage medium and terminal equipment
CN115222867A (en) Overlap detection method, overlap detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant