CN110443230A - Face fusion method, apparatus and electronic equipment - Google Patents
Face fusion method, apparatus and electronic equipment
- Publication number
- CN110443230A (application CN201910777844.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- key point
- area
- image
- fused
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
This application discloses a face fusion method, apparatus, and electronic device, relating to the field of cloud computing. A specific implementation comprises: obtaining face key points in an image to be fused and face key points in a base image; obtaining a first face region from the key points in the image to be fused, and a second face region from the key points in the base image; warping the first face region onto the second face region; and fusing the first face region with the second face region to obtain a face fusion image. This effectively prevents severe deformation of the fused mouth when the poses of the two faces differ greatly.
Description
Technical field
This application relates to the field of image processing, and in particular to the field of face fusion.
Background technique
At present, in face special-effect applications such as expression driving and virtual-character generation, or in scenarios that require mouth fusion alone, the mouth region of a material image is usually fused directly into a base image by Poisson blending. Because of differences in face pose, this simple approach over-deforms the fused mouth region, and the texture of the fused mouth differs noticeably from that of the base image, which looks unnatural.
Summary of the invention
The embodiments of the present application provide a face fusion method, apparatus, and electronic device to solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the present application provides a face fusion method, comprising:
obtaining face key points in an image to be fused and face key points in a base image;
obtaining a first face region from the face key points in the image to be fused, and a second face region from the face key points in the base image;
warping the first face region onto the second face region;
fusing the first face region with the second face region to obtain a face fusion image.
In this embodiment, warping the first face region of the image to be fused onto the second face region of the base image, and then fusing the two regions, prevents excessive deformation of the mouth during mouth fusion and yields an image with a better fusion effect.
In one embodiment, before warping the first face region onto the second face region, the method further comprises:
selecting a first group of key points from the face key points in the image to be fused, the first group comprising a first eye key point, a first nose key point, and a first chin key point;
selecting a second group of key points from the face key points in the base image, the second group comprising a second eye key point, a second nose key point, and a second chin key point;
aligning the face in the image to be fused with the face in the base image according to the first and second groups of key points.
In this embodiment, by choosing eye, nose, and chin key points, the whole face in the image to be fused can be brought into alignment with the whole face in the base image.
In one embodiment, obtaining the first face region from the face key points in the image to be fused and the second face region from the face key points in the base image comprises:
obtaining a first region including the mouth from the face key points in the image to be fused, the first region covering the facial area below the bottom of the nose of the face in the image to be fused;
expanding the first region by a first ratio to obtain the first face region;
obtaining a second region including the mouth from the face key points in the base image, the second region covering the facial area below the bottom of the nose of the face in the base image;
expanding the second region by a second ratio to obtain the second face region.
In this embodiment, the first region of the image to be fused and the second region of the base image are expanded separately, and no full-face fusion is needed, which reduces the amount of computation and speeds up processing.
In one embodiment, warping the first face region onto the second face region comprises:
warping the first face region onto the second face region using a long-side triangulation deformation algorithm.
In this embodiment, the mouth region is transformed with a long-side triangulation deformation algorithm in order to bring the first face region of the image to be fused into shape alignment with the second face region of the base image.
In one embodiment, fusing the first face region with the second face region to obtain the face fusion image comprises:
performing an AND operation on a first mask matrix of the first region and a second mask matrix of the second region to obtain a third mask matrix;
applying erosion and Gaussian blur to the third mask matrix to obtain a fourth mask matrix;
based on the fourth mask matrix, processing the warped image with a color transfer algorithm and a Poisson blending algorithm to obtain the face fusion image.
In this embodiment, obtaining the third mask matrix by ANDing the first and second mask matrices prevents the fused mouth from deforming when the poses differ greatly. Eroding and Gaussian-blurring the third mask matrix keeps cracks from appearing along the boundary between the first and second face regions during fusion, so the result looks more natural. The color transfer step reduces the color difference.
In a second aspect, an embodiment of the present application provides a face fusion apparatus, comprising:
a face key point obtaining module, configured to obtain face key points in an image to be fused and face key points in a base image;
a face region obtaining module, configured to obtain a first face region from the face key points in the image to be fused and a second face region from the face key points in the base image;
a face region transformation module, configured to warp the first face region onto the second face region;
a face fusion module, configured to fuse the first face region with the second face region to obtain a face fusion image.
In one embodiment, the apparatus further comprises:
a first key point obtaining module, configured to select a first group of key points from the face key points in the image to be fused, the first group comprising a first eye key point, a first nose key point, and a first chin key point;
a second key point obtaining module, configured to select a second group of key points from the face key points in the base image, the second group comprising a second eye key point, a second nose key point, and a second chin key point;
a face alignment module, configured to align the face in the image to be fused with the face in the base image according to the first and second groups of key points.
In one embodiment, the face region obtaining module comprises:
a first region acquiring unit, configured to obtain a first region including the mouth from the face key points in the image to be fused, the first region covering the facial area below the bottom of the nose of the face in the image to be fused;
a first region expansion unit, configured to expand the first region by a first ratio to obtain the first face region;
a second region acquiring unit, configured to obtain a second region including the mouth from the face key points in the base image, the second region covering the facial area below the bottom of the nose of the face in the base image;
a second region expansion unit, configured to expand the second region by a second ratio to obtain the second face region.
In one embodiment, the face region transformation module comprises:
a transformation unit, configured to warp the first face region onto the second face region using a long-side triangulation deformation algorithm.
In one embodiment, the face fusion module comprises:
a first processing unit, configured to perform an AND operation on a first mask matrix of the first region and a second mask matrix of the second region to obtain a third mask matrix;
a second processing unit, configured to apply erosion and Gaussian blur to the third mask matrix to obtain a fourth mask matrix;
a third processing unit, configured to, based on the fourth mask matrix, process the warped image with a color transfer algorithm and a Poisson blending algorithm to obtain the face fusion image.
An embodiment of the above application has the following advantage or beneficial effect: by warping the first face region of the image to be fused onto the second face region of the base image and then fusing the two regions, the technical problem of excessive mouth deformation during mouth fusion is effectively solved, and the fusion effect of the mouth region is thereby improved.
Other effects of the above optional implementations are explained below in conjunction with specific embodiments.
Detailed description of the invention
The accompanying drawings are provided for a better understanding of the solution and do not limit the application. In the drawings:
Fig. 1 is a schematic flowchart of a face fusion method according to the present application;
Fig. 2 is a schematic flowchart of a specific embodiment of a face fusion method according to the present application;
Fig. 3 is a schematic flowchart of another face fusion method according to the present application;
Fig. 4 is a schematic flowchart of another face fusion method according to the present application;
Fig. 5 is a structural block diagram of a face fusion apparatus according to the present application;
Fig. 6 is a structural block diagram of another face fusion apparatus according to the present application;
Fig. 7 is a structural block diagram of another face fusion apparatus according to the present application;
Fig. 8 is a block diagram of an electronic device for implementing a face fusion method of an embodiment of the present application;
Fig. 9 is an application scenario diagram of Embodiment 1 of the present application.
Specific embodiment
Exemplary embodiments of the present application are explained below with reference to the accompanying drawings, including various details of the embodiments to aid understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
Embodiment one
In a specific embodiment, a face fusion method is provided, as shown in Fig. 1, comprising:
Step S10: obtaining face key points in an image to be fused and face key points in a base image;
Step S20: obtaining a first face region from the face key points in the image to be fused, and a second face region from the face key points in the base image;
Step S30: warping the first face region onto the second face region;
Step S40: fusing the first face region with the second face region to obtain a face fusion image.
In one example, as shown in Fig. 2, a large number of images to be fused can first be obtained; each contains the mouth-shape material to be fused. An image to be fused may be, for example, a video frame sequence or a single picture, and may be denoted src_img. A large number of base images can likewise be obtained; the mouth shape of the image to be fused is to be fused into a base image. A base image may also be a video frame sequence or a single picture, and may be denoted dst_img. The image to be fused and the base image are fed separately into a face key point detection model, which extracts multiple face key points from each. The number of face key points is typically greater than 50 and can be adjusted adaptively as required. A key point anomaly criterion then judges whether the extracted key points are correct; if not, the current frame is discarded and a new image to be fused or base image is acquired.
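The patent does not specify the anomaly criterion; a minimal numpy sketch of one plausible check (assumed here, not taken from the source) is to reject a detection that has fewer than the expected number of key points or any point outside the frame:

```python
import numpy as np

def validate_keypoints(points, img_w, img_h, min_count=50):
    """Anomaly check: reject a detection with too few key points
    or with any point falling outside the image frame."""
    pts = np.asarray(points, dtype=np.float32)
    if pts.shape[0] < min_count:
        return False
    in_x = (pts[:, 0] >= 0) & (pts[:, 0] < img_w)
    in_y = (pts[:, 1] >= 0) & (pts[:, 1] < img_h)
    return bool((in_x & in_y).all())

# 68 synthetic key points spread inside a 640x480 frame pass the check
pts = np.column_stack([np.linspace(100, 500, 68), np.linspace(100, 400, 68)])
ok = validate_keypoints(pts, 640, 480)
```

If the check fails, the current frame would be discarded and the next frame processed, as described above.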
Next, the key points covering the mouth region are selected from the face key points in the image to be fused, forming a first face region, which may be denoted src_mouth_img. Likewise, the key points covering the mouth region are selected from the face key points in the base image, forming a second face region, which may be denoted dst_mouth_img. Note that the first and second face regions each include at least the facial area below the bottom of the nose, and may be expanded to some extent as needed; such variants remain within the scope of this embodiment. Before the mouth region is warped, the face in the image to be fused can also be aligned with the face in the base image by means of the face key points. Afterwards, a long-side triangulation deformation algorithm can be used to warp the first face region of the image to be fused onto the second face region of the base image, bringing the two regions into shape alignment. Of course, other deformation algorithms can also be used for shape alignment and remain within the scope of this embodiment.
Finally, the first face region is fused with the second face region to obtain the face fusion image. The fusion process may include calibration of the fused mouth region, which reduces the effect of pose differences so that the fused mouth is not deformed. Erosion and Gaussian blur are applied to the fused mouth region to solve the problem of cracks appearing along the boundary during fusion, which would look unnatural. Skin-color fusion is also applied to the fused mouth region to solve the problem of large color differences. As shown in Fig. 9, the middle image is the image to be fused, a profile view of a person with a closed mouth; the left image is the base image, a frontal view of a person with an open mouth; the right image is formed after fusion and shows a frontal view of the person with a closed mouth. The person's mouth region is not excessively deformed and blends well with the color of the base image.
In this embodiment, warping the first face region of the image to be fused onto the second face region of the base image and then fusing the two regions effectively solves the technical problem of excessive mouth deformation during mouth fusion.
In one embodiment, as shown in Fig. 3, after step S20 and before step S30, the method further comprises:
Step S21: selecting a first group of key points from the face key points in the image to be fused, the first group comprising a first eye key point, a first nose key point, and a first chin key point;
Step S22: selecting a second group of key points from the face key points in the base image, the second group comprising a second eye key point, a second nose key point, and a second chin key point;
Step S23: aligning the face in the image to be fused with the face in the base image according to the first and second groups of key points.
In one example, before the first face region is warped onto the second face region, the whole face in the image to be fused can first be aligned with the whole face in the base image. The alignment can be done by matching key points at several key positions of the face in the image to be fused with key points at the same positions of the face in the base image. Eye key points may be selected, for example the key points at the two corners of the eyes; a nose key point may be selected, for example the key point at the center of the nose; and a chin key point may be selected, for example the key point at the center of the chin. Using the two canthus key points, the nose center key point, and the chin center key point, the whole face in the image to be fused is aligned to the whole face in the base image.
In one embodiment, as shown in Fig. 4, step S20 comprises:
Step S201: obtaining a first region including the mouth from the face key points in the image to be fused, the first region covering the facial area below the bottom of the nose of the face in the image to be fused;
Step S202: expanding the first region by a first ratio to obtain the first face region;
Step S203: obtaining a second region including the mouth from the face key points in the base image, the second region covering the facial area below the bottom of the nose of the face in the base image;
Step S204: expanding the second region by a second ratio to obtain the second face region.
In this embodiment, the facial area below the bottom of the nose can be selected as the first region in the image to be fused, and likewise as the second region in the base image. The first and second regions can then be expanded by a factor of 1.3, which can extend them above the nose; of course, the expansion ratio can also take other values and be adjusted as needed. When the expanded range includes the facial area above the nose, the deformation transform works better. Expanding the first region of the image to be fused and the second region of the base image separately, instead of fusing the full face, reduces the amount of computation and speeds up processing.
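The expansion step can be sketched as enlarging the bounding box of the region key points about its center by the chosen ratio, clipped to the image bounds (a minimal numpy sketch; the exact expansion rule is not specified in the source):

```python
import numpy as np

def expand_region(points, ratio, img_w, img_h):
    """Expand the bounding box of the region key points about its center by `ratio`,
    clipped to the image -- mimicking the 1.3x mouth-region expansion described above."""
    pts = np.asarray(points, float)
    x0, y0 = pts.min(0)
    x1, y1 = pts.max(0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) / 2.0 * ratio, (y1 - y0) / 2.0 * ratio
    left = max(0, int(round(cx - hw)))
    top = max(0, int(round(cy - hh)))
    right = min(img_w, int(round(cx + hw)))
    bottom = min(img_h, int(round(cy + hh)))
    return left, top, right, bottom

# Mouth key points spanning (100,100)-(200,200), expanded 1.3x inside a 640x480 frame
box = expand_region([(100, 100), (200, 100), (150, 200)], 1.3, 640, 480)
```

The same function serves both regions, with the first ratio for src_mouth_img and the second ratio for dst_mouth_img.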
In one embodiment, as shown in Fig. 4, step S30 comprises:
Step S301: warping the first face region onto the second face region using a long-side triangulation deformation algorithm.
In one example, the mouth region is transformed using a long-side triangulation deformation algorithm, the purpose being to bring the first face region of the image to be fused into shape alignment with the second face region of the base image. For example, a two-dimensional morphing (2D morphing) algorithm moves the key points of the first face region of the image to be fused halfway toward the positions of the corresponding key points of the second face region of the base image. When the face in the image to be fused is frontal and the face in the base image is in profile, the shape alignment of the warped mouth region solves the technical problem of excessive mouth deformation caused by the large pose change.
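A triangulation warp is piecewise affine: each triangle of key points in the source is mapped onto the corresponding triangle in the destination by its own affine matrix. The kernel of such a warp can be sketched as follows (an illustrative numpy sketch, not the patent's specific "long-side triangulation" variant):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """2x3 affine matrix [A|t] mapping one triangle's vertices exactly onto another's.
    One such matrix per triangle yields a piecewise-affine (triangulation) warp."""
    S = np.vstack([np.asarray(src_tri, float).T, np.ones(3)])  # 3x3 homogeneous source vertices
    D = np.asarray(dst_tri, float).T                           # 2x3 destination vertices
    return D @ np.linalg.inv(S)

def apply_affine(M, p):
    """Map a 2-D point through the 2x3 affine matrix M."""
    return M @ np.array([p[0], p[1], 1.0])

src_tri = [(0, 0), (10, 0), (0, 10)]
dst_tri = [(2, 1), (22, 1), (2, 21)]   # scale 2 plus translation (2, 1)
M = triangle_affine(src_tri, dst_tri)
```

In a full implementation, the key points of both regions would be triangulated (e.g. by Delaunay triangulation), and every pixel inside each source triangle would be resampled through that triangle's matrix.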
In one embodiment, as shown in Fig. 4, step S40 comprises:
Step S401: performing an AND operation on a first mask matrix of the first region and a second mask matrix of the second region to obtain a third mask matrix;
Step S402: applying erosion and Gaussian blur to the third mask matrix to obtain a fourth mask matrix;
Step S403: based on the fourth mask matrix, processing the warped image with a color transfer algorithm and a Poisson blending algorithm to obtain the face fusion image.
In this embodiment, the first mask matrix is generated from the outer contour of the key points of the first region, and the second mask matrix from the outer contour of the key points of the second region. ANDing the first and second mask matrices yields the third mask matrix, that is, the mask of the fused mouth. The purpose is to prevent severe deformation of the fused mouth when the poses differ greatly.
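The AND step itself is a plain element-wise operation on the two binary masks; only pixels covered by both regions survive into the fused mouth mask. A minimal numpy sketch with toy 8x8 masks:

```python
import numpy as np

# First mask: filled outer contour of the (warped) first-region key points
m1 = np.zeros((8, 8), np.uint8)
m1[2:6, 1:7] = 255
# Second mask: filled outer contour of the second-region key points in the base image
m2 = np.zeros((8, 8), np.uint8)
m2[3:7, 3:8] = 255
# Third mask: AND keeps only pixels covered by BOTH regions -- the fused mouth mask
m3 = np.bitwise_and(m1, m2)
```

In practice the contour masks would be rasterized from the key points (e.g. with a polygon-fill routine) at full image resolution, but the intersection logic is exactly this.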
Next, the convolution kernel size is computed adaptively. The specific process is as follows: erosion and Gaussian blur are applied to the third mask matrix, that is, the mask of the fused mouth, to solve the problem of cracks and an unsmooth, unnatural boundary appearing between the first and second face regions during fusion. If the erosion kernel and the Gaussian blur kernel are too large, the fused mouth mask is eroded too small and the boundary texture becomes obvious; the appropriate kernel size is related to the proportion of the mouth region within the whole image. The Gaussian blur kernel size can therefore be adapted by multiplying the proportion of the fused mouth region by a coefficient and rounding to an odd number; the coefficient can be obtained by averaging.
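The adaptive kernel rule above can be sketched as a small helper (the coefficient and floor values here are illustrative assumptions, not values from the source; the result would feed a Gaussian blur and erosion routine such as OpenCV's `cv2.GaussianBlur` and `cv2.erode`, which require an odd kernel size for the former):

```python
def adaptive_odd_kernel(region_area, image_area, coeff=120.0, min_k=3):
    """Kernel side scaled by the fused mouth region's share of the frame,
    bounded below by min_k and forced odd as Gaussian blur requires."""
    k = max(min_k, int(round(region_area / image_area * coeff)))
    return k if k % 2 == 1 else k + 1

# A 100x100 mouth region in a 640x480 frame yields a small odd kernel
k = adaptive_odd_kernel(100 * 100, 640 * 480)
```

A small mouth region thus gets a small kernel (keeping fine boundary texture), while a large region gets a proportionally larger one.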
Based on the eroded and blurred mask, that is, the fourth mask matrix, the color of the unexpanded first region of the image to be fused is transformed toward the color of the base image. When the skin-color difference between the base image and the fused mouth region is large, a linear color transformation converts the color of the unexpanded first region of the image to be fused into the color of the unexpanded second region, and Poisson blending is then applied to the color-transformed mouth region. This solves the problem of a large color difference, so that the skin of the mouth region in the face fusion image shows no color cast, and finally yields a natural-looking fusion of the lower-jaw region across different scenes.
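One common form of linear color transformation (assumed here; the source does not specify which) matches the per-channel mean and standard deviation of the source patch to those of the base-image patch. The subsequent Poisson blending step could then use, for example, OpenCV's `cv2.seamlessClone` with the fourth mask matrix. A numpy sketch of the color step:

```python
import numpy as np

def linear_color_transfer(src, ref):
    """Per-channel linear transform: shift src's mean/std to match ref
    (the corresponding skin region of the base image)."""
    s = src.astype(np.float64)
    r = ref.astype(np.float64)
    out = (s - s.mean((0, 1))) / (s.std((0, 1)) + 1e-6) * r.std((0, 1)) + r.mean((0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)

src_patch = np.full((4, 4, 3), 50, np.uint8)    # mouth region from the image to be fused
ref_patch = np.full((4, 4, 3), 200, np.uint8)   # surrounding skin in the base image
out = linear_color_transfer(src_patch, ref_patch)
```

After this step the mouth patch sits in the base image's color range, so the Poisson blend only has to smooth residual boundary gradients rather than correct a global color cast.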
Embodiment two
In another specific embodiment, as shown in Fig. 5, an embodiment of the present application provides a face fusion apparatus 100, comprising:
a face key point obtaining module 110, configured to obtain face key points in an image to be fused and face key points in a base image;
a face region obtaining module 120, configured to obtain a first face region from the face key points in the image to be fused and a second face region from the face key points in the base image;
a face region transformation module 130, configured to warp the first face region onto the second face region;
a face fusion module 140, configured to fuse the first face region with the second face region to obtain a face fusion image.
In one embodiment, as shown in Fig. 6, a face fusion apparatus 200, built on the face fusion apparatus 100, further comprises:
a first key point obtaining module 121, configured to select a first group of key points from the face key points in the image to be fused, the first group comprising a first eye key point, a first nose key point, and a first chin key point;
a second key point obtaining module 122, configured to select a second group of key points from the face key points in the base image, the second group comprising a second eye key point, a second nose key point, and a second chin key point;
a face alignment module 123, configured to align the face in the image to be fused with the face in the base image according to the first and second groups of key points.
In one embodiment, as shown in Fig. 7, in a face fusion apparatus 300 built on the face fusion apparatus 100, the face region obtaining module 120 comprises:
a first region acquiring unit 1201, configured to extract multiple first face key points from the face key points in the image to be fused and to obtain a first region from the multiple first face key points;
a first region expansion unit 1202, configured to expand the first region by a first ratio to obtain the first face region;
a second region acquiring unit 1203, configured to extract multiple second face key points from the face key points in the base image and to obtain a second region from the multiple second face key points;
a second region expansion unit 1204, configured to expand the second region by a second ratio to obtain the second face region.
In one embodiment, as shown in Fig. 7, in a face fusion apparatus 300 built on the face fusion apparatus 100, the face region transformation module 130 comprises:
a transformation unit 1301, configured to warp the first face region onto the second face region using a long-side triangulation deformation algorithm.
In one embodiment, as shown in Fig. 7, in a face fusion apparatus 300 built on the face fusion apparatus 100, the face fusion module 140 comprises:
a first processing unit 1401, configured to perform an AND operation on a first mask matrix of the first region and a second mask matrix of the second region to obtain a third mask matrix;
a second processing unit 1402, configured to apply erosion and Gaussian blur to the third mask matrix to obtain a fourth mask matrix;
a third processing unit 1403, configured to, based on the fourth mask matrix, process the warped image with a color transfer algorithm and a Poisson blending algorithm to obtain the face fusion image.
According to embodiments of the present application, the present application further provides an electronic device and a readable storage medium.
Fig. 8 is a block diagram of an electronic device for a face fusion method according to an embodiment of the present application. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely exemplary and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in Fig. 8, the electronic device includes one or more processors 801, a memory 802, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and can be mounted on a common mainboard or in other ways as needed. The processor can process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a graphical user interface (GUI) on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses can be used with multiple memories, if desired. Likewise, multiple electronic devices can be connected, each providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). One processor 801 is taken as an example in Fig. 8.
The memory 802 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor performs the face fusion method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the face fusion method provided herein.
As a non-transitory computer-readable storage medium, the memory 802 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the face fusion method in the embodiments of the present application (for example, the face key point obtaining module 110, the face region obtaining module 120, the face region transformation module 130, and the face fusion module 140 shown in Fig. 5). By running the non-transitory software programs, instructions, and modules stored in the memory 802, the processor 801 executes the various functional applications and data processing of the server, that is, implements the face fusion method of the above method embodiments.
The memory 802 may include a program storage area and a data storage area, where the program storage area can store an operating system and an application program required for at least one function, and the data storage area can store data created through the use of the face fusion electronic device, and the like. In addition, the memory 802 may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 802 optionally includes memories located remotely relative to the processor 801, and these remote memories can be connected to the face fusion electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the face fusion method may further include an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803 and the output device 804 may be connected by a bus or in other ways; connection by a bus is taken as the example in Fig. 8.
The input device 803 can receive input digital or character information and generate key signal inputs related to the user settings and function control of the face fusion electronic device, and may be an input device such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball or a joystick. The output device 804 may include a display device, an auxiliary lighting device (for example, an LED), a haptic feedback device (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (Light Emitting Diode, LED) display and a plasma display. In some embodiments, the display device may be a touch screen.
Various embodiments of the systems and techniques described herein can be implemented in digital electronic circuitry, integrated circuit systems, application specific integrated circuits (Application Specific Integrated Circuits, ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that receives data and instructions from a storage system, at least one input device and at least one output device, and transmits data and instructions to the storage system, the at least one input device and the at least one output device.
These computer programs (also referred to as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (for example, magnetic disks, optical disks, memories, programmable logic devices (programmable logic device, PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (for example, a CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input or tactile input).
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with an implementation of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system can be interconnected by digital data communication (for example, a communication network) in any form or medium. Examples of communication networks include: a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN) and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the first face region in the image to be fused is deformed into the second face region in the base image, and the first face region is then fused with the second face region in the base image, which effectively solves the technical problem of excessive mouth deformation during mouth fusion. When performing the deformation, the long-side triangulation deformation algorithm aligns the shape of the first face region in the image to be fused with that of the second face region in the base image, solving the technical problem of excessive mouth deformation caused by large pose differences. Erosion processing and Gaussian blur processing are applied to the fused mouth region, solving the problems of cracks at the boundary region and of the boundary region being unsmooth and unnatural. Colour conversion and graph cut are applied to the eroded and blurred mouth region, solving the problem of large colour differences, so that the skin of the mouth region in the face fusion image shows no colour difference. Comparatively natural results of jaw region fusion are thus obtained in different scenes.
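The patent does not disclose the exact colour conversion algorithm. A common stand-in for this kind of step is channel-wise mean/standard-deviation matching (Reinhard-style colour transfer); the sketch below is illustrative only, and the function name and choice of statistics are assumptions, not the patent's method:

```python
import numpy as np

def match_colour_stats(src, ref):
    """Shift src's per-channel mean/std to match ref's (simple colour transfer).

    src, ref: arrays of shape (H, W, 3) with values in [0, 255].
    """
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    out = (src - src_mean) / (src_std + 1e-6) * ref_std + ref_mean
    return np.clip(out, 0.0, 255.0)

# Toy example: a dark patch pulled towards a brighter reference patch.
rng = np.random.default_rng(0)
dark = rng.uniform(0, 50, size=(8, 8, 3))
bright = rng.uniform(120, 180, size=(8, 8, 3))
matched = match_colour_stats(dark, bright)
```

After matching, the patch's per-channel statistics agree with the reference, which is the property the fusion step relies on to remove visible colour seams.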
It should be understood that steps may be reordered, added or deleted using the various forms of processes shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution disclosed in the present application can be achieved; no limitation is imposed herein.
The above specific embodiments do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions can be made according to design requirements and other factors. Any modification, equivalent substitution, improvement or the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (12)
1. A face fusion method, characterized by comprising:
acquiring face key points in an image to be fused and face key points in a base image;
obtaining a first face region according to the face key points in the image to be fused, and obtaining a second face region according to the face key points in the base image;
deforming the first face region into the second face region;
fusing the first face region with the second face region to obtain a face fusion image.
2. The method according to claim 1, characterized in that, before deforming the first face region into the second face region, the method further comprises:
selecting a first group of key points according to the face key points in the image to be fused, the first group of key points comprising a first eye key point, a first nose key point and a first chin key point;
selecting a second group of key points according to the face key points in the base image, the second group of key points comprising a second eye key point, a second nose key point and a second chin key point;
aligning the face in the image to be fused with the face in the base image according to the first group of key points and the second group of key points.
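Claim 2 aligns the two faces from three corresponding key points (eye, nose, chin). Three non-collinear point pairs determine a 2-D affine transform exactly, which is one standard way to realize such an alignment; the estimator and the coordinates below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Solve the 3x2 matrix M such that [x, y, 1] @ M maps each src point to dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((3, 1))])  # homogeneous coordinates, shape (3, 3)
    return np.linalg.solve(X, dst)         # shape (3, 2)

def apply_affine(M, pts):
    """Apply the affine map M to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical (eye, nose, chin) key points in each image.
src = [(30.0, 40.0), (50.0, 70.0), (52.0, 110.0)]
dst = [(33.0, 42.0), (54.0, 74.0), (57.0, 115.0)]
M = affine_from_3_points(src, dst)
mapped = apply_affine(M, src)
```

Once M is known, warping the whole image to be fused with it brings its face into the base image's pose before the region-level deformation of claim 4.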
3. The method according to claim 1, characterized in that obtaining the first face region according to the face key points in the image to be fused and obtaining the second face region according to the face key points in the base image comprises:
obtaining a first area including the mouth according to the face key points in the image to be fused, the first area comprising the facial area below the bottom of the nose of the face in the image to be fused;
expanding the first area by a first ratio to obtain the first face region;
obtaining a second area including the mouth according to the face key points in the base image, the second area comprising the facial area below the bottom of the nose of the face in the base image;
expanding the second area by a second ratio to obtain the second face region.
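One plausible reading of claim 3's expansion "by a first ratio" is scaling the key-point bounding box about its centre, clamped to the image bounds. The sketch below assumes that reading; the ratio value and the clamping behaviour are not specified by the patent:

```python
def expand_box(x0, y0, x1, y1, ratio, img_w, img_h):
    """Scale a box about its centre by `ratio`, clamped to the image bounds."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * ratio / 2.0
    half_h = (y1 - y0) * ratio / 2.0
    return (max(0.0, cx - half_w), max(0.0, cy - half_h),
            min(float(img_w), cx + half_w), min(float(img_h), cy + half_h))

# A 10x10 mouth box expanded 1.5x inside a 100x100 image.
box = expand_box(40, 60, 50, 70, 1.5, 100, 100)
```

Expanding the mouth box this way gives the later blending steps some surrounding skin to feather into, rather than cutting exactly at the lip contour.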
4. The method according to claim 1, characterized in that deforming the first face region into the second face region comprises:
deforming the first face region into the second face region using a long-side triangulation deformation algorithm.
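The "long-side triangulation" of claim 4 is not defined in this excerpt. A plausible interpretation is that each quadrilateral mesh cell between key points is split into two triangles along its longer diagonal, after which each triangle is warped affinely. Only the assumed splitting rule is sketched here:

```python
import math

def split_quad(quad):
    """Split a quad (4 corner points, in order) into two triangles
    along its longer diagonal."""
    p0, p1, p2, p3 = quad
    d02 = math.dist(p0, p2)  # length of diagonal p0-p2
    d13 = math.dist(p1, p3)  # length of diagonal p1-p3
    if d02 >= d13:
        return [(p0, p1, p2), (p0, p2, p3)]
    return [(p0, p1, p3), (p1, p2, p3)]

# A kite-shaped quad whose p1-p3 diagonal (length 6) is longer than p0-p2 (length 2).
tris = split_quad([(0, 0), (1, 3), (2, 0), (1, -3)])
```

Splitting along the longer diagonal tends to avoid the thin, sliver-like triangles that amplify distortion when the mesh is warped, which is consistent with the stated goal of limiting mouth deformation.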
5. The method according to claim 3, characterized in that fusing the first face region with the second face region to obtain the face fusion image comprises:
performing an AND operation on a first mask matrix of the first area and a second mask matrix of the second area to obtain a third mask matrix;
performing erosion processing and Gaussian blur processing on the third mask matrix to obtain a fourth mask matrix;
based on the fourth mask matrix, processing the image obtained after the deformation with a colour conversion algorithm and a graph cut algorithm to obtain the face fusion image.
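The mask pipeline of claim 5 (AND of the two area masks, then erosion, then Gaussian blur) can be sketched in plain NumPy; in practice OpenCV's `cv2.erode` and `cv2.GaussianBlur` would normally be used, and the kernel sizes and sigma below are assumptions:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element (minimum filter)."""
    pad = k // 2
    p = np.pad(mask, pad, mode="constant")
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur with edge padding; output has the input's shape."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    kern /= kern.sum()
    p = np.pad(img.astype(float), r, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, p, kern, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, kern, mode="valid")

# Two toy 9x9 area masks; their intersection, eroded and then feathered.
m1 = np.zeros((9, 9), dtype=np.uint8); m1[1:8, 1:8] = 1
m2 = np.zeros((9, 9), dtype=np.uint8); m2[2:9, 2:9] = 1
m3 = m1 & m2                      # third mask: AND of the two area masks
m4 = gaussian_blur(erode(m3), 1)  # fourth mask: eroded, then softly feathered
```

Erosion pulls the mask border inwards away from the region edge, and the blur turns the hard 0/1 boundary into a soft ramp, which is what removes the cracks and unsmooth boundary the description mentions.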
6. A face fusion device, characterized by comprising:
a face key point acquisition module, configured to acquire face key points in an image to be fused and face key points in a base image;
a face region acquisition module, configured to obtain a first face region according to the face key points in the image to be fused, and obtain a second face region according to the face key points in the base image;
a face region transformation module, configured to deform the first face region into the second face region;
a face fusion module, configured to fuse the first face region with the second face region to obtain a face fusion image.
7. The device according to claim 6, characterized by further comprising:
a first key point acquisition module, configured to select a first group of key points according to the face key points in the image to be fused, the first group of key points comprising a first eye key point, a first nose key point and a first chin key point;
a second key point acquisition module, configured to select a second group of key points according to the face key points in the base image, the second group of key points comprising a second eye key point, a second nose key point and a second chin key point;
a face alignment module, configured to align the face in the image to be fused with the face in the base image according to the first group of key points and the second group of key points.
8. The device according to claim 6, characterized in that the face region acquisition module comprises:
a first area acquisition unit, configured to obtain a first area including the mouth according to the face key points in the image to be fused, the first area comprising the facial area below the bottom of the nose of the face in the image to be fused;
a first area expansion unit, configured to expand the first area by a first ratio to obtain the first face region;
a second area acquisition unit, configured to obtain a second area including the mouth according to the face key points in the base image, the second area comprising the facial area below the bottom of the nose of the face in the base image;
a second area expansion unit, configured to expand the second area by a second ratio to obtain the second face region.
9. The device according to claim 6, characterized in that the face region transformation module comprises:
a transformation unit, configured to deform the first face region into the second face region using a long-side triangulation deformation algorithm.
10. The device according to claim 8, characterized in that the face fusion module comprises:
a first processing unit, configured to perform an AND operation on a first mask matrix of the first area and a second mask matrix of the second area to obtain a third mask matrix;
a second processing unit, configured to perform erosion processing and Gaussian blur processing on the third mask matrix to obtain a fourth mask matrix;
a third processing unit, configured to, based on the fourth mask matrix, process the image obtained after the deformation with a colour conversion algorithm and a graph cut algorithm to obtain the face fusion image.
11. An electronic device, characterized by comprising:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor is able to perform the method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions, characterized in that the computer instructions are used to cause a computer to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910777844.XA CN110443230A (en) | 2019-08-21 | 2019-08-21 | Face fusion method, apparatus and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110443230A (en) | 2019-11-12 |
Family
ID=68436982
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910777844.XA Pending CN110443230A (en) | 2019-08-21 | 2019-08-21 | Face fusion method, apparatus and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443230A (en) |
- 2019-08-21: CN application CN201910777844.XA filed; published as CN110443230A (en); legal status: active, Pending
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021062998A1 (en) * | 2019-09-30 | 2021-04-08 | 北京市商汤科技开发有限公司 | Image processing method, apparatus and electronic device |
US11461870B2 (en) | 2019-09-30 | 2022-10-04 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and device, and electronic device |
CN110992493A (en) * | 2019-11-21 | 2020-04-10 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN110992493B (en) * | 2019-11-21 | 2023-10-31 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111445564B (en) * | 2020-03-26 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Face texture image generation method, device, computer equipment and storage medium |
CN111445564A (en) * | 2020-03-26 | 2020-07-24 | 腾讯科技(深圳)有限公司 | Face texture image generation method and device, computer equipment and storage medium |
CN111598818A (en) * | 2020-04-17 | 2020-08-28 | 北京百度网讯科技有限公司 | Face fusion model training method and device and electronic equipment |
US11830288B2 (en) | 2020-04-17 | 2023-11-28 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for training face fusion model and electronic device |
CN111553864A (en) * | 2020-04-30 | 2020-08-18 | 深圳市商汤科技有限公司 | Image restoration method and device, electronic equipment and storage medium |
CN111709878A (en) * | 2020-06-17 | 2020-09-25 | 北京百度网讯科技有限公司 | Face super-resolution implementation method and device, electronic equipment and storage medium |
US11710215B2 (en) | 2020-06-17 | 2023-07-25 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Face super-resolution realization method and apparatus, electronic device and storage medium |
CN111768356A (en) * | 2020-06-28 | 2020-10-13 | 北京百度网讯科技有限公司 | Face image fusion method and device, electronic equipment and storage medium |
CN112001940A (en) * | 2020-08-21 | 2020-11-27 | Oppo(重庆)智能科技有限公司 | Image processing method and device, terminal and readable storage medium |
CN112766215A (en) * | 2021-01-29 | 2021-05-07 | 北京字跳网络技术有限公司 | Face fusion method and device, electronic equipment and storage medium |
CN113255694A (en) * | 2021-05-21 | 2021-08-13 | 北京百度网讯科技有限公司 | Training image feature extraction model and method and device for extracting image features |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443230A (en) | Face fusion method, apparatus and electronic equipment | |
US11645801B2 (en) | Method for synthesizing figure of virtual object, electronic device, and storage medium | |
JP7093886B2 (en) | Image processing methods and devices, electronic devices and storage media | |
CN110766777B (en) | Method and device for generating virtual image, electronic equipment and storage medium | |
US10855909B2 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
Guo et al. | Image retargeting using mesh parametrization | |
JP7114774B2 (en) | Face fusion model training method, apparatus and electronic equipment | |
CN107564080B (en) | Face image replacement system | |
EP3992919A1 (en) | Three-dimensional facial model generation method and apparatus, device, and medium | |
CN110392282A (en) | A kind of method, computer storage medium and the server of video interleave | |
CN109961496B (en) | Expression driving method and expression driving device | |
CN111368137A (en) | Video generation method and device, electronic equipment and readable storage medium | |
CN107302694B (en) | Method, equipment and the virtual reality device of scene are presented by virtual reality device | |
US11354875B2 (en) | Video blending method, apparatus, electronic device and readable storage medium | |
CN111294665A (en) | Video generation method and device, electronic equipment and readable storage medium | |
CN111275801A (en) | Three-dimensional picture rendering method and device | |
CN111275824A (en) | Surface reconstruction for interactive augmented reality | |
CN111599002A (en) | Method and apparatus for generating image | |
CN111754431A (en) | Image area replacement method, device, equipment and storage medium | |
CN106204418A (en) | Image warping method based on matrix inversion operation in a kind of virtual reality mobile terminal | |
CN109558842A (en) | A kind of method, apparatus, equipment and medium adjusting image display direction | |
CN108053464A (en) | Particle effect processing method and processing device | |
CN113223128B (en) | Method and apparatus for generating image | |
CN114332317A (en) | Animation data processing method, animation data processing device, program product, medium, and electronic apparatus | |
CN107452045A (en) | Spatial point mapping method based on the anti-distortion grid of virtual reality applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||