Detailed Description of the Embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a diagram of an application environment of an image fusion method in one embodiment. Referring to Fig. 1, the image fusion method is applied to an image data system. The image data system includes a terminal 110 and a server 120, which are connected through a network. The terminal or the server acquires an original image that includes a face; obtains a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated according to key features of the face in the original image; obtains a fusion template image, the fusion template image including a fusion region corresponding to the face; and fuses the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image. Because the to-be-fused face image generated from the face in the original image characterizes the face by its key features, the fusion speed of the image fusion is improved. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a laptop computer, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in Fig. 2, in one embodiment, an image fusion method is provided. This embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in Fig. 1. Referring to Fig. 2, the image fusion method specifically includes the following steps:
Step S201: acquire an original image that includes a face.
Step S202: obtain a to-be-fused face image of the face in the original image.
In this embodiment, the to-be-fused face image is an image generated according to key features of the face in the original image.
Specifically, the original image refers to an image captured through a photographing interface and includes at least one face. Feature extraction is performed on the original image, and the face image generated from the extracted features is the to-be-fused face image. The original image may include one or more faces. A face key feature is a data feature that describes a face; the face key features may be extracted using a common feature extraction algorithm or a custom feature extraction algorithm. The to-be-fused face image is generated from the face key features extracted from the original image by the feature extraction algorithm.
Step S203: obtain a fusion template image.
In this embodiment, the fusion template image includes a fusion region corresponding to the face.
Step S204: fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
Specifically, the fusion template image refers to a template image used for fusion and includes one or more fusion regions. The fusion template image may be an image selected from the images saved by the user on the terminal, or may be determined according to the face key features of the face to be fused.
In one embodiment, the number of faces in the to-be-fused face image is obtained, the number of fusion regions is determined according to the number of faces in the to-be-fused face image, and a fusion template image is obtained, where the number of fusion regions included in the fusion template image is the same as the number of faces in the to-be-fused face image. Each face in the to-be-fused face image is fused into its corresponding fusion region of the fusion template image, obtaining a template fused image containing as many fusion regions as there are faces in the to-be-fused face image.
Specifically, the number of faces in the to-be-fused face image is detected, the number of fusion regions is determined according to that number, and the fusion template image is obtained according to the number of fusion regions, where the number of fusion regions included in the fusion template image is the same as the number of faces in the to-be-fused face image. Each face corresponds to one fusion region, and the face key features of each face in the to-be-fused face image are fused into the corresponding fusion region of the fusion template image. The fusion regions may be identical or different, and a fused image containing multiple different face key features is obtained. For example, if the captured original image includes two faces, the two faces are detected by a face detection model, the face key features of the two faces are extracted, and a to-be-fused face image containing the two faces is generated from the extracted key features. The fusion template image includes two fusion regions, one being a cat face and the other a rabbit face; one of the faces is fused into the cat face and the other into the rabbit face, yielding a cat face and a rabbit face that each carry human face features.
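The two-face example above can be sketched as follows. The region names, the rectangle encoding, and the in-order pairing of faces to regions are illustrative assumptions; the correspondence may equally be random, position-based, or otherwise customized.

```python
def fuse(template, regions, faces):
    # regions: {name: (x, y, w, h)} in the template; faces: small face
    # images, paired with the regions in order (illustrative rule).
    out = [row[:] for row in template]
    for (name, (x, y, w, h)), face in zip(regions.items(), faces):
        for j in range(h):
            for i in range(w):
                out[y + j][x + i] = face[j][i]
    return out

template = [[0] * 4 for _ in range(2)]
regions = {"cat": (0, 0, 2, 2), "rabbit": (2, 0, 2, 2)}
faces = [[[1, 1], [1, 1]], [[2, 2], [2, 2]]]
result = fuse(template, regions, faces)
```

Each face lands in exactly one region, so the two regions of the target fused image carry the features of different faces.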
In one embodiment, when the fusion regions differ and the faces in the to-be-fused face image differ, the correspondence between faces and fusion regions can be customized. For example, the correspondence may be set randomly, determined according to the positions of the to-be-fused faces and the fusion regions, or determined according to gender so that each to-be-fused face matches a corresponding region.
In one embodiment, fusing the face key features of the to-be-fused face with the corresponding fusion region to obtain the fused image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image. A fusion ratio of the face key features of the to-be-fused face image is determined according to region parameters of the fusion region, and the face fusion is performed according to the corresponding fusion ratio. When the fusion ratio is 1, the pixel values of the pixels of the corresponding fusion region are directly replaced with the pixel values of the pixels corresponding to the face key features; this process is simple to implement and achieves a good effect.
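A minimal sketch of the fusion ratio described above: each region pixel is blended with the corresponding key-feature pixel at ratio `alpha` (an illustrative stand-in name for the fusion ratio), and `alpha = 1` reduces to the direct pixel replacement mentioned in the text.

```python
def blend_pixel(feature_value, region_value, alpha):
    # alpha is the fusion ratio; 1.0 means direct replacement of the
    # region pixel by the key-feature pixel.
    return round(alpha * feature_value + (1 - alpha) * region_value)

direct = blend_pixel(200, 100, 1)    # ratio 1: key feature wins
half = blend_pixel(200, 100, 0.5)    # ratio 0.5: even blend
```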
In one embodiment, the number of fusion regions is obtained, the correspondence between the faces in the to-be-fused face image and the fusion regions is determined according to a preset custom rule, and the to-be-fused faces are fused with their corresponding fusion regions according to the correspondence.
In one embodiment, fusing the face key features of the to-be-fused face with the corresponding fusion region to obtain the fused image includes: obtaining the number of fusion regions, replicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions, and fusing each to-be-fused face image into its corresponding fusion region of the fusion template to obtain the target fused image.
Specifically, when there are multiple fusion regions and one to-be-fused face, the number of fusion regions in the fusion image is detected, the to-be-fused face image is replicated according to the number of fusion regions to obtain as many to-be-fused face images as there are fusion regions, the fusion data of the corresponding to-be-fused face is determined for each fusion region, and the pixel values of the corresponding pixels in each fusion region are replaced with the fusion data.
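The replicate-then-fuse step for one face and several fusion regions can be sketched as below; the deep copy keeps each replica independent, so each region can later apply its own fusion data. The function name is an illustrative assumption.

```python
def replicate_face(face, region_count):
    # One independent copy of the to-be-fused face per fusion region.
    return [[row[:] for row in face] for _ in range(region_count)]

copies = replicate_face([[9, 9]], 3)
copies[0][0][0] = 5  # editing one replica leaves the others intact
```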
In the above image fusion method, a to-be-fused face image of the face in the original image is obtained, the to-be-fused face image being an image generated according to the key features of the face in the original image; a fusion template image that includes a fusion region corresponding to the face is obtained; and the to-be-fused face image is fused into the fusion region of the fusion template image to obtain a target fused image. When faces are fused using the original image directly, the processing speed is slow and the efficiency is low, which cannot meet users' demand for speed and convenience. By performing the fusion on the basis of the generated to-be-fused face, the processing speed and efficiency can be improved.
Step S301: detect the face in the original image by a face detection model to obtain an intermediate face.
Step S302: extract face features of the intermediate face to obtain face key features.
Step S303: generate a to-be-fused face image according to the face key features.
In this embodiment, the dimensions of the face in the to-be-fused face image and of the corresponding intermediate face are identical.
Specifically, the face detection model is a mathematical model for locating faces in an image, including but not limited to a deep learning network model, a convolutional network model, and the like. The model parameters of the mathematical model may be obtained by automatic machine learning, or may be customized manually according to requirements. The intermediate face refers to the face obtained through detection by the face detection model; the face detection model can detect the faces present in the original image, that is, determine the position regions of the faces in the original image. The region of the detected face in the original image is taken as the intermediate face, and an intermediate face image containing the intermediate face is obtained.
The face key features of the intermediate face are extracted by a feature extraction algorithm; the face key features may refer to features of the facial features, position features of the face, face contour features, and the like. A face with the same dimensions as the intermediate face is generated according to the extracted face key features and a preset generation rule. Having the same dimensions means that the dimension information of the face in the generated to-be-fused face image is identical to that of the corresponding intermediate face.
In one embodiment, the original image includes multiple faces and multiple intermediate faces are detected; the face key features of each intermediate face are extracted, and a face with the same dimensions as each intermediate face is generated according to its face key features.
Specifically, when the face detection model detects multiple faces, the face key features of each detected face are extracted by the feature extraction algorithm, and a face corresponding to each detected face is generated according to the face key features extracted for it.
In one embodiment, generating the face with the same dimensions as the intermediate face according to the face key features includes: retaining the pixel values of the pixels relevant to the face key features, and substituting the pixel values of the pixels irrelevant to the face key features with a preset pixel value. The preset pixel value is a customized pixel value, which may be calculated as the pixel mean of one region of the image, the pixel mean of each region, a specific pixel value of a specific region, or the like.
In one embodiment, a target pixel mean is calculated from the pixel values of the pixels relevant to the face key features according to a pixel value calculation rule, and the pixel values of the pixels relevant to the face key features are substituted with the target pixel mean. The pixel calculation rule is a pre-customized calculation rule, such as performing a weighted summation of the pixel values, or selecting the pixel value at a characteristic position among the preset pixels as the target pixel.
The face in the original image is detected by the face detection model, realizing automatic face detection; the detected face key features are extracted, and a face with the same dimensions as the original image is generated according to the face key features. Because the face key features can represent the main features of the face, and are data features obtained by screening the face data, using the face generated from the face key features can effectively improve data processing efficiency.
In one embodiment, generating the to-be-fused face with the same dimensions as the intermediate face according to the face key features includes:
Step S401: divide the intermediate face into regions according to a preset division rule to obtain multiple sub-regions.
Specifically, the preset division rule refers to a preset image division rule for dividing the intermediate face. For example, a sliding window is set, and the division regions are determined according to the window size and sliding step of the sliding window to obtain multiple sub-regions, the size of each sub-region being the same as the size of the sliding window, with no overlap between the sub-regions. Taking a 100*100 image as an example, with a window size of 10*10 and a sliding step of 10, the image is divided into 100 image regions of 10*10.
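The 100*100 example above can be sketched as follows; with a 10*10 window and stride 10, the sub-regions tile the image without overlap.

```python
def divide(width, height, win, step):
    # Top-left corner plus size for each sliding-window sub-region.
    return [(x, y, win, win)
            for y in range(0, height - win + 1, step)
            for x in range(0, width - win + 1, step)]

blocks = divide(100, 100, 10, 10)
```

When the stride equals the window size, as here, no two sub-regions share a pixel.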
Step S402: determine, according to the face key features, the pixels in each sub-region that are irrelevant to the face key features.
Specifically, since the face key features are image features extracted from the intermediate face, the pixels in each sub-region that are relevant and irrelevant to the face key features are determined according to the correspondence between the face key features and the intermediate face. For example, in one sub-region, if the face key feature is the user's eyes, then the pixels irrelevant to the eyes are the pixels irrelevant to the face key features.
Step S403: calculate a first pixel mean of the pixels of each sub-region.
Step S404: substitute the pixel values of the irrelevant pixels in each sub-region with the first pixel mean of that sub-region.
Specifically, the first pixel mean of each sub-region is the mean of the pixel values of all pixels of the corresponding sub-region. The first pixel mean may be obtained by weighted averaging of the pixel values of all pixels of the corresponding sub-region, and the weighting coefficient of each pixel in the weighted averaging can be customized according to requirements. The pixel values of the irrelevant pixels in each sub-region are substituted with the first pixel mean of that sub-region. For example, if sub-region A contains pixels A1, A2, A3, etc. that are irrelevant to the face key features, and the pixel mean of sub-region A is 70, then the pixel values of pixels A1, A2, and A3 are set to 70. Substituting the pixel values of irrelevant pixels with the pixel mean reduces the data used to express the image, thereby improving image processing efficiency.
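Steps S403-S404 can be sketched as below: the first pixel mean is taken over all pixels of the sub-region and then written into the pixels marked irrelevant. The index set and pixel values are illustrative, chosen so the mean is 70 as in the example in the text.

```python
def substitute_irrelevant(values, irrelevant):
    # First pixel mean over ALL pixels of the sub-region (unweighted here);
    # only the irrelevant pixels are overwritten with it.
    mean = round(sum(values) / len(values))
    return [mean if i in irrelevant else v for i, v in enumerate(values)]

region_a = [60, 80, 50, 90]                  # mean is 70
fused = substitute_irrelevant(region_a, {2, 3})
```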
In one embodiment, the weighting coefficients of the pixels relevant to the face key features are smaller than those of the pixels irrelevant to the face key features.
In another embodiment, the weighting coefficients of all pixels in a sub-region are identical.
In yet another embodiment, the weighting coefficient of each irrelevant pixel in a sub-region is determined according to its distance from the relevant pixels; for example, the farther an irrelevant pixel is from the relevant pixels, the larger its weighting coefficient.
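The weighted-averaging variants above share a single formula; a sketch follows, with the weighting scheme (equal, relevance-based, or distance-based) supplied by the caller rather than fixed here.

```python
def weighted_mean(values, weights):
    # Weighted average of pixel values; equal weights reduce this to
    # the plain (first) pixel mean.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

biased = weighted_mean([10, 20], [1, 3])  # second pixel weighted higher
plain = weighted_mean([10, 20], [1, 1])   # equal weights: ordinary mean
```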
Fig. 2 is a schematic flowchart of the image fusion method in one embodiment. It should be understood that although the steps in the flowchart of Fig. 2 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in Fig. 2 may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 3, an image processing apparatus 200 is provided, comprising:
an original image acquisition module 201, configured to acquire an original image that includes a face;
a to-be-fused image acquisition module 202, configured to obtain a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated according to key features of the face in the original image;
a fusion template image acquisition module 203, configured to obtain a fusion template image, the fusion template image including a fusion region corresponding to the face; and
an image fusion module 204, configured to fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, the above image processing apparatus 200 further includes:
a face detection module, configured to detect the face in the original image by a face detection model to obtain an intermediate face;
a feature extraction module, configured to extract face features of the intermediate face to obtain face key features; and
a face generation module, configured to generate a to-be-fused face image according to the face key features, the face in the to-be-fused face image having the same dimensions as the corresponding intermediate face.
In one embodiment, the face generation module comprises:
an image division unit, configured to divide the intermediate face into regions according to a preset division rule to obtain multiple sub-regions;
a pixel determination unit, configured to determine, according to the face key features, the pixels in each sub-region that are irrelevant to the face key features;
a first mean calculation unit, configured to calculate a first pixel mean of the pixels of each sub-region; and
a pixel value updating unit, configured to substitute the pixel values of the irrelevant pixels in each sub-region with the first pixel mean of that sub-region.
In one embodiment, the image fusion module is further configured to obtain the number of fusion regions, replicate the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions, and fuse each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fused image.
In one embodiment, the above image fusion apparatus further includes:
a fusion region determination module, configured to obtain the number of faces in the to-be-fused face image and determine the number of fusion regions according to the number of faces in the to-be-fused face image; and
a target fusion region acquisition module, further configured to obtain a fusion template image containing as many fusion regions as there are regions to be fused.
The image fusion module is further configured to fuse each face in the to-be-fused face image into the corresponding fusion region of the fusion template to obtain the target fused image.
In one embodiment, the image fusion module is further configured to replace the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image.
Fig. 4 shows an internal structure diagram of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in Fig. 1. As shown in Fig. 4, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the image fusion method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the image fusion method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device of the computer device may be a touch layer covering the display screen, may be a key, trackball, or trackpad provided on the housing of the computer device, or may be an external keyboard, trackpad, mouse, or the like.
Those skilled in the art will understand that the structure shown in Fig. 4 is only a block diagram of part of the structure relevant to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; the specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, the image data processing apparatus provided by the present application may be implemented in the form of a computer program, and the computer program can run on the computer device shown in Fig. 4. The memory of the computer device may store the program modules constituting the image data processing apparatus, for example, the original image acquisition module 201, the to-be-fused image acquisition module 202, the fusion template image acquisition module 203, and the image fusion module 204 shown in Fig. 3. The computer program constituted by the program modules causes the processor to execute the steps of the image fusion method of the embodiments of the present application described in this specification.
For example, the computer device shown in Fig. 4 may acquire the original image that includes a face through the original image acquisition module 201 of the image data processing apparatus shown in Fig. 3. The computer device may obtain, through the to-be-fused image acquisition module 202, the to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated according to the key features of the face in the original image. The computer device may obtain, through the fusion template image acquisition module 203, the fusion template image, the fusion template image including a fusion region corresponding to the face. The computer device may fuse, through the image fusion module 204, the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor performs the following steps: obtaining a to-be-fused face image of a face in an original image, the to-be-fused face image being an image generated according to key features of the face in the original image; obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, before obtaining the to-be-fused face image of the face in the original image, the processor, when executing the computer program, further performs the following steps: detecting the face in the original image by a face detection model to obtain an intermediate face; extracting face features of the intermediate face to obtain face key features; and generating the to-be-fused face image according to the face key features, the dimensions of the face in the to-be-fused face image and of the corresponding intermediate face being identical.
In one embodiment, generating the to-be-fused face image according to the face key features includes: dividing the intermediate face into regions according to a preset division rule to obtain multiple sub-regions; determining, according to the face key features, the pixels in each sub-region that are irrelevant to the face key features; calculating a first pixel mean of the pixels of each sub-region; and substituting the pixel values of the irrelevant pixels in each sub-region with the first pixel mean of that sub-region.
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining, according to the face key features, the pixels in each sub-region that are relevant to the face key features; calculating a second pixel mean of the relevant pixels in each sub-region; and substituting the pixel values of the relevant pixels in each sub-region with the second pixel mean.
In one embodiment, when there are multiple fusion regions and one to-be-fused face, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: obtaining the number of fusion regions; replicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and fusing each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fused image.
In one embodiment, when the to-be-fused face image includes multiple faces, the processor, when executing the computer program, further performs the following steps: obtaining the number of faces in the to-be-fused face image, and determining the number of fusion regions according to the number of faces in the to-be-fused face image. Obtaining the fusion template image includes: obtaining a fusion template image containing as many fusion regions as there are regions to be fused. Fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: fusing each face in the to-be-fused face image into the corresponding fusion region of the fusion template to obtain the target fused image.
In one embodiment, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program performs the following steps: obtaining a to-be-fused face image of a face in an original image, the to-be-fused face image being an image generated according to key features of the face in the original image; obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, before obtaining the to-be-fused face image of the face in the original image, the processor, when executing the computer program, further performs the following steps: detecting the face in the original image by a face detection model to obtain an intermediate face; extracting face features of the intermediate face to obtain face key features; and generating the to-be-fused face image according to the face key features, the dimensions of the face in the to-be-fused face image and of the corresponding intermediate face being identical.
In one embodiment, generating the to-be-fused face image according to the face key features includes: dividing the intermediate face into regions according to a preset division rule to obtain multiple sub-regions; determining, according to the face key features, the pixels in each sub-region that are irrelevant to the face key features; calculating a first pixel mean of the pixels of each sub-region; and substituting the pixel values of the irrelevant pixels in each sub-region with the first pixel mean of that sub-region.
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining, according to the face key features, the pixels in each sub-region that are relevant to the face key features; calculating a second pixel mean of the relevant pixels in each sub-region; and substituting the pixel values of the relevant pixels in each sub-region with the second pixel mean.
In one embodiment, when there are multiple fusion regions and one to-be-fused face, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: obtaining the number of fusion regions; replicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and fusing each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fused image.
In one embodiment, when the to-be-fused face image includes multiple faces, the processor, when executing the computer program, further performs the following steps: obtaining the number of faces in the to-be-fused face image, and determining the number of fusion regions according to the number of faces in the to-be-fused face image. Obtaining the fusion template image includes: obtaining a fusion template image containing as many fusion regions as there are regions to be fused. Fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: fusing each face in the to-be-fused face image into the corresponding fusion region of the fusion template to obtain the target fused image.
In one embodiment, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fused image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present invention, enabling those skilled in the art to understand or implement the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features applied herein.