CN109801249A - Image fusion method, apparatus, computer device and storage medium - Google Patents

Image fusion method, apparatus, computer device and storage medium

Info

Publication number
CN109801249A
CN109801249A (application CN201811615913.9A)
Authority
CN
China
Prior art keywords
face
image
fused
fusion
fusion region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811615913.9A
Other languages
Chinese (zh)
Inventor
傅声华
吴勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen Hawker Internet Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hawker Internet Co Ltd filed Critical Shenzhen Hawker Internet Co Ltd
Priority to CN201811615913.9A priority Critical patent/CN109801249A/en
Publication of CN109801249A publication Critical patent/CN109801249A/en
Pending legal-status Critical Current


Abstract

This application relates to an image fusion method, apparatus, computer device, and storage medium. The method includes: obtaining a to-be-fused face image of a face in an original image, the to-be-fused face image being an image generated from key features of the face in the original image; obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image. Because the to-be-fused face image generated from the face in the original image represents the face by its key features, the fusion speed of image fusion is improved.

Description

Image fusion method, apparatus, computer device and storage medium
Technical field
This application relates to the field of computer vision, and in particular to an image fusion method, apparatus, computer device, and storage medium.
Background technique
With the development of the Internet, computer vision has been applied in more and more technical fields, image fusion in particular. Image fusion is conventionally performed directly on the original images. As capture devices improve, images carry ever richer content, so when original images are processed directly, the large amount of image data in memory makes the fusion speed low.
Summary of the invention
To solve the above technical problem, this application provides an image fusion method, apparatus, computer device, and storage medium.
In a first aspect, this application provides an image fusion method, including:
obtaining an original image that includes a face;
obtaining a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image;
obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and
fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In a second aspect, this application provides an image processing apparatus, including:
an original image obtaining module, configured to obtain an original image that includes a face;
a to-be-fused image obtaining module, configured to obtain a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image;
a fusion template image obtaining module, configured to obtain a fusion template image, the fusion template image including a fusion region corresponding to the face; and
an image fusion module, configured to fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
A computer device includes a memory, a processor, and a computer program stored in the memory and executable by the processor. When executing the computer program, the processor performs the following steps:
obtaining an original image that includes a face;
obtaining a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image;
obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and
fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
A computer-readable storage medium stores a computer program. When the computer program is executed by a processor, the following steps are performed:
obtaining an original image that includes a face;
obtaining a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image;
obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and
fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In the image fusion method, apparatus, computer device, and storage medium above, the method includes: obtaining an original image that includes a face; obtaining a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image; obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image. Because the to-be-fused face image generated from the face in the original image represents the face by its key features, the fusion speed of image fusion is improved.
Detailed description of the invention
The drawings herein are incorporated into and form part of this specification; they illustrate embodiments of the invention and, together with the specification, explain its principles.
To describe the technical solutions in the embodiments of the invention or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are introduced briefly below. Obviously, a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a diagram of an application environment of an image fusion method in one embodiment;
Fig. 2 is a flowchart of an image fusion method in one embodiment;
Fig. 3 is a structural block diagram of an image fusion apparatus in one embodiment;
Fig. 4 is a diagram of the internal structure of a computer device in one embodiment.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Fig. 1 is a diagram of an application environment of an image fusion method in one embodiment. Referring to Fig. 1, the image fusion method is applied to an image data system. The image data system includes a terminal 110 and a server 120, connected through a network. The terminal or the server obtains an original image that includes a face; obtains a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image; obtains a fusion template image that includes a fusion region corresponding to the face; and fuses the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image. Because the to-be-fused face image generated from the face in the original image represents the face by its key features, the fusion speed of image fusion is improved. The terminal 110 may specifically be a desktop terminal or a mobile terminal; the mobile terminal may be at least one of a mobile phone, a tablet computer, a laptop, or the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in Fig. 2, in one embodiment an image fusion method is provided. This embodiment is described mainly as applied to the terminal 110 (or the server 120) in Fig. 1. Referring to Fig. 2, the image fusion method specifically includes the following steps:
Step S201: obtain an original image that includes a face.
Step S202: obtain a to-be-fused face image of the face in the original image.
In this embodiment, the to-be-fused face image is an image generated from key features of the face in the original image.
Specifically, the original image is an image captured in a photographing interface and includes at least one face. Feature extraction is performed on the original image, and the face image generated from the extracted features is the to-be-fused face image. The original image may include one or more faces. Face key features are data features that describe a face; they may be extracted with a common feature extraction algorithm or with a custom one. The to-be-fused face image is generated from the face key features that the feature extraction algorithm extracts from the original image.
Step S203: obtain a fusion template image.
In this embodiment, the fusion template image includes a fusion region corresponding to the face.
Step S204: fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
Specifically, the fusion template image is a template image to be fused into and includes one or more fusion regions. It may be an image selected from the images the user has saved on the terminal, or it may be determined from the key features of the face to be fused.
In one embodiment, the number of faces in the to-be-fused face image is obtained; the number of fusion regions is determined from the number of faces in the to-be-fused face image; a fusion template image is obtained in which the number of fusion regions equals the number of faces in the to-be-fused face image; and each face in the to-be-fused face image is fused into its corresponding fusion region of the fusion template image, yielding a template fusion image whose fusion regions match the faces one to one.
Specifically, the number of faces in the to-be-fused face image is detected; the number of fusion regions is determined from that face count; and a fusion template image is obtained according to the number of fusion regions, where the number of fusion regions in the fusion template image equals the number of faces in the to-be-fused face image. Each face corresponds to one fusion region, and the key features of each face in the to-be-fused face image are fused into the corresponding fusion region of the fusion template image. The fusion regions may be identical or different, and the result contains multiple different sets of face key features. For example, suppose the captured original image contains two faces: the two faces are detected by a face detection model, the key features of both faces are extracted, and a to-be-fused face image containing both faces is generated from them. The fusion template image contains two fusion regions, say a cat face and a rabbit face; one of the faces is fused onto the cat face and the other onto the rabbit face, yielding a cat face and a rabbit face that each carry human facial features.
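The one-face-per-region fusion described above can be sketched as a pixel-level copy. This is an illustrative sketch, not the patent's implementation: the (top, left) region coordinates, the rectangular face patches, and the direct pixel replacement are all assumptions standing in for the unspecified fusion operation.

```python
import numpy as np

def fuse_faces_into_regions(template, faces, regions):
    """Fuse each to-be-fused face into its matching fusion region, one to one.

    template -- fusion template image as a NumPy array
    faces    -- list of face patches, one per fusion region
    regions  -- list of (top, left) corners of the fusion regions (assumed layout)
    """
    if len(faces) != len(regions):
        raise ValueError("fusion region count must equal face count")
    out = template.copy()
    for face, (top, left) in zip(faces, regions):
        h, w = face.shape[:2]
        out[top:top + h, left:left + w] = face  # direct pixel replacement
    return out
```

With two faces and two regions, each patch lands in its own region while the rest of the template is untouched.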
In one embodiment, when the fusion regions differ and the faces in the to-be-fused face image also differ, the correspondence between faces and fusion regions can be customized: it may be set randomly, determined from the positions of the to-be-fused faces and the fusion regions, or determined by matching gender, among other options.
In one embodiment, fusing the key features of a to-be-fused face with the corresponding fusion region to obtain the fused image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image. A fusion ratio for the face key features of the to-be-fused face image is determined from the region parameters of the fusion region, and face fusion is performed according to the corresponding fusion ratio. When the fusion ratio is 1, the pixel values of the pixels in the corresponding fusion region are directly replaced with the pixel values of the pixels corresponding to the face key features, which is simple to implement and effective.
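The fusion-ratio behavior described above (ratio 1 meaning direct pixel replacement) can be written as an alpha blend. A minimal sketch under assumptions: the ratio is treated as a single scalar per region, and how the "region parameters" determine it is left to the caller.

```python
import numpy as np

def fuse_with_ratio(template, face, top, left, ratio=1.0):
    """Blend a face patch into a fusion region; ratio=1.0 replaces pixels outright."""
    h, w = face.shape[:2]
    region = template[top:top + h, left:left + w].astype(np.float64)
    blended = ratio * face.astype(np.float64) + (1.0 - ratio) * region
    out = template.copy()
    out[top:top + h, left:left + w] = blended.astype(template.dtype)
    return out
```

A ratio of 0.5 mixes face and template equally; a ratio of 1.0 reproduces the simple replacement case the patent highlights.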
In one embodiment, the number of fusion regions is obtained, the correspondence between the faces in the to-be-fused face image and the fusion regions is determined according to a preset custom rule, and the to-be-fused faces are fused with their corresponding fusion regions according to that correspondence.
In one embodiment, fusing the key features of a to-be-fused face with the corresponding fusion region to obtain the fused image includes: obtaining the number of fusion regions; duplicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and fusing each to-be-fused face image into its corresponding fusion region of the fusion template to obtain the target fused image.
Specifically, when there are multiple fusion regions and only one to-be-fused face, the number of fusion regions in the fusion template image is detected, the to-be-fused face image is duplicated according to that number to obtain as many to-be-fused face images as there are fusion regions, the fusion data of the corresponding to-be-fused face is determined for each fusion region, and the pixel values of the pixels in each fusion region are replaced with the fusion data.
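The duplicate-then-fuse embodiment (one face, several fusion regions) reduces to copying the same patch into every region. A sketch with assumed (top, left) region coordinates:

```python
import numpy as np

def fuse_one_face_into_all_regions(template, face, regions):
    """Duplicate the single to-be-fused face once per fusion region and fuse each copy."""
    copies = [face.copy() for _ in regions]  # as many copies as fusion regions
    out = template.copy()
    for patch, (top, left) in zip(copies, regions):
        h, w = patch.shape[:2]
        out[top:top + h, left:left + w] = patch  # replace the region's pixels
    return out
```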
In the image fusion method above, a to-be-fused face image of the face in the original image is obtained, the to-be-fused face image being an image generated from key features of the face in the original image; a fusion template image that includes a fusion region corresponding to the face is obtained; and the to-be-fused face image is fused into the fusion region of the fusion template image to obtain a target fused image. When faces are fused using the original image directly, processing is slow and inefficient and cannot meet users' demands for speed and convenience. Performing fusion on the basis of the generated to-be-fused face improves processing speed and efficiency.
Step S301: detect the face in the original image with a face detection model to obtain an intermediate face.
Step S302: extract the facial features of the intermediate face to obtain face key features.
Step S303: generate a to-be-fused face image from the face key features.
In this embodiment, the dimensions of the face in the to-be-fused face image are identical to those of the corresponding intermediate face.
Specifically, the face detection model is a mathematical model for locating faces in an image, including but not limited to a deep learning network model or a convolutional network model. The model parameters of the mathematical model may be learned automatically by machine or set manually as required. The intermediate face is the face obtained by detection with the face detection model: the model detects the faces present in the original image, that is, determines the positions of the faces in the original image, takes each detected face region in the original image as an intermediate face, and obtains an intermediate face image containing the intermediate face.
The face key features of the intermediate face are extracted by a feature extraction algorithm; face key features may refer to features of the facial organs, position features of the face, facial contour features, and so on. A face with the same dimensions as the intermediate face is generated from the extracted face key features according to a preset generation rule. "Same dimensions" means that the dimension information of the face in the generated to-be-fused face image is identical to that of the corresponding intermediate face.
In one embodiment, the original image contains multiple faces, so multiple intermediate faces are detected; the face key features of each intermediate face are extracted, and a face with the same dimensions as each intermediate face is generated from its key features.
Specifically, when the face detection model detects multiple faces, the key features of each detected face are extracted by the feature extraction algorithm, and a face corresponding to each detected face is generated from its key features.
In one embodiment, generating a face with the same dimensions as the intermediate face from the face key features includes: retaining the pixel values of the pixels relevant to the face key features, and substituting a preset pixel value for the pixels irrelevant to the face key features. The preset pixel value is a custom pixel value; it may be computed as the pixel mean of one region of the image, the pixel mean of each region, a specific pixel value of a specific region, and so on.
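The preset-pixel-value embodiment above can be sketched as a mask operation: key-feature pixels keep their values, everything else gets the preset value. The boolean mask marking key-feature pixels is an assumed input; how it is derived from the extracted key features is not specified here.

```python
import numpy as np

def preset_value_face(intermediate_face, key_mask, preset_value):
    """Retain pixels relevant to the face key features; write the preset value elsewhere.

    key_mask -- boolean array, True where a pixel belongs to a key feature (assumed given)
    """
    out = intermediate_face.copy()
    out[~key_mask] = preset_value  # irrelevant pixels receive the preset value
    return out
```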
In one embodiment, a target pixel mean is calculated from the pixel values of the pixels relevant to the face key features according to a pixel value calculation rule, and the pixel values of the pixels relevant to the face key features are substituted with the target pixel mean. The pixel calculation rule is a predefined custom rule, such as a weighted sum of the pixel values, or taking the pixel value of a selected feature position among preset pixels as the target pixel.
The faces in the original image are detected by the face detection model, realizing automatic face detection; the detected face key features are extracted, and a face with the same dimensions as in the original image is generated from them. Because the face key features represent the main features of the face and are data features obtained by screening the face data, processing the face generated from the face key features effectively improves data processing efficiency.
In one embodiment, generating a to-be-fused face with the same dimensions as the intermediate face from the face key features includes:
Step S401: divide the intermediate face into regions according to a preset division rule to obtain multiple subregions.
Specifically, the preset division rule is a predefined image division rule for dividing the intermediate face. For example, a sliding window may be set: the divided regions are determined by the window size and the sliding stride, each subregion has the same size as the sliding window, and the subregions do not overlap. Taking a 100×100 image with a 10×10 window and a stride of 10 as an example, the image is divided into 100 regions of 10×10.
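The sliding-window division in the 100×100 example can be sketched directly; a stride equal to the window size yields non-overlapping subregions, here 100 windows of 10×10.

```python
import numpy as np

def divide_into_subregions(image, window=10, stride=10):
    """Split an image into sliding-window subregions (no overlap when stride == window)."""
    h, w = image.shape[:2]
    return [image[r:r + window, c:c + window]
            for r in range(0, h - window + 1, stride)
            for c in range(0, w - window + 1, stride)]
```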
Step S402: determine, according to the face key features, the pixels in each subregion that are irrelevant to the face key features.
Specifically, since the face key features are image features extracted from the intermediate face, the pixels in each subregion that are relevant or irrelevant to the face key features are determined from the correspondence between the face key features and the intermediate face. For example, if the face key feature within a subregion is the user's eye, the pixels unrelated to the eye are the pixels irrelevant to the face key features.
Step S403: calculate a first pixel mean of the pixels of each subregion.
Step S404: substitute the first pixel mean of each subregion for the pixel values of the irrelevant pixels in that subregion.
Specifically, the first pixel mean of a subregion is the mean over all pixels of that subregion; it may be obtained by a weighted average of the pixel values of all pixels of the subregion, where the weighting coefficient of each pixel can be customized as required. The pixel values of the irrelevant pixels in each subregion are substituted with the first pixel mean of that subregion. For example, if subregion A contains irrelevant pixels A1, A2, A3, etc., and the pixel mean of subregion A is 70, then the pixel values of A1, A2, and A3 are set to 70. Substituting the pixel mean for irrelevant pixel values reduces the amount of data needed to express the image, thereby improving image processing efficiency.
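Steps S403–S404, including the A1/A2/A3 example, can be sketched as follows. The key-feature mask is again an assumed input, and the first pixel mean here is the plain (unweighted) mean over the whole subregion.

```python
import numpy as np

def substitute_irrelevant_pixels(subregion, key_mask):
    """Replace pixels irrelevant to the face key features with the subregion's first pixel mean."""
    out = subregion.astype(np.float64)   # astype copies, so the input is untouched
    first_mean = out.mean()              # mean over all pixels of the subregion
    out[~key_mask] = first_mean          # only irrelevant pixels are overwritten
    return out
```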
In one embodiment, the weighting coefficient of the pixels relevant to the face key features is smaller than that of the pixels irrelevant to the face key features.
In another embodiment, the weighting coefficients of all pixels in a subregion are identical.
In yet another embodiment, the weighting coefficient of an irrelevant pixel is determined by its distance from the relevant pixels in the subregion; for example, the farther an irrelevant pixel is from the relevant pixels, the larger its weighting coefficient.
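The distance-based weighting variant can be sketched as below. Chebyshev distance to the nearest relevant pixel is an assumption, since the patent does not fix a distance metric, and weight = 1 + distance is an illustrative choice.

```python
import numpy as np

def distance_weights(key_mask):
    """Weighting coefficients: irrelevant pixels farther from relevant pixels weigh more."""
    h, w = key_mask.shape
    key_coords = np.argwhere(key_mask)          # positions of relevant pixels
    weights = np.ones((h, w), dtype=np.float64)
    for r in range(h):
        for c in range(w):
            if not key_mask[r, c] and len(key_coords) > 0:
                # Chebyshev distance to the nearest relevant pixel (assumed metric)
                d = np.abs(key_coords - np.array([r, c])).max(axis=1).min()
                weights[r, c] = 1.0 + d
    return weights
```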
Fig. 2 is a flowchart of an image fusion method in one embodiment. It should be understood that although the steps in the flowchart of Fig. 2 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless expressly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2 may include multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 3, an image processing apparatus 200 is provided, including:
an original image obtaining module 201, configured to obtain an original image that includes a face;
a to-be-fused image obtaining module 202, configured to obtain a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image;
a fusion template image obtaining module 203, configured to obtain a fusion template image, the fusion template image including a fusion region corresponding to the face; and
an image fusion module 204, configured to fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, the image processing apparatus 200 further includes:
a face detection module, configured to detect the face in the original image with a face detection model to obtain an intermediate face;
a feature extraction module, configured to extract the facial features of the intermediate face to obtain face key features; and
a face generation module, configured to generate a to-be-fused face image from the face key features, the dimensions of the face in the to-be-fused face image being identical to those of the corresponding intermediate face.
In one embodiment, the face generation module includes:
an image division unit, configured to divide the intermediate face into regions according to a preset division rule to obtain multiple subregions;
a pixel determination unit, configured to determine, according to the face key features, the pixels in each subregion that are irrelevant to the face key features;
a first mean calculation unit, configured to calculate a first pixel mean of the pixels of each subregion; and
a pixel value updating unit, configured to substitute the first pixel mean of each subregion for the pixel values of the irrelevant pixels in that subregion.
In one embodiment, the image fusion module is further configured to obtain the number of fusion regions, duplicate the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions, and fuse each to-be-fused face image into its corresponding fusion region of the fusion template image to obtain a target fused image.
In one embodiment, the image fusion apparatus further includes:
a fusion region determining module, configured to obtain the number of faces in the to-be-fused face image and determine the number of fusion regions according to that number.
The fusion template image obtaining module is further configured to obtain a fusion template image containing as many fusion regions as there are faces to be fused.
The image fusion module is further configured to fuse each face in the to-be-fused face image into its corresponding fusion region of the fusion template to obtain a target fused image.
In one embodiment, the image fusion module is further configured to replace the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fused image.
Fig. 4 shows the internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in Fig. 1. As shown in Fig. 4, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the image fusion method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the image fusion method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
A person skilled in the art will understand that the structure shown in Fig. 4 is only a block diagram of the part of the structure relevant to the solution of this application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different component arrangement.
In one embodiment, the image processing apparatus provided by this application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in Fig. 4. The memory of the computer device may store the program modules that constitute the image processing apparatus, such as the original image obtaining module 201, the to-be-fused image obtaining module 202, the fusion template image obtaining module 203, and the image fusion module 204 shown in Fig. 3. The computer program constituted by these program modules causes the processor to perform the steps of the image fusion method of the embodiments of this application described in this specification.
For example, the computer device shown in Fig. 4 may obtain an original image that includes a face through the original image obtaining module 201 of the image processing apparatus shown in Fig. 3; obtain, through the to-be-fused image obtaining module 202, a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated from key features of the face in the original image; obtain, through the fusion template image obtaining module 203, a fusion template image that includes a fusion region corresponding to the face; and fuse, through the image fusion module 204, the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable by the processor. When executing the computer program, the processor performs the following steps: obtaining a to-be-fused face image of the face in an original image, the to-be-fused face image being an image generated from key features of the face in the original image; obtaining a fusion template image, the fusion template image including a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fused image.
In one embodiment, before the to-be-fused face image of the face in the original image is obtained, the processor, when executing the computer program, further performs the following steps: detecting the face in the original image with a face detection model to obtain an intermediate face; extracting the facial features of the intermediate face to obtain face key features; and generating the to-be-fused face image from the face key features, the dimensions of the face in the to-be-fused face image being identical to those of the corresponding intermediate face.
In one embodiment, generating the to-be-fused face image according to the face key features includes: dividing the intermediate face into regions according to a preset division rule to obtain multiple sub-regions; determining, according to the face key features, the pixels in each sub-region that are unrelated to the face key features; calculating a first pixel mean of the pixels of each sub-region; and substituting the first pixel mean of each sub-region for the pixel values of the unrelated pixels corresponding to that sub-region.
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining, according to the face key features, the pixels in each sub-region that are related to the face key features; calculating a second pixel mean of the related pixels in each sub-region; and substituting the second pixel mean for the pixel values of the related pixels corresponding to each sub-region.
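The two substitution steps above (first pixel mean for unrelated pixels, second pixel mean for related pixels) can be sketched as follows. The uniform grid used as the "preset division rule" is an assumption; the patent does not fix a particular rule:

```python
import numpy as np


def generate_fused_face(face, key_mask, grid=(4, 4)):
    """Divide the intermediate face into sub-regions; in each sub-region,
    replace key-feature-unrelated pixels with the first pixel mean (mean
    of the sub-region's pixels) and key-feature-related pixels with the
    second pixel mean (mean of the related pixels)."""
    face = face.astype(float).copy()
    h, w = face.shape
    gh, gw = grid
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * h // gh, (i + 1) * h // gh)
            xs = slice(j * w // gw, (j + 1) * w // gw)
            sub = face[ys, xs]          # view: edits write back into face
            rel = key_mask[ys, xs]
            # First pixel mean -> unrelated pixels.
            sub[~rel] = sub.mean()
            # Second pixel mean -> related pixels.
            if rel.any():
                sub[rel] = sub[rel].mean()
    return face
```

Because related pixels are averaged only among themselves, the key features are flattened to one value per sub-region while the surrounding texture collapses to the sub-region mean, which is what lets the face be characterized by its key features alone.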
In one embodiment, when there are multiple fusion regions and one to-be-fused face, fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes: obtaining the number of fusion regions; duplicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and fusing each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fusion image.
In one embodiment, when the to-be-fused face image contains multiple faces, the processor, when executing the computer program, further performs the following steps: obtaining the number of faces in the to-be-fused face image, and determining the number of fusion regions according to the number of faces in the to-be-fused face image. Acquiring the fusion template image includes: acquiring a fusion template image containing the same number of fusion regions as the regions to be fused. Fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes: fusing each face in the to-be-fused face image into its corresponding fusion region in the fusion template to obtain the target fusion image.
In one embodiment, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fusion image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fusion image.
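A minimal sketch of the one-face/many-regions case: the to-be-fused face is duplicated once per fusion region and each copy replaces its region in the template. The region coordinate convention and the simple paste-style replacement (standing in for replacement by face key features) are assumptions:

```python
import numpy as np


def fuse_into_template(face, template, regions):
    """Duplicate the to-be-fused face per fusion region and paste each
    copy into the fusion template at that region's top-left corner."""
    out = np.asarray(template).copy()
    # Duplicate: one copy of the to-be-fused face per fusion region.
    copies = [np.asarray(face).copy() for _ in regions]
    for copy, (y0, x0) in zip(copies, regions):
        h, w = copy.shape
        # Replace the fusion region with the face's pixels.
        out[y0:y0 + h, x0:x0 + w] = copy
    return out
```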
In one embodiment, a computer-readable storage medium is provided, storing a computer program. When executed by a processor, the computer program performs the following steps: acquiring the to-be-fused face image of a face in an original image, the to-be-fused face image being an image generated according to the key features of the face in the original image; acquiring a fusion template image, the fusion template image containing a fusion region corresponding to the face; and fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fusion image.
In one embodiment, before acquiring the to-be-fused face image of the face in the original image, when the computer program is executed by the processor, the following steps are further performed: detecting the face in the original image through a face detection model to obtain an intermediate face; extracting the face characteristics of the intermediate face to obtain the face key features; and generating the to-be-fused face image according to the face key features, the face in the to-be-fused face image having the same dimensions as the corresponding intermediate face.
In one embodiment, generating the to-be-fused face image according to the face key features includes: dividing the intermediate face into regions according to a preset division rule to obtain multiple sub-regions; determining, according to the face key features, the pixels in each sub-region that are unrelated to the face key features; calculating a first pixel mean of the pixels of each sub-region; and substituting the first pixel mean of each sub-region for the pixel values of the unrelated pixels corresponding to that sub-region.
In one embodiment, when the computer program is executed by the processor, the following steps are further performed: determining, according to the face key features, the pixels in each sub-region that are related to the face key features; calculating a second pixel mean of the related pixels in each sub-region; and substituting the second pixel mean for the pixel values of the related pixels corresponding to each sub-region.
In one embodiment, when there are multiple fusion regions and one to-be-fused face, fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes: obtaining the number of fusion regions; duplicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and fusing each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fusion image.
In one embodiment, when the to-be-fused face image contains multiple faces, the computer program, when executed by the processor, further performs the following steps: obtaining the number of faces in the to-be-fused face image, and determining the number of fusion regions according to the number of faces in the to-be-fused face image. Acquiring the fusion template image includes: acquiring a fusion template image containing the same number of fusion regions as the regions to be fused. Fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes: fusing each face in the to-be-fused face image into its corresponding fusion region in the fusion template to obtain the target fusion image.
In one embodiment, fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fusion image includes: replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fusion image.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above-described method embodiments may be implemented by instructing relevant hardware through a computer program. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present invention, enabling those skilled in the art to understand or implement the invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features applied herein.

Claims (10)

1. An image fusion method, characterized in that the method includes:
acquiring an original image containing a face;
acquiring a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated according to the key features of the face in the original image;
acquiring a fusion template image, the fusion template image containing a fusion region corresponding to the face; and
fusing the to-be-fused face image into the fusion region of the fusion template image to obtain a target fusion image.
2. The method according to claim 1, characterized in that, before acquiring the to-be-fused face image of the face in the original image, the method further includes:
detecting the face in the original image through a face detection model to obtain an intermediate face;
extracting face characteristics of the intermediate face to obtain the face key features; and
generating the to-be-fused face image according to the face key features, the face in the to-be-fused face image having the same dimensions as the corresponding intermediate face.
3. The method according to claim 2, characterized in that generating the to-be-fused face image according to the face key features includes:
dividing the intermediate face into regions according to a preset division rule to obtain multiple sub-regions;
determining, according to the face key features, the pixels in each sub-region that are unrelated to the face key features;
calculating a first pixel mean of the pixels of each sub-region; and
substituting the first pixel mean of each sub-region for the pixel values of the unrelated pixels corresponding to that sub-region.
4. The method according to claim 1, characterized in that, when there are multiple fusion regions and one to-be-fused face, fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes:
obtaining the number of the fusion regions;
duplicating the to-be-fused face image to obtain as many to-be-fused face images as there are fusion regions; and
fusing each to-be-fused face image into the corresponding fusion region of the fusion template image to obtain the target fusion image.
5. The method according to claim 1, characterized in that, when the to-be-fused face image contains multiple faces, the method further includes:
obtaining the number of faces in the to-be-fused face image, and determining the number of the fusion regions according to the number of faces in the to-be-fused face image;
wherein acquiring the fusion template image includes:
acquiring a fusion template image containing the same number of fusion regions as the regions to be fused;
and fusing the to-be-fused face image into the fusion regions of the fusion template image to obtain the target fusion image includes:
fusing each face in the to-be-fused face image into its corresponding fusion region in the fusion template to obtain the target fusion image.
6. The method according to any one of claims 1-5, characterized in that fusing the to-be-fused face image into the fusion region of the fusion template image to obtain the target fusion image includes:
replacing the corresponding fusion region with the face key features of the to-be-fused face image to obtain the target fusion image.
7. An image fusion device, characterized in that the device includes:
an original image acquisition module, configured to acquire an original image containing a face;
a to-be-fused image acquisition module, configured to acquire a to-be-fused face image of the face in the original image, the to-be-fused face image being an image generated according to the key features of the face in the original image;
a fusion template image acquisition module, configured to acquire a fusion template image, the fusion template image containing a fusion region corresponding to the face; and
an image fusion module, configured to fuse the to-be-fused face image into the fusion region of the fusion template image to obtain a target fusion image.
8. The device according to claim 7, characterized in that the device further includes:
a face detection module, configured to detect the face in the original image through a face detection model to obtain an intermediate face;
a feature extraction module, configured to extract face characteristics of the intermediate face to obtain the face key features; and
a face generation module, configured to generate the to-be-fused face image according to the face key features, the face in the to-be-fused face image having the same dimensions as the corresponding intermediate face.
9. A computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201811615913.9A 2018-12-27 2018-12-27 Image interfusion method, device, computer equipment and storage medium Pending CN109801249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811615913.9A CN109801249A (en) 2018-12-27 2018-12-27 Image interfusion method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN109801249A true CN109801249A (en) 2019-05-24

Family

ID=66557821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811615913.9A Pending CN109801249A (en) 2018-12-27 2018-12-27 Image interfusion method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109801249A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598819A (en) * 2020-05-14 2020-08-28 易思维(杭州)科技有限公司 Self-adaptive image preprocessing method and application thereof
WO2021062998A1 (en) * 2019-09-30 2021-04-08 北京市商汤科技开发有限公司 Image processing method, apparatus and electronic device
CN113361471A (en) * 2021-06-30 2021-09-07 平安普惠企业管理有限公司 Image data processing method, image data processing device, computer equipment and storage medium
WO2021238410A1 (en) * 2020-05-29 2021-12-02 北京沃东天骏信息技术有限公司 Image processing method and apparatus, electronic device, and medium
US11461870B2 (en) 2019-09-30 2022-10-04 Beijing Sensetime Technology Development Co., Ltd. Image processing method and device, and electronic device
WO2022213798A1 (en) * 2021-04-08 2022-10-13 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device and storage medium
WO2023173826A1 (en) * 2022-03-14 2023-09-21 腾讯科技(深圳)有限公司 Image processing method and apparatus, and storage medium, electronic device and product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102027505A (en) * 2008-07-30 2011-04-20 泰塞拉技术爱尔兰公司 Automatic face and skin beautification using face detection
CN102831624A (en) * 2012-09-03 2012-12-19 北京千橡网景科技发展有限公司 Method and device for compressing image
CN103218615A (en) * 2013-04-17 2013-07-24 哈尔滨工业大学深圳研究生院 Face judgment method
CN103226689A (en) * 2012-01-30 2013-07-31 展讯通信(上海)有限公司 Red eye detection method and device and red eye removing method and device
CN103839223A (en) * 2012-11-21 2014-06-04 华为技术有限公司 Image processing method and image processing device
CN107578029A (en) * 2017-09-21 2018-01-12 北京邮电大学 Method, apparatus, electronic equipment and the storage medium of area of computer aided picture certification
CN107808136A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, readable storage medium storing program for executing and computer equipment
CN108256521A (en) * 2017-12-29 2018-07-06 济南中维世纪科技有限公司 For the effective coverage localization method of body color identification
CN108875539A (en) * 2018-03-09 2018-11-23 北京旷视科技有限公司 Expression matching process, device and system and storage medium
CN108985181A (en) * 2018-06-22 2018-12-11 华中科技大学 A kind of end-to-end face mask method based on detection segmentation

Similar Documents

Publication Publication Date Title
CN109801249A (en) Image interfusion method, device, computer equipment and storage medium
CN110135226B (en) Expression animation data processing method and device, computer equipment and storage medium
CN112330685B (en) Image segmentation model training method, image segmentation device and electronic equipment
CN105354231A (en) Image selection method and apparatus, and image processing method and apparatus
JP2016527624A (en) Flexible image layout
CN109118531A (en) Three-dimensional rebuilding method, device, computer equipment and the storage medium of transparent substance
CN109840559A (en) Method for screening images, device and electronic equipment
CN106155477A (en) The method of adjustment of screen-icon size, device and terminal
CN111862124A (en) Image processing method, device, equipment and computer readable storage medium
CN110414570A (en) Image classification model generating method, device, equipment and storage medium
CN109819176A (en) A kind of image pickup method, system, device, electronic equipment and storage medium
CN112001399A (en) Image scene classification method and device based on local feature saliency
WO2019051701A1 (en) Photographic terminal, and photographic parameter setting method therefor based on long short-term memory neural network
CN112288664A (en) High dynamic range image fusion method and device and electronic equipment
CN110378883A (en) Picture appraisal model generating method, image processing method, device, computer equipment and storage medium
JP2023550047A (en) Using interpolation to generate video from still images
CN110378852A (en) Image enchancing method, device, computer equipment and storage medium
CN112950497A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111898573A (en) Image prediction method, computer device, and storage medium
CN109829374A (en) Image processing method, device, computer equipment and storage medium
CN109685015B (en) Image processing method and device, electronic equipment and computer storage medium
CN109658360B (en) Image processing method and device, electronic equipment and computer storage medium
CN114881893B (en) Image processing method, device, equipment and computer readable storage medium
CN110047115B (en) Star image shooting method and device, computer equipment and storage medium
CN111726526A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201030

Address after: 9/F, TCL Multimedia Building, Building D4, International E City, 1001 Zhongshan Garden Road, Xili Street, Nanshan District, Shenzhen, Guangdong Province

Applicant after: SHENZHEN TCL NEW TECHNOLOGY Co.,Ltd.

Address before: Room 201, Building A, No. 1 Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518052 (hosted at Shenzhen Qianhai Business Secretary Co., Ltd.)

Applicant before: SHENZHEN HAWK INTERNET Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190524