CN100433061C - Human face image changing method with camera function cell phone - Google Patents


Info

Publication number
CN100433061C
CN100433061C CNB200410050802XA CN200410050802A
Authority
CN
China
Prior art keywords
image
bmp
interpolation
frame
rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB200410050802XA
Other languages
Chinese (zh)
Other versions
CN1588443A (en)
Inventor
万享
严更真
蓝彩萍
胡胜发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ankai Microelectronics Co ltd
Original Assignee
ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd filed Critical ANKAI (GUANGZHOU) SOFTWARE TECHN Co Ltd
Priority to CNB200410050802XA priority Critical patent/CN100433061C/en
Publication of CN1588443A publication Critical patent/CN1588443A/en
Application granted granted Critical
Publication of CN100433061C publication Critical patent/CN100433061C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The present invention discloses a face image transformation method for a cell phone with a camera function. The source image and the target image are first converted into BMP standard format files. The sizes of the source and target images are then subjected to parameter matching, producing an intermediate image. After the number N of interpolation frames from the intermediate image to the target image is determined, N rounds of interpolation are performed between corresponding points of the matched images, separately for the R, G and B components, generating N interpolation frames; each interpolation frame is smoothed, yielding N smoothed frames. The source image, the matched intermediate image, the smoothed frames of the first through Nth interpolation frames, and the target image are displayed in order, completing the transformation. The method reduces computational difficulty and memory requirements, keeps the destination file small enough that this class of operation can be completed on a camera phone, and makes the destination file suitable for transmission over a wireless network.

Description

A face image transformation method for a camera-enabled cell phone
Technical field
The present invention relates to an image transformation method for wireless handheld devices, and in particular to a face image transformation method for a camera-enabled cell phone.
Background technology
According to the principle of the three primary colors, every color occurring in nature can be formed by combining red, green and blue (R, G, B) components. Some colors contain more of the red component, such as dark red; others contain less, such as pale red. The amount of the red component can be divided into 256 levels, from 0 to 255: level 0 contains no red at all, and level 255 contains 100% red. Green and blue are likewise each divided into 256 levels. This division into levels is called quantization. Different combinations of red, green and blue can express 256 × 256 × 256, or about 16 million, colors. In a digital image, every pixel can be represented by its three R, G, B components (or by data that can be converted into R, G, B components by a mathematical transformation).
In color image processing, face image recognition and transformation have long been a difficult focus of international academic research. The general approach of current work is to first perform face detection and facial feature localization, and then build a face model.
Face detection is the process of deciding whether a face is present in an input image or video. The methods in the literature mainly include: methods based on heuristic rules, on eigenfaces, on cluster learning, on artificial neural networks (ANN), and on support vector machines. Face detection can be used to find the initial position of a face in an image sequence, and also to locate the face during tracking.
Facial feature localization then, after the presence of a face has been confirmed, provides the position and size of each face and the positions of the main facial organs, from which the face model is built.
Face images contain very rich pattern features. Which of these features are the most useful, and how to use them, is a key question in face detection research. Face patterns vary in complex and subtle ways, so detection generally combines several kinds of pattern features. In summary, face detection methods can be divided, according to the color attributes of the features used, into methods based on skin-color features and methods based on gray-level features. The former are suitable for building fast face detection and face tracking algorithms; the latter exploit more essential features that distinguish faces from other objects, and are the main emphasis of face detection research. According to the model adopted when combining features, the gray-level methods can be further divided into two broad classes: methods based on heuristic (knowledge) models and methods based on statistical models. Because of the complexity of the face detection problem, none of these methods can handle all situations; each generally targets only one or a few specific problems in the field. Moreover, their computational complexity is rather high and their computing requirements are heavy: they can only be realized on systems with strong computing power and ample resources, such as PCs or workstations, and cannot be implemented on handheld devices such as mobile phones. The destination files generated by traditional processing methods are also huge, reaching several, tens, or even hundreds of megabytes; they are unsuitable for transmission over wireless networks and cannot be applied in today's mobile communication environment.
In addition, camera applications in mobile phones currently concentrate on taking pictures, browsing pictures, sending and receiving pictures, MMS, and video playback; face image processing and transformation for mobile phones is rarely addressed.
Summary of the invention
The purpose of the present invention is to provide a face image transformation method for a camera-enabled cell phone. The method reduces computational difficulty and memory requirements and keeps the destination file small enough that the operation can be completed on a camera phone and the destination file can be transmitted over a wireless network.
The purpose of the present invention can be achieved by the following technical measures, comprising in order the following steps:
(1) First determine the source image and the target image of the transformation, and convert both into their respective BMP standard format files based on the digital pictures;
(2) Then perform parameter matching on the length and width of the source image (BMP) and the target image (BMP), generating a matched intermediate rectangular image;
(3) After determining the number N of interpolation frames from the intermediate image to the target image (BMP), perform N rounds of interpolation between corresponding points of the matched images, separately for the R, G and B components, generating N interpolation frames;
(4) Smooth each of the N interpolation frames so generated, generating N smoothed frames;
(5) Display in order the source image, the matched intermediate rectangular image, the smoothed frame of the first interpolation frame, the smoothed frame of the second interpolation frame, and so on up to the smoothed frame of the Nth interpolation frame, and finally the target image (BMP); the transformation process is then complete.
In the present invention, the detailed process of performing parameter matching on the length and width of the source image (BMP) and the target image (BMP) is: ① define an intermediate rectangular image whose length equals the length of the target image (BMP) and whose width equals the width of the target image (BMP); ② align the center of the boundary rectangle of the intermediate image with the center of the boundary rectangle of the source image, with the length and width of the intermediate rectangle parallel, respectively, to the length and width of the source image's (BMP) boundary rectangle; ③ with the alignment of step ②, take the intersection of the intermediate rectangle's region and the source image's (BMP) boundary rectangle region; ④ if the area of the intersection equals the area of the intermediate rectangular region, then under the alignment of step ② the value of each pixel P[I, J] in the intermediate rectangular image equals the value of the corresponding pixel in the source image (BMP); if the area of the intersection is smaller than the area of the intermediate rectangular region, then within the intersection, under the alignment of step ②, each pixel of the intermediate image takes the value of the corresponding pixel in the source image (BMP), and the pixels of the intermediate rectangle outside the intersection are filled with a default color, whose value can be set as required.
The present invention performs smoothing by small-area mean filtering with a cross-shaped template.
The present invention adopts non-uniform interpolation between frames: within the whole interpolation sequence, the beginning and ending portions use relatively dense interpolation while the middle portion uses relatively sparse interpolation, improving the visual effect.
The advantage of the invention is that it avoids the usual pipeline of face detection, facial feature localization and face modeling, performing the image transformation with a simple, practical interpolation algorithm. This reduces computational difficulty and memory requirements, so that the face transformation can be realized on portable devices with limited memory and computing power, and on camera phones in particular. At the same time, to eliminate pattern noise introduced by the image transformation and by the camera, the invention smooths the frame generated after each interpolation, which improves the displayed image. As its smoothing template, the invention uses small-area mean filtering with a cross-shaped template.
In addition, the human eye is especially sensitive to subtle changes just as the source image (intermediate rectangular image) begins to transform, and again just before it becomes the target image. Therefore, for the first frames of the transformation and the last frames before the target image (each about one third, rounded, of the total frame count), the present invention makes the per-frame change smaller than the mean (two thirds of the mean). The transformation of the image is then not abrupt, producing a better visual effect.
Description of drawings
Fig. 1 is the implementation flowchart of the present invention.
Embodiment
As shown in Fig. 1, this embodiment comprises the following steps in order:
[1] Build a celebrity (person) picture library in the mobile phone by wireless download, PC download, camera capture and storage, or similar means. The library stores face pictures of the various celebrities (persons) the user likes; the pictures may be in JPEG or BMP format;
[2] The user captures his or her own face image with the camera and stores it in JPEG or BMP format as the source image of the transformation;
[3] The user selects, according to personal preference, a celebrity image from the picture library of step [1] as the target image of the transformation;
[4] Following the steps below, the face in the source image is gradually transformed into the face in the target image through image processing and matching, generating a series of intermediate pictures (including the interpolation frames); playing this series continuously in the order generated realizes the dynamic evolution from the user's face to the celebrity's face.
Step [4] is the key of the present invention; its processing proceeds by the following flow:
(1) Format matching of the source image and the target image
If the source image or the target image is in JPEG format, it must be converted according to the JPEG format and the standard BMP file layout, so that both the source image and the target image become BMP format files; if both are already in BMP format, no format conversion is needed. After this step, two BMP format pictures are produced: the source image (BMP) and the target image (BMP).
(2) Perform parameter matching on the length and width of the source image (BMP) and the target image (BMP) to generate an intermediate image whose size matches the target image (BMP). The detailed process is: ① define an intermediate rectangular image whose length equals the length of the target image (BMP) and whose width equals the width of the target image (BMP); ② align the center of the boundary rectangle of the intermediate image with the center of the boundary rectangle of the source image (BMP), with the length and width of the intermediate rectangle parallel, respectively, to the length and width of the source image's (BMP) boundary rectangle; ③ with the alignment of step ②, take the intersection of the intermediate rectangle's region and the source image's (BMP) boundary rectangle region; ④ if the area of the intersection equals the area of the intermediate rectangular region, then under the alignment of step ② the value of each pixel P[I, J] in the intermediate rectangular image equals the value of the corresponding pixel in the source image (BMP); if the area of the intersection is smaller than the area of the intermediate rectangular region, then within the intersection, under the alignment of step ②, each pixel of the intermediate image takes the value of the corresponding pixel in the source image (BMP), and the pixels of the intermediate rectangle outside the intersection are filled with a default color whose value can be set as required, for example R = 100, G = 100, B = 100.
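The matching step above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function and variable names are ours, and an image is modeled as a flat row-major list of (R, G, B) tuples.

```python
# Sketch of the parameter-matching step: build an intermediate image with the
# target's dimensions, center-aligned over the source; pixels outside the
# overlap are filled with a default color (R = G = B = 100, as in the text).
def match_to_target(src, src_w, src_h, dst_w, dst_h, fill=(100, 100, 100)):
    """src: row-major list of (R, G, B) tuples of size src_w * src_h.
    Returns an intermediate image of size dst_w * dst_h."""
    mid = [fill] * (dst_w * dst_h)
    # Offset of the intermediate rectangle's top-left inside the source,
    # chosen so that both rectangles share the same center.
    off_x = (src_w - dst_w) // 2
    off_y = (src_h - dst_h) // 2
    for j in range(dst_h):
        for i in range(dst_w):
            sx, sy = i + off_x, j + off_y
            if 0 <= sx < src_w and 0 <= sy < src_h:  # inside the intersection
                mid[j * dst_w + i] = src[sy * src_w + sx]
    return mid
```

When the target is larger than the source, the source ends up centered in a field of the default color; when it is smaller, the intermediate image is a centered crop of the source.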
(3) Determine the number N of interpolation frames from the intermediate rectangular image to the target image (BMP); N can be entered by the user or take the system default (N = 10). A user-entered N must pass a validity check: N must be greater than 1 and less than 20; if the entered N is invalid, N defaults to 10.
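The validity check on N might look like the following sketch (the function name and the handling of non-numeric input are our assumptions; the bounds and default come from the text):

```python
def validate_frame_count(n_input, default=10):
    """Return n_input if it is a valid frame count (1 < N < 20), else the default."""
    try:
        n = int(n_input)
    except (TypeError, ValueError):
        return default  # non-numeric input falls back to the system default
    return n if 1 < n < 20 else default
```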
(4) By step (2), the intermediate rectangular image (denoted PM) and the target image (denoted PD) have the same size. In this method, points with the same coordinates in the intermediate image BMP file and the target image BMP file are corresponding points; that is, for any pixel PM[i, j] of the intermediate image there is a unique corresponding point PD[i, j] in the target image (BMP). (PM[i, j], PD[i, j]) denotes such a pair of points, and { (PM[i, j], PD[i, j]) | i = 0 … width − 1 of the target image (BMP), j = 0 … height − 1 of the target image (BMP) } denotes the set of all pixel pairs of the intermediate image and the target image (BMP).
(5) The interpolation of step (3) is performed between all pixel pairs of the set described in step (4). Since each pixel consists of the three components R, G and B, the interpolation between PM[i, j] and PD[i, j] in turn consists of three parts: interpolation between their R components, interpolation between their G components, and interpolation between their B components.
(6) For the first pair of corresponding points of the PM and PD images, (PM[0,0], PD[0,0]), perform the first interpolation on the R components (the R component of PM[0,0] and the R component of PD[0,0]). Denote the interpolation increment RD1[0,0]; the value generated by the first interpolation is then the R component of P1[0,0] (= R component of PM[0,0] + RD1[0,0]). In the same way, for the G and B components of (PM[0,0], PD[0,0]), first determine the first G-component increment GD1[0,0] and the first B-component increment BD1[0,0], then perform the first interpolation, finally generating the G component of P1[0,0] (= G component of PM[0,0] + GD1[0,0]) and the B component of P1[0,0] (= B component of PM[0,0] + BD1[0,0]). This generates the point P1[0,0] of the first interpolation between (PM[0,0], PD[0,0]); P1[0,0] is a color pixel comprising the three components R, G and B. Similarly, the value generated by the second interpolation is the R component of P2[0,0] (= R component of PM[0,0] + RD1[0,0] + RD2[0,0]), and so on; the value generated by the Nth interpolation is the R component of PN[0,0]. The other two components G and B, and all other pixel pairs, are handled by the same method.
(7) For all pixel pairs described in step (5), namely { (PM[i, j], PD[i, j]) | i = 0 … width − 1 of the target image (BMP), j = 0 … height − 1 of the target image (BMP) }, perform the first interpolation of the three RGB components by the method of step (6). This finally generates the first intermediate picture frame after the first interpolation, namely { P1[i, j] (R, G, B) | i = 0 … width − 1 of the target image (BMP), j = 0 … height − 1 of the target image (BMP) }.
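For the uniform case (N < 10, whose increments are given in step (9)), the per-point, per-component interpolation of steps (5)–(7) reduces to moving each component a further 1/N of the way from PM toward PD on every frame. A minimal sketch, with illustrative names and images modeled as flat lists of (R, G, B) tuples:

```python
# Each of the n frames places every component at PM + (PD - PM) * k / n,
# which is equivalent to adding the uniform increment (PD - PM) / n
# cumulatively, frame by frame, as the patent describes.
def interpolate_frames(pm, pd, n):
    """pm, pd: equal-length lists of (R, G, B) tuples; returns n frames."""
    frames = []
    for k in range(1, n + 1):
        frame = []
        for (r0, g0, b0), (r1, g1, b1) in zip(pm, pd):
            frame.append((
                r0 + round((r1 - r0) * k / n),
                g0 + round((g1 - g0) * k / n),
                b0 + round((b1 - b0) * k / n),
            ))
        frames.append(frame)
    return frames
```

The last frame (k = n) lands exactly on the target values, so the displayed sequence ends cleanly at PD.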
(8) By the methods described in steps (6) and (7), and the method of determining the interpolation increments described in step (9), perform the 2nd, 3rd, … up to the Nth interpolation on the images PM and PD, forming the 2nd, 3rd, … intermediate picture frames up to the Nth interpolation frame.
(9) The increment RD1[0,0] used in step (6) for the first interpolation of the R components of the first pair of corresponding points, and the increments RD2[0,0], RD3[0,0], …, RDN[0,0] used in step (8) for the 2nd, 3rd, … up to the Nth interpolation, are computed from the values of the R components of PM[0,0] and PD[0,0]. The concrete computation is: when N < 10, RDi[0,0] (i = 1…N) = (R component of PD[0,0] − R component of PM[0,0]) / N, with each RDi[0,0] rounded to the nearest integer. When N ≥ 10, the frames at the start and end of the sequence (i = 1…[N/3] and i = [2N/3]…N) each use 2/3 of the mean difference as the increment, and the remaining middle frames (i = [N/3]+1…[2N/3]−1) evenly share what is left, so that all increments still sum to the total difference; that is,
RDi[0,0] (i = 1…[N/3] and i = [2N/3]…N) = {(R component of PD[0,0] − R component of PM[0,0]) / N} × 2/3;
RDi[0,0] (i = [N/3]+1…[2N/3]−1) = {(R component of PD[0,0] − R component of PM[0,0]) − {(R component of PD[0,0] − R component of PM[0,0]) / N} × 2/3 × ([N/3] + N − [2N/3] + 1)} / ([2N/3] − [N/3] − 1);
where [v] denotes rounding v to an integer, and every RDi is finally rounded as well. The increments for the G and B components of step (6) are determined in the same way.
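The index arithmetic above is partly garbled in the translated text, but the intent is clear: smaller steps at the start and end of the sequence, larger steps in the middle, with all per-frame increments summing to the total difference. The following sketch is our reconstruction under that assumption (a first/last-thirds split, not necessarily the patent's exact boundaries), with illustrative names:

```python
# Split a total component difference into n per-frame increments that are
# dense (small) at the ends and sparse (large) in the middle, preserving
# the invariant sum(deltas) == total.
def frame_deltas(total, n):
    """Return n per-frame increments for one color component."""
    if n < 10:
        return [total / n] * n               # uniform case
    third = n // 3
    small = (total / n) * (2 / 3)            # step for the first/last thirds
    n_small = 2 * third
    n_mid = n - n_small
    mid = (total - small * n_small) / n_mid  # middle frames absorb the rest
    return [small] * third + [mid] * n_mid + [small] * third
```

In practice each increment would also be rounded to an integer, as the text specifies; the sketch keeps floats so the sum invariant is exact.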
(10) adopt the method for average value filtering to handle the problem of image smooth to the 1st interpolated frame that generates in the step 7).The template that the present invention uses is 010 141 010 , R color component array to the 1st frame, allow the upper left corner (upper left corner complete matching of the upper left corner of template and array) of template from array, with 9 values on the template respectively and 9 values on the pairing separately array multiply each other, and after 9 long-pending additions that will obtain divided by 8, the merchant is exactly the level and smooth result of central point institute's corresponding point in array of template.
(11) Within the range of the R color component array, move the template one column to the right at a time until the right edge of the template is flush with the right edge of the array; when the template reaches the rightmost position, move it down one row and repeat the left-to-right movement of this step, until the lower-right corner of the template aligns with the lower-right corner of the array. Following this order of movement and the method of step (10), the smoothed R-component values of all points except the outermost one-pixel ring can be computed in turn; the R components of the outermost ring remain unchanged by the smoothing. The G and B color component arrays are computed by the same method. This generates the smoothed frame of the first interpolation frame;
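Steps (10) and (11) amount to sliding a 3 × 3 weighted mean filter with the cross kernel (0 1 0 / 1 4 1 / 0 1 0), normalized by 8, over the interior points of one component array, while the one-pixel border keeps its original values. A sketch (function name is ours):

```python
# Cross-template mean filter: new center = (4*center + up + down + left + right) / 8.
# The kernel weights sum to 8, so a uniform region is left unchanged.
def smooth_component(a):
    """a: 2-D list (rows of numbers) of one color component; returns a smoothed copy."""
    h, w = len(a), len(a[0])
    out = [row[:] for row in a]            # border values stay as-is
    for j in range(1, h - 1):
        for i in range(1, w - 1):
            out[j][i] = (4 * a[j][i]
                         + a[j - 1][i] + a[j + 1][i]
                         + a[j][i - 1] + a[j][i + 1]) / 8
    return out
```

Applying this to each of the R, G and B component arrays of an interpolation frame produces its smoothed frame.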
(12) By the same methods as steps (10) and (11), obtain the smoothed frames of all N interpolation frames described in step (8);
(13) Display in order the source image, the matched intermediate rectangular image, the smoothed frame of the first interpolation frame, the smoothed frame of the second interpolation frame, and so on up to the smoothed frame of the Nth interpolation frame, and finally the target image (BMP); the transformation process is then complete.
The present invention adopts the method for average value filtering to handle the problem of image smooth.The template that the present invention uses is 010 141 010 , Obtain after 9 values in the template and the color component in the interpolated frame image array multiply each other 9 long-pending, divided by 8, the merchant who obtains is as the value of new central point to 9 long-pending summation backs.At borderline lastrow, next line, the first from left is capable and the rightest delegation (being a week of interpolated frame array outermost) can't carry out the point of template operation, their value is constant before and after level and smooth.For example, the color component array of interpolated frame is as follows:
Figure C20041005080200102
Through template 010 141 010 After the computing, the color component of new image is
0 0 0 0 0 0 0
0 0 1/8 3/4 1/8 0 0
0 1/8 5/4 13/4 5/4 1/8 0
0 0 0 0 0 0 0
Wherein, the position representation template of the frame of broken lines in the upper left corner situation of row operation of going forward side by side of aliging with the first time of the color component array of interpolated frame, the center position of computing is for the first time then represented in the position of the solid box in the upper left corner; The go forward side by side situation of row operation of the position representation template of the frame of broken lines of last cell and aliging for the last time of the color component array of interpolated frame, the center position of last computing is then represented in the position of the solid box of last cell.
6 and color component on every side in the middle of the color component array of interpolated frame differs greatly as can be seen, is a noise spot.Through the average value filtering of template, the amplitude of saltus step has been relaxed, thereby reaches level and smooth purpose.

Claims (3)

1. A face image transformation method for a camera-enabled cell phone, characterized by comprising in order the following steps:
(1) First determine the source image and the target image of the transformation, and convert both into their respective BMP standard format files based on the digital pictures;
(2) Then perform parameter matching on the length and width of the source image (BMP) and the target image (BMP), generating a matched intermediate rectangular image;
(3) After determining the number N of interpolation frames from the intermediate rectangular image to the target image (BMP), perform N rounds of interpolation between corresponding points of the matched images, separately for the red, green and blue (R, G, B) components, generating N interpolation frames;
(4) Smooth the N interpolation frames so generated, generating N smoothed frames;
(5) Display in order the source image, the matched intermediate rectangular image, the smoothed frame of the first interpolation frame, the smoothed frame of the second interpolation frame, and so on up to the smoothed frame of the Nth interpolation frame, and finally the target image (BMP); the transformation process is then complete;
The detailed process of performing parameter matching on the length and width of the source image (BMP) and the target image (BMP) is: ① define an intermediate rectangular image whose length equals the length of the target image (BMP) and whose width equals the width of the target image (BMP); ② align the center of the boundary rectangle of the intermediate image with the center of the boundary rectangle of the source image, with the length and width of the intermediate rectangle parallel, respectively, to the length and width of the source image's (BMP) boundary rectangle; ③ with the alignment of step ②, take the intersection of the intermediate rectangle's region and the source image's (BMP) boundary rectangle region; ④ if the area of the intersection equals the area of the intermediate rectangular region, then under the alignment of step ② the value of each pixel P[I, J] in the intermediate rectangular image equals the value of the corresponding pixel in the source image (BMP); if the area of the intersection is smaller than the area of the intermediate rectangular region, then within the intersection, under the alignment of step ②, each pixel of the intermediate image takes the value of the corresponding pixel in the source image (BMP), and the pixels of the intermediate rectangle outside the intersection are filled with a default color, whose value can be set as required.
2. The face image transformation method for a camera-enabled cell phone according to claim 1, characterized in that non-uniform interpolation is adopted between frames: within the whole interpolation sequence, the beginning and ending portions use relatively dense interpolation while the middle portion uses relatively sparse interpolation.
3. The face image transformation method for a camera-enabled cell phone according to claim 1, characterized in that smoothing is performed by small-area mean filtering with a cross-shaped template.
CNB200410050802XA 2004-07-23 2004-07-23 Human face image changing method with camera function cell phone Active CN100433061C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200410050802XA CN100433061C (en) 2004-07-23 2004-07-23 Human face image changing method with camera function cell phone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200410050802XA CN100433061C (en) 2004-07-23 2004-07-23 Human face image changing method with camera function cell phone

Publications (2)

Publication Number Publication Date
CN1588443A CN1588443A (en) 2005-03-02
CN100433061C true CN100433061C (en) 2008-11-12

Family

ID=34602276

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200410050802XA Active CN100433061C (en) 2004-07-23 2004-07-23 Human face image changing method with camera function cell phone

Country Status (1)

Country Link
CN (1) CN100433061C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100550051C (en) * 2007-04-29 2009-10-14 威盛电子股份有限公司 image deformation method
CN104915663B (en) * 2015-07-03 2018-07-06 广东欧珀移动通信有限公司 A kind of method, system and mobile terminal for promoting recognition of face and showing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285794B1 (en) * 1998-04-17 2001-09-04 Adobe Systems Incorporated Compression and editing of movies by multi-image morphing
WO2003007198A2 (en) * 2001-07-13 2003-01-23 Igo Technologies Inc. Deformable transformations for interventional guidance

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285794B1 (en) * 1998-04-17 2001-09-04 Adobe Systems Incorporated Compression and editing of movies by multi-image morphing
WO2003007198A2 (en) * 2001-07-13 2003-01-23 Igo Technologies Inc. Deformable transformations for interventional guidance

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Seungyong Lee et al., "Image Metamorphosis with Scattered Feature Constraints," IEEE Transactions on Visualization and Computer Graphics, Vol. 2, No. 4, 1996. *
Philip J. Benson, "Morph transformation of the facial image," Image and Vision Computing, Vol. 12, No. 10, 1994. *

Also Published As

Publication number Publication date
CN1588443A (en) 2005-03-02

Similar Documents

Publication Publication Date Title
CN112232425B (en) Image processing method, device, storage medium and electronic equipment
CN101536078B (en) Improving image masks
CN107256555B (en) Image processing method, device and storage medium
WO2020093837A1 (en) Method for detecting key points in human skeleton, apparatus, electronic device, and storage medium
US11158057B2 (en) Device, method, and graphical user interface for processing document
CN109410131B (en) Face beautifying method and system based on condition generation antagonistic neural network
CN103973977A (en) Blurring processing method and device for preview interface and electronic equipment
WO2014137806A2 (en) Visual language for human computer interfaces
CN104866755B (en) Setting method and device for background picture of application program unlocking interface and electronic equipment
US11908241B2 (en) Method for correction of the eyes image using machine learning and method for machine learning
CN104978750B (en) Method and apparatus for handling video file
CN108347578A (en) The processing method and processing device of video image in video calling
CN108701355A (en) GPU optimizes and the skin possibility predication based on single Gauss online
CN103299342A (en) Method and apparatus for providing a mechanism for gesture recognition
CN105574834B (en) Image processing method and device
CN110674826A (en) Character recognition method based on quantum entanglement
CN111106836A (en) Image reconstruction method and device
CN107507158A (en) A kind of image processing method and device
CN104281865A (en) Method and equipment for generating two-dimensional codes
CN114758027A (en) Image processing method, image processing device, electronic equipment and storage medium
CN105635574B (en) The treating method and apparatus of image
CN105957114A (en) Method and device for detecting polygon in image
US11250542B2 (en) Mosaic generation apparatus and method
CN100433061C (en) Human face image changing method with camera function cell phone
CN108537165A (en) Method and apparatus for determining information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Assignee: Focaltech Systems Ltd.

Assignor: ANYKA (GUANGZHOU) SOFTWARE TECHNOLOGIY Co.,Ltd.

Contract fulfillment period: 2008.12.19 to 2015.12.18

Contract record no.: 2009440000218

Denomination of invention: Human face image changing method with camera function cell phone

Granted publication date: 20081112

License type: Exclusive license

Record date: 20090331

LIC Patent licence contract for exploitation submitted for record

Free format text: EXCLUSIVE LICENSE; TIME LIMIT OF IMPLEMENTING CONTRACT: 2008.12.19 TO 2015.12.18; CHANGE OF CONTRACT

Name of requester: DUNTAI TECHNOLOGY ( SHENZHEN ) CO., LTD.

Effective date: 20090331

C56 Change in the name or address of the patentee

Owner name: ANKAI (GUANGZHOU) MICROELECTRONICS TECHNOLOGY CO.,

Free format text: FORMER NAME: ANKAI (GUANGZHOU) SOFTWARE TECHNOLOGY CO., LTD.

CP02 Change in the address of a patent holder

Address after: 6th floor, Building 7, No. 1033 Gaopu Road, Gaotang New District, Tianhe Software Park, Guangzhou High-tech Industrial Development Zone, Guangdong Province (zip code: 510663)

Patentee after: ANYKA (GUANGZHOU) SOFTWARE TECHNOLOGIY Co.,Ltd.

Address before: 16th floor, Huajing Garden, Tianhe Software Park, No. 89 Zhongshan Avenue North, Guangzhou, Guangdong Province (zip code: 510663)

Patentee before: ANYKA (GUANGZHOU) SOFTWARE TECHNOLOGIY Co.,Ltd.

CP03 Change of name, title or address

Address after: Rooms 301-303 and 401-402, Area C1, No. 182 Science Avenue, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong Province (zip code: 510663)

Patentee after: ANYKA (GUANGZHOU) MICROELECTRONICS TECHNOLOGY Co.,Ltd.

Address before: 6th floor, Building 7, No. 1033 Gaopu Road, Gaotang New District, Tianhe Software Park, Guangzhou High-tech Industrial Development Zone, Guangdong Province (zip code: 510663)

Patentee before: ANYKA (GUANGZHOU) SOFTWARE TECHNOLOGIY Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: Rooms 301-303 and 401-402, Area C1, No. 182 Science Avenue, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong Province, 510663

Patentee after: Guangzhou Ankai Microelectronics Co.,Ltd.

Address before: Rooms 301-303 and 401-402, Area C1, No. 182 Science Avenue, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong Province, 510663

Patentee before: ANYKA (GUANGZHOU) MICROELECTRONICS TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder

Address after: No. 107 Bowen Road, Huangpu District, Guangzhou, Guangdong Province, 510555

Patentee after: Guangzhou Ankai Microelectronics Co.,Ltd.

Address before: Rooms 301-303 and 401-402, Area C1, No. 182 Science Avenue, Science City, Guangzhou High-tech Industrial Development Zone, Guangdong Province, 510663

Patentee before: Guangzhou Ankai Microelectronics Co.,Ltd.