CN104715447A - Image synthesis method and device - Google Patents
- Publication number
- CN104715447A CN104715447A CN201510092998.7A CN201510092998A CN104715447A CN 104715447 A CN104715447 A CN 104715447A CN 201510092998 A CN201510092998 A CN 201510092998A CN 104715447 A CN104715447 A CN 104715447A
- Authority
- CN
- China
- Prior art keywords
- face object
- face
- feature points
- facial feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an image synthesis method and device. In one embodiment, the method comprises the following steps: acquiring facial feature points corresponding to a plurality of face objects in a facial image; acquiring materials for the face objects; selecting a reference face object from the face objects; determining the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and, based on the mapping positions, adjusting the materials of the face objects to synthesize a target image. This embodiment maps the facial feature points of the face objects into the target image based on the mapping relation between the reference face object and its corresponding material, and adjusts the material of each face object according to the mapped feature points before synthesis. As a result, features of each material in the target image, such as position and size, match the corresponding face object in the facial image more closely, and the fidelity of the synthesized target image is improved.
Description
Technical field
The present application relates to the field of computer technology, specifically to the field of image processing, and in particular to an image synthesis method and device.
Background
At present, some applications provide a function of synthesizing a personalized image (such as a cartoon image) from a user's facial image in order to enhance the user experience. In the known technology, this function is implemented by finding materials similar to the face objects in the facial image (such as the face shape, eyes, and nose) and directly using those materials to synthesize the personalized image. However, because materials have irregular shapes, the size of a material may not match the size of its corresponding face object when the material is used directly, which distorts the synthesized personalized image.
Summary of the invention
The present application provides an image synthesis method and device to solve the technical problem mentioned in the Background section above.
In a first aspect, the present application provides an image synthesis method, comprising: acquiring facial feature points corresponding to a plurality of face objects in a facial image; acquiring materials for the plurality of face objects; selecting a reference face object from the plurality of face objects; determining the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and, based on the mapping positions, adjusting the material of each face object to synthesize a target image.
In a second aspect, the present application provides an image synthesis device, comprising: a feature point acquiring unit configured to acquire facial feature points corresponding to a plurality of face objects in a facial image; a material acquiring unit configured to acquire materials for the plurality of face objects; a selection unit configured to select a reference face object from the plurality of face objects; a mapping unit configured to determine the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and an adjustment unit configured to adjust the material of each face object based on the mapping positions to synthesize a target image.
The image synthesis method and device provided by the present application map the facial feature points of the face objects into the target image based on the mapping relation between the reference face object and its corresponding material, and adjust the material of each face object according to the mapped feature points before synthesis. Features of each material in the target image, such as position and size, therefore match the corresponding face object in the facial image more closely, improving the fidelity of the synthesized target image.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is a flowchart of one embodiment of the image synthesis method according to the present application;
Fig. 2 is a structural diagram of one embodiment of the image synthesis device according to the present application.
Detailed description of the embodiments
The present application is described in further detail below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein only explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, where no conflict arises, the embodiments of the present application and the features of those embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Please refer to Fig. 1, which shows the flow 100 of one embodiment of the image synthesis method according to the present application. The method comprises the following steps:
Step 101: acquire facial feature points corresponding to a plurality of face objects in a facial image.
In this embodiment, the facial image may be a facial image in a photo uploaded by the user, or a facial image of the user captured with a camera.
After the facial image is acquired, the face objects in it can be obtained. A face object may be an organ of the face or any object that characterizes a facial feature.
In some optional implementations of this embodiment, the face objects include at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object. It will be understood that all of the above face objects may be used to synthesize the target image, or only some of them may be chosen as the face objects used for synthesis.
A face object can be described by facial feature points, also called facial key points. Each face object can have its own corresponding feature points. For example, a face-shape object can be described by choosing a number of points along the contour of the face shape as its facial feature points. In some optional implementations of this embodiment, face recognition technology can be used to identify the facial image and obtain the facial feature points that characterize the face objects in it. In some implementations, 72 feature points are used to characterize a facial image. Those skilled in the art will understand that more or fewer feature points may be used; the present application is not limited in this regard. After the facial feature points characterizing the facial image are obtained, the feature points of each face object can be determined accordingly. Each face object in the facial image may correspond to a number of facial feature points; for example, a face-shape object may correspond to 13 facial feature points.
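As an illustration, a 72-point landmark set can be partitioned among the face objects as sketched below. The slice boundaries are hypothetical — the description only states that the face-shape object corresponds to 13 points — and `FACE_OBJECT_SLICES` and `split_landmarks` are invented names, not part of the application.

```python
import numpy as np

# Hypothetical partition of a 72-point landmark set into face objects.
# Only the 13-point face-shape count comes from the description; the
# remaining slice boundaries are illustrative assumptions.
FACE_OBJECT_SLICES = {
    "face_shape": slice(0, 13),
    "left_eye":   slice(13, 21),
    "right_eye":  slice(21, 29),
    "nose":       slice(29, 42),
    "mouth":      slice(42, 60),
    "eyebrows":   slice(60, 72),
}

def split_landmarks(landmarks):
    """Split a (72, 2) array of (x, y) feature points into per-object arrays."""
    landmarks = np.asarray(landmarks, dtype=float)
    assert landmarks.shape == (72, 2)
    return {name: landmarks[s] for name, s in FACE_OBJECT_SLICES.items()}

# Example: 72 dummy points on a 100*100 facial image.
pts = np.stack([np.linspace(0, 99, 72), np.linspace(0, 99, 72)], axis=1)
objects = split_landmarks(pts)
print(objects["face_shape"].shape)  # (13, 2)
```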
Step 102: acquire materials for the plurality of face objects.
In this embodiment, the type of a material corresponds to the type of a face object. For example, in some implementations it is desirable to generate a cartoon figure, so the materials for the face objects in the facial image can be the various materials of a cartoon figure. The materials can include materials for different types of face objects, such as a face-shape material for the face-shape object, an eyebrow material for the eyebrow object, an eye material for the eye object, and so on.
Similarly to face objects, a material can be described by its feature points. For example, a face-shape material can be described by choosing a number of points along its contour as the material's feature points.
In some embodiments, the material for a face object can be determined automatically based on the matching relation between the facial feature points of the face object and the feature points of the materials. In other words, for a given face object (such as the face-shape object), the material closest or most similar to it can be selected automatically from the numerous candidate materials for that object (such as the face-shape materials). A distance metric can be used to measure the closeness or similarity between a face object and a material. In one implementation, each facial feature point of a face object is represented by a two-dimensional coordinate characterizing its position in the facial image, and each feature point of a material is likewise represented by a two-dimensional coordinate characterizing its position in the material. The closeness or similarity between the feature points of the face object and those of a material can then be determined by applying the Euclidean distance formula to the two sets of coordinates. Finally, the material with the smallest computed distance is chosen as the material matching the face object.
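The automatic matching just described can be sketched as follows. This is a minimal illustration, not the application's implementation: `match_material` and the candidate point sets are invented names, the criterion is the summed Euclidean distance between corresponding feature points, and both point sets are assumed to be normalized to a common scale.

```python
import numpy as np

def match_material(face_points, candidate_materials):
    """Pick the material whose feature points are closest to the face object's.

    face_points: (N, 2) array of the face object's feature-point coordinates.
    candidate_materials: dict of name -> (N, 2) array of material feature points.
    """
    best_name, best_dist = None, float("inf")
    for name, mat_points in candidate_materials.items():
        # Summed Euclidean distance between corresponding feature points.
        d = np.linalg.norm(np.asarray(face_points) - np.asarray(mat_points), axis=1).sum()
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Illustrative coordinates for a 3-point face-shape object and two candidates.
face = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
materials = {
    "round":  np.array([[0.1, 0.1], [0.9, 0.0], [1.0, 1.1]]),
    "square": np.array([[0.5, 0.5], [2.0, 0.0], [3.0, 1.0]]),
}
print(match_material(face, materials))  # round
```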
In other embodiments, the material for each face object can be chosen by the user; for example, the user can manually select a material for each face object according to personal preference.
In still other embodiments, the material for each face object can first be determined automatically, after which the user may adjust the result, for example by replacing a material.
Step 103: select a reference face object from the face objects.
In this embodiment, one object can be chosen from the face objects as the reference face object. In some implementations, the choice of the reference face object can be determined by the positions of other objects in the target image to be synthesized.
In some optional implementations of this embodiment, the reference face object is the face-shape object. When synthesizing a target image from the face objects, an object corresponding to a fixed position on the body (such as the neck) may be anchored first; in this case, the face-shape object can be chosen as the reference face object.
Step 104: determine the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object.
In this embodiment, each feature point of the reference face object can be represented by a two-dimensional coordinate characterizing its position in the facial image. Taking a rectangular facial image of 100*100 as an example, a coordinate system can be established for it in advance, for instance by choosing the intersection of its diagonals (its center point) as the origin; based on this coordinate system, the coordinates of the facial feature points in the 100*100 facial image can be determined. Correspondingly, each feature point of the material corresponding to the reference face object can be represented by a two-dimensional coordinate characterizing its position in the preset-size interface where the target image is to be synthesized. Taking a rectangular electronic canvas of 480*800 as that interface, a coordinate system can likewise be established in advance, for example by choosing the intersection of its diagonals as the origin, and the coordinates of the material's feature points in the 480*800 canvas can be determined from it. After the coordinates of the feature points of the reference face object and of its corresponding material are determined, the mapping relation between the two sets of feature points can be computed from the two sets of coordinates. Based on this mapping relation, the coordinates in the target image of the facial feature points of the plurality of face objects can then be calculated, thereby mapping the facial feature points of the face objects from the facial image (e.g., the 100*100 rectangular facial image) to the interface where the target image is located (e.g., the 480*800 rectangular electronic canvas).
Taking the case where the reference face object is the face-shape object as an example, the process of determining the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of its material is described further below.
In some implementations, when the reference face object is the face-shape object, the mapping positions of the facial feature points can be determined from the mapping relation between the feature points of the face-shape object and the feature points of the face-shape material as follows. A feature point matrix of the face-shape object is built from the coordinates of its feature points in the facial image, and a feature point matrix of the face-shape material is built from the coordinates of its feature points in the preset-size interface. From these two matrices, a transition matrix between them is computed; this transition matrix characterizes the transformation relation between the two matrices (such as translation, scaling, and rotation), i.e., it characterizes the mapping relation between the feature points of the face-shape object and those of the face-shape material. Once this mapping relation is determined, it can be applied to the facial feature points of the other face objects (such as the eye, nose, and mouth objects) to map them onto the preset-size interface (the 480*800 rectangular electronic canvas). Taking a 100*100 facial image and a 480*800 rectangular electronic canvas as an example, the mapping proceeds as follows: a facial feature point matrix is built from the coordinates of the other face objects' feature points in the 100*100 facial image; a matrix transformation (such as a translation, scaling, or rotation) defined by the computed transition matrix is applied to it; and the resulting matrix characterizes the positions of the facial feature points in the preset-size interface, from which the coordinates of the feature points of each face object (such as the eye, nose, and mouth objects) in the 480*800 canvas are determined. In this way, the facial feature points of the face objects in the 100*100 facial image are all mapped to the corresponding positions in the 480*800 canvas. After the mapping is completed, the contour enclosed by the mapped facial feature points in the 480*800 canvas can indicate the size of the corresponding object in the target image; for example, the contour enclosed by the mapped eye feature points indicates the size of the eye object in the target image.
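A simplified sketch of the transition-matrix step: here the transformation is assumed to be restricted to uniform scaling plus translation (the description also allows rotation, omitted for brevity), and `fit_scale_translation`, `apply_map`, and all coordinates are illustrative, not taken from the application.

```python
import numpy as np

def fit_scale_translation(src, dst):
    """Least-squares fit of dst ≈ s * src + t.

    A stand-in for the transition matrix of the description, covering
    scaling and translation only; rotation is omitted for brevity.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - src_c, dst - dst_c
    s = (src0 * dst0).sum() / (src0 ** 2).sum()  # uniform scale
    t = dst_c - s * src_c                        # translation
    return s, t

def apply_map(points, s, t):
    """Map points from the facial image into the canvas coordinates."""
    return s * np.asarray(points, dtype=float) + t

# Face-shape feature points on a 100*100 facial image vs. the face-shape
# material's feature points on the 480*800 canvas (coordinates invented).
face_shape_img = np.array([[10.0, 10.0], [90.0, 10.0], [50.0, 70.0]])
face_shape_mat = np.array([[100.0, 100.0], [420.0, 100.0], [260.0, 340.0]])
s, t = fit_scale_translation(face_shape_img, face_shape_mat)

# The same mapping is then applied to every other face object's points,
# e.g. an eye landmark at (30, 30) on the facial image:
eye_on_canvas = apply_map(np.array([[30.0, 30.0]]), s, t)
print(s)              # 4.0
print(eye_on_canvas)  # [[180. 180.]]
```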
Step 105: based on the mapping positions, adjust the material of each face object to synthesize the target image.
In this embodiment, the target image can be the image synthesized from the materials of the face objects (such as the various materials of a cartoon figure) in the preset-size interface (e.g., the 480*800 rectangular electronic canvas). The materials of the face objects can be placed in the preset-size interface in advance. Because the sizes and positions of the pre-placed materials (such as the eyes, nose, and mouth) are inconsistent with the sizes and positions of the corresponding face objects enclosed by the mapped facial feature points, the sizes and positions of the materials need to be adjusted further. In this embodiment, the positions of the feature points of each material in the preset-size interface can be adjusted based on the mapping positions of the facial feature points of the corresponding face object, so that the material is consistent in size and position with the contour enclosed by the mapped facial feature points; the target image can then be synthesized from the adjusted materials.
In some optional implementations of this embodiment, the materials can be adjusted as follows: based on the correspondence between the mapping positions of the facial feature points and the positions of the feature points of the materials, adjustment parameters are determined, the adjustment parameters including at least one of a translation parameter and a zoom parameter; based on the adjustment parameters, the material of each face object is adjusted to synthesize the target image. The materials can be adjusted individually: for each face object's material, its corresponding adjustment parameters are calculated and then used to adjust that material. The adjustment parameters can be determined as follows. The coordinates of the facial feature points of a face object (such as an eye object) in the preset-size interface (e.g., the 480*800 rectangular electronic canvas) and the coordinates of the feature points of the corresponding material (e.g., the eye material) in the same interface are obtained. From these coordinates, a translation parameter representing the translation relation between the two sets of coordinates (such as their horizontal or vertical offset) and a zoom parameter representing the scaling relation between them can be determined. After the adjustment parameters corresponding to each material are determined in this way, each material can be adjusted accordingly, for example moved horizontally or vertically based on the translation parameter. The adjusted materials can then be used to synthesize the target image.
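The derivation of translation and zoom parameters can be sketched as below. The bounding-box formulation is an illustrative assumption (the description does not prescribe a formula), and `adjustment_params`, `adjust`, and all coordinates are invented for this example.

```python
import numpy as np

def adjustment_params(mapped_face_pts, material_pts):
    """Derive translation and zoom parameters for one material.

    Compares the bounding box of the mapped facial feature points with
    that of the material's feature points on the canvas; this bounding-box
    criterion is an illustrative choice, not prescribed by the text.
    """
    f = np.asarray(mapped_face_pts, dtype=float)
    m = np.asarray(material_pts, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    m_min, m_max = m.min(axis=0), m.max(axis=0)
    zoom = (f_max - f_min) / (m_max - m_min)  # per-axis scale factor
    translation = f_min - zoom * m_min        # align the top-left corners
    return translation, zoom

def adjust(material_pts, translation, zoom):
    """Apply the adjustment parameters to a material's feature points."""
    return zoom * np.asarray(material_pts, dtype=float) + translation

# Hypothetical eye material pre-placed on the 480*800 canvas versus the
# region the mapped eye landmarks say the eye should occupy.
eye_mapped = np.array([[180.0, 280.0], [260.0, 280.0], [260.0, 320.0]])
eye_material = np.array([[200.0, 300.0], [240.0, 300.0], [240.0, 320.0]])
t, z = adjustment_params(eye_mapped, eye_material)
print(z)  # [2. 2.] -- the material must be enlarged 2x on both axes
print(adjust(eye_material, t, z))  # coincides with eye_mapped
```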
The image synthesis method provided by the above embodiment of the present application maps the facial feature points of the face objects into the target image based on the mapping relation between the reference face object and its corresponding material, and adjusts the material of each face object according to the mapped feature points before synthesis. Features of each material in the target image, such as position and size, therefore match the corresponding face object in the facial image more closely, improving the fidelity of the synthesized target image.
Please refer to Fig. 2, which shows the structure of an embodiment of the image synthesis device according to the present application.
As shown in Fig. 2, the device 200 comprises: a feature point acquiring unit 201 configured to acquire facial feature points corresponding to a plurality of face objects in a facial image; a material acquiring unit 202 configured to acquire materials for the plurality of face objects; a selection unit 203 configured to select a reference face object from the plurality of face objects; a mapping unit 204 configured to determine the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and an adjustment unit 205 configured to adjust the material of each face object based on the mapping positions to synthesize the target image.
In some optional implementations of this embodiment, the mapping unit 204 is further configured to determine the mapping relation based on the position information of the feature points of the reference face object and the position information of the feature points of the material for the reference face object, and to determine the mapping positions of the facial feature points using that mapping relation.
In some optional implementations of this embodiment, the adjustment unit 205 is further configured to determine adjustment parameters based on the correspondence between the mapping positions of the facial feature points and the positions of the feature points of the materials, the adjustment parameters including at least one of a translation parameter and a zoom parameter, and to adjust the material of each face object based on the adjustment parameters to synthesize the target image.
In some optional implementations of this embodiment, the feature point acquiring unit 201 is further configured to use face recognition technology to acquire the facial feature points corresponding to the face objects in the facial image.
In some optional implementations of this embodiment, the reference face object is the face-shape object.
In some optional implementations of this embodiment, the materials comprise cartoon figure materials.
In some optional implementations of this embodiment, the face objects comprise at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising an acquiring unit, a determining unit, a selection unit, a mapping unit, and an adjustment unit. The names of these units do not in all cases limit the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring facial feature points corresponding to a plurality of face objects in a facial image".
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the device of the above embodiments, or a computer-readable storage medium that exists independently and is not fitted into a terminal. The computer-readable storage medium stores one or more programs that are executed by one or more processors to perform the image synthesis method described in the present application.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the technical features above; without departing from the inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with technical features of similar function disclosed in (but not limited to) the present application.
Claims (14)
1. An image synthesis method, characterized in that the method comprises:
acquiring facial feature points corresponding to a plurality of face objects in a facial image;
acquiring materials for the plurality of face objects;
selecting a reference face object from the plurality of face objects;
determining the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and
based on the mapping positions, adjusting the material of each face object to synthesize a target image.
2. The method according to claim 1, characterized in that determining the mapping positions of the facial feature points comprises:
determining the mapping relation based on the position information of the feature points of the reference face object and the position information of the feature points of the material for the reference face object; and
determining the mapping positions of the facial feature points using the mapping relation.
3. The method according to any one of claims 1-2, characterized in that adjusting the material of each face object based on the mapping positions to synthesize the target image comprises:
determining adjustment parameters based on the correspondence between the mapping positions of the facial feature points and the positions of the feature points of the materials of the face objects, the adjustment parameters comprising at least one of the following: a translation parameter and a zoom parameter; and
based on the adjustment parameters, adjusting the material of each face object to synthesize the target image.
4. The method according to claim 1, characterized in that the face objects comprise at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object.
5. The method according to claim 1, characterized in that the reference face object is a face-shape object.
6. The method according to claim 1, characterized in that the materials comprise cartoon figure materials.
7. The method according to claim 1, characterized in that face recognition technology is used to acquire the facial feature points corresponding to the face objects in the facial image.
8. An image synthesis device, characterized in that the device comprises:
a feature point acquiring unit configured to acquire facial feature points corresponding to a plurality of face objects in a facial image;
a material acquiring unit configured to acquire materials for the plurality of face objects;
a selection unit configured to select a reference face object from the plurality of face objects;
a mapping unit configured to determine the mapping positions of the facial feature points based on the mapping relation between the feature points of the reference face object and the feature points of the material for the reference face object; and
an adjustment unit configured to adjust the material of each face object based on the mapping positions to synthesize the target image.
9. The device according to claim 8, characterized in that the mapping unit is further configured to determine the mapping relation based on the position information of the feature points of the reference face object and the position information of the feature points of the material for the reference face object, and to determine the mapping positions of the facial feature points using the mapping relation.
10. The device according to claim 8, characterized in that the adjustment unit is further configured to determine adjustment parameters based on the correspondence between the mapping positions of the facial feature points and the positions of the feature points of the materials of the face objects, the adjustment parameters comprising at least one of the following: a translation parameter and a zoom parameter, and to adjust the material of each face object based on the adjustment parameters to synthesize the target image.
11. The device according to claim 8, characterized in that the reference face object is a face-shape object.
12. The device according to claim 8, characterized in that the materials comprise cartoon figure materials.
13. The device according to claim 8, characterized in that the face objects comprise at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object.
14. The device according to claim 8, characterized in that the feature point acquiring unit is further configured to use face recognition technology to acquire the facial feature points corresponding to the face objects in the facial image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510092998.7A CN104715447B (en) | 2015-03-02 | 2015-03-02 | Image composition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510092998.7A CN104715447B (en) | 2015-03-02 | 2015-03-02 | Image composition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104715447A true CN104715447A (en) | 2015-06-17 |
CN104715447B CN104715447B (en) | 2019-08-30 |
Family
ID=53414742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510092998.7A Active CN104715447B (en) | 2015-03-02 | 2015-03-02 | Image composition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104715447B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101159064A (en) * | 2007-11-29 | 2008-04-09 | 腾讯科技(深圳)有限公司 | Image generation system and method for generating image |
CN101655985A (en) * | 2009-09-09 | 2010-02-24 | 西安交通大学 | Unified parametrization method of human face cartoon samples of diverse styles |
CN101847268A (en) * | 2010-04-29 | 2010-09-29 | 北京中星微电子有限公司 | Cartoon human face image generation method and device based on human face images |
CN102542586A (en) * | 2011-12-26 | 2012-07-04 | 暨南大学 | Personalized cartoon portrait generating system based on mobile terminal and method |
CN104157001A (en) * | 2014-08-08 | 2014-11-19 | 中科创达软件股份有限公司 | Method and device for drawing head caricature |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105405094A (en) * | 2015-11-26 | 2016-03-16 | 掌赢信息科技(上海)有限公司 | Method for processing face in instant video and electronic device |
CN105915793A (en) * | 2016-05-10 | 2016-08-31 | 北京奇虎科技有限公司 | Intelligent watch shooting processing method and device |
WO2017193796A1 (en) * | 2016-05-10 | 2017-11-16 | 北京奇虎科技有限公司 | Photographing processing method and apparatus for smartwatch |
CN107610239A (en) * | 2017-09-14 | 2018-01-19 | 广州帕克西软件开发有限公司 | The virtual try-in method and device of a kind of types of facial makeup in Beijing operas |
CN107610239B (en) * | 2017-09-14 | 2020-11-03 | 广州帕克西软件开发有限公司 | Virtual try-on method and device for facial makeup |
CN109509141A (en) * | 2017-09-15 | 2019-03-22 | 阿里巴巴集团控股有限公司 | Image processing method, head portrait setting method and device |
CN107679497A (en) * | 2017-10-11 | 2018-02-09 | 齐鲁工业大学 | Video face textures effect processing method and generation system |
CN107679497B (en) * | 2017-10-11 | 2023-06-27 | 山东新睿信息科技有限公司 | Video face mapping special effect processing method and generating system |
CN108021872A (en) * | 2017-11-22 | 2018-05-11 | 广州久邦世纪科技有限公司 | A kind of camera recognition methods for realizing real-time matching template and its system |
CN107886559A (en) * | 2017-11-29 | 2018-04-06 | 北京百度网讯科技有限公司 | Method and apparatus for generating picture |
CN110033420A (en) * | 2018-01-12 | 2019-07-19 | 北京京东金融科技控股有限公司 | A kind of method and apparatus of image co-registration |
CN110033420B (en) * | 2018-01-12 | 2023-11-07 | 京东科技控股股份有限公司 | Image fusion method and device |
CN108537881B (en) * | 2018-04-18 | 2020-04-03 | 腾讯科技(深圳)有限公司 | Face model processing method and device and storage medium thereof |
CN108537881A (en) * | 2018-04-18 | 2018-09-14 | 腾讯科技(深圳)有限公司 | A kind of faceform's processing method and its equipment, storage medium |
CN108717719A (en) * | 2018-05-23 | 2018-10-30 | 腾讯科技(深圳)有限公司 | Generation method, device and the computer storage media of cartoon human face image |
US11328455B2 (en) | 2018-12-25 | 2022-05-10 | Netease (Hangzhou) Network Co., Ltd. | Method and apparatus for generating face model, storage medium, and terminal |
CN109671016A (en) * | 2018-12-25 | 2019-04-23 | 网易(杭州)网络有限公司 | Generation method, device, storage medium and the terminal of faceform |
WO2020133863A1 (en) * | 2018-12-25 | 2020-07-02 | 网易(杭州)网络有限公司 | Facial model generation method and apparatus, storage medium, and terminal |
CN113327191A (en) * | 2020-02-29 | 2021-08-31 | 华为技术有限公司 | Face image synthesis method and device |
WO2021169556A1 (en) * | 2020-02-29 | 2021-09-02 | 华为技术有限公司 | Method and apparatus for compositing face image |
CN111738930A (en) * | 2020-05-12 | 2020-10-02 | 北京三快在线科技有限公司 | Face image synthesis method and device, electronic equipment and storage medium |
WO2021238809A1 (en) * | 2020-05-29 | 2021-12-02 | 北京字节跳动网络技术有限公司 | Facial model reconstruction method and apparatus, and medium and device |
WO2022135518A1 (en) * | 2020-12-25 | 2022-06-30 | 百果园技术(新加坡)有限公司 | Eyeball registration method and apparatus based on three-dimensional cartoon model, and server and medium |
Also Published As
Publication number | Publication date |
---|---|
CN104715447B (en) | 2019-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104715447A (en) | Image synthesis method and device | |
JP6864449B2 (en) | Methods and devices for adjusting the brightness of the image | |
JP4512584B2 (en) | Panorama video providing method and apparatus with improved image matching speed and blending method | |
EP2160714B1 (en) | Augmenting images for panoramic display | |
US10580205B2 (en) | 3D model generating system, 3D model generating method, and program | |
US10726580B2 (en) | Method and device for calibration | |
US11790495B2 (en) | Method for optimal body or face protection with adaptive dewarping based on context segmentation layers | |
WO2019035155A1 (en) | Image processing system, image processing method, and program | |
CN110072046B (en) | Image synthesis method and device | |
CN107452049B (en) | Three-dimensional head modeling method and device | |
JP7441917B2 (en) | Projection distortion correction for faces | |
CN108682050B (en) | Three-dimensional model-based beautifying method and device | |
US9959672B2 (en) | Color-based dynamic sub-division to generate 3D mesh | |
CN109242760B (en) | Face image processing method and device and electronic equipment | |
US10664947B2 (en) | Image processing apparatus and image processing method to represent part of spherical image in planar image using equidistant cylindrical projection | |
TWI820246B (en) | Apparatus with disparity estimation, method and computer program product of estimating disparity from a wide angle image | |
CN112581389A (en) | Virtual viewpoint depth map processing method, equipment, device and storage medium | |
Guo et al. | A real-time interactive system of surface reconstruction and dynamic projection mapping with RGB-depth sensor and projector | |
US11682234B2 (en) | Texture map generation using multi-viewpoint color images | |
CN107066095B (en) | Information processing method and electronic equipment | |
Islam et al. | Stereoscopic image warping for enhancing composition aesthetics | |
JP4775903B2 (en) | Free viewpoint image generation method, apparatus and program using multi-viewpoint images | |
JP5966657B2 (en) | Image generating apparatus, image generating method, and program | |
CN112560867A (en) | Method, device, equipment and medium for correcting text image | |
US11893681B2 (en) | Method for processing two-dimensional image and device for executing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
- Application publication date: 20150617
- Assignee: Beijing Xiaoxiong Bowang Technology Co., Ltd.
- Assignor: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co., Ltd.
- Contract record no.: X2019990000095
- Denomination of invention: An image composing method and device
- Granted publication date: 20190830
- License type: Exclusive License
- Record date: 20190923