CN104715447B - Image composition method and device - Google Patents
- Publication number
- CN104715447B (application CN201510092998.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- face object
- characteristic point
- feature points
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
This application discloses an image composition method and apparatus. In one embodiment, the method includes: acquiring the facial feature points corresponding to multiple face objects in a face image; acquiring materials for the multiple face objects; selecting a benchmark face object from the multiple face objects; determining the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of the material for the benchmark face object; and adjusting the material of each face object based on the mapped positions to synthesize a target image. In this embodiment, the facial feature points of the face objects are mapped onto the target image according to the mapping relationship between the benchmark face object and its corresponding material, and the material of each face object is adjusted according to the mapped feature points to synthesize the target image. This improves how well features such as the position and size of each material in the target image match the corresponding face objects in the face image, and enhances the fidelity of the synthesized target image.
Description
Technical field
This application relates to the field of computer technology, specifically to the field of image processing, and more particularly to a face image processing method and apparatus.
Background technique
At present, some applications provide a function that generates a personalized image (such as a cartoon image) from a user's face image in order to enhance the user experience. In known techniques, this function is implemented by searching for materials similar to the face objects in the face image (such as the face shape, eyes, or nose) and synthesizing the personalized image directly from those materials. However, because the shapes of the materials are irregular, the size of a material may not match the size of the corresponding face object when the personalized image is synthesized directly from the materials, causing the synthesized personalized image to be distorted.
Summary of the invention
This application provides an image composition method and apparatus to solve the technical problem mentioned in the Background section above.
In a first aspect, this application provides an image composition method. The method includes: acquiring the facial feature points corresponding to multiple face objects in a face image; acquiring materials for the multiple face objects; selecting a benchmark face object from the multiple face objects; determining the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of the material for the benchmark face object; and adjusting the material of each face object based on the mapped positions to synthesize a target image.
In a second aspect, this application provides an image composition apparatus. The apparatus includes: a feature point acquisition unit configured to acquire the facial feature points corresponding to multiple face objects in a face image; a material acquisition unit configured to acquire materials for the multiple face objects; a selection unit configured to select a benchmark face object from the multiple face objects; a mapping unit configured to determine the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of the material for the benchmark face object; and an adjustment unit configured to adjust the material of each face object based on the mapped positions to synthesize an image.
In the image composition method and apparatus provided by this application, the facial feature points of the face objects are mapped onto the target image based on the mapping relationship between the benchmark face object and its corresponding material, and the material of each face object is adjusted based on the mapped feature points to synthesize the target image. This improves how well features such as the position and size of each material in the target image match the corresponding face objects in the face image, and enhances the fidelity of the synthesized target image.
Detailed description of the invention
Other features, objects, and advantages of this application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of one embodiment of the image composition method of this application;
Fig. 2 is a structural schematic diagram of one embodiment of the image composition apparatus of this application.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of this application and the features in the embodiments may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Referring to Fig. 1, it illustrates a process 100 of one embodiment of the image composition method of this application. The method includes the following steps:
Step 101: acquire the facial feature points corresponding to multiple face objects in a face image.
In this embodiment, the face image may be a face image in a photo uploaded by the user, or a face image obtained by photographing the user with a camera.
After the face image is obtained, the face objects in the face image can be further obtained. A face object may be an organ on the face or another object that characterizes a facial feature.
In some optional implementations of this embodiment, the face objects include at least one of the following: a face-shape object, eyebrow objects, eye objects, a nose object, a mouth object, ear objects, and a hair object. It can be understood that all of the above face objects may be used to synthesize the target image, or only some of them may be chosen as the face objects used to synthesize the target image.
A face object can be described by facial feature points, also called face key points. Each face object may have its corresponding feature points. For example, a face-shape object can be described by choosing a certain number of points along the contour of the face shape as its facial feature points. In some optional implementations of this embodiment, face recognition technology may be applied to the face image to obtain facial feature points characterizing the features of the face objects in the face image. In some implementations, 72 feature points may be used to characterize a face image. Those skilled in the art will appreciate that more or fewer feature points may be used to characterize a face image; this application imposes no limit in this regard. After the facial feature points characterizing the face image are obtained, the feature points of each face object can be determined accordingly. Each face object in the face image may correspond to a certain number of facial feature points; for example, a face-shape object may correspond to 13 facial feature points.
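As a concrete illustration of grouping detected feature points by face object, the sketch below assumes a 72-point layout; only the 13-point count for the face-shape object comes from the text above, and the remaining index ranges are hypothetical, not the actual layout used by the patent.

```python
# Hypothetical grouping of 72 facial feature points by face object.
# Only the 13 face-shape points are stated above; the other index
# ranges are illustrative assumptions.
KEYPOINT_GROUPS = {
    "face_shape": range(0, 13),
    "left_eyebrow": range(13, 21),
    "right_eyebrow": range(21, 29),
    "left_eye": range(29, 38),
    "right_eye": range(38, 47),
    "nose": range(47, 60),
    "mouth": range(60, 72),
}

def points_for(face_object, all_points):
    """Return the (x, y) feature points belonging to one face object.

    all_points: sequence of 72 (x, y) coordinates produced by a
    face-landmark detector, in the fixed order assumed above.
    """
    return [all_points[i] for i in KEYPOINT_GROUPS[face_object]]
```

With such a grouping, the per-object feature points needed by the later steps can be looked up directly from the detector's output.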
Step 102: acquire materials for the multiple face objects.
In this embodiment, the type of a material can correspond to the type of a face object. For example, in some implementations where a cartoon figure is to be generated, the materials for the face objects in the face image can be the various materials of the cartoon figure. The materials may include materials for different types of face objects, such as a face-shape material for the face-shape object, an eyebrow material for an eyebrow object, an eye material for an eye object, and so on.
Similar to a face object, a material can be described by its feature points. For example, a face-shape material can be described by choosing a certain number of points along the contour of the face-shape material as its feature points.
In some embodiments, the material for a face object can be determined automatically based on the matching relationship between the facial feature points of the face object and the feature points of the materials. In other words, for a given face object (such as the face-shape object), the material closest or most similar to that face object can be selected automatically from the numerous candidate materials for it (such as the face-shape materials). The closeness or similarity between a face object and a material can be computed with a distance metric. In one implementation, each facial feature point of the face object is represented by a two-dimensional coordinate characterizing its position in the face image, and correspondingly each feature point of a material is represented by a two-dimensional coordinate characterizing its position in the material. The closeness or similarity between the facial feature points of the face object and the feature points of a material can then be determined as follows: apply the Euclidean distance formula to the coordinates of the feature points of the face object and the corresponding coordinates of the feature points of the material, and choose the material yielding the smallest result as the material matching the face object.
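A minimal sketch of this automatic matching, under the assumption that every candidate material carries the same number of feature points as the face object (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def match_material(face_points, candidate_materials):
    """Choose the candidate material whose feature points have the
    smallest summed Euclidean distance to the face object's points.

    face_points: (N, 2) array of feature-point coordinates.
    candidate_materials: dict mapping material id -> (N, 2) array.
    """
    face_points = np.asarray(face_points, dtype=float)
    best_id, best_dist = None, float("inf")
    for mat_id, mat_points in candidate_materials.items():
        # Euclidean distance between corresponding feature points.
        diffs = face_points - np.asarray(mat_points, dtype=float)
        dist = np.linalg.norm(diffs, axis=1).sum()
        if dist < best_dist:
            best_id, best_dist = mat_id, dist
    return best_id
```

The summed per-point distance is one natural choice of metric; a mean or maximum distance would serve equally well for ranking candidates.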
In other embodiments, the material for each face object can be determined by the user. For example, the user can manually select the material for each face object according to personal preference.
In yet other embodiments, the material for each face object can first be determined automatically, after which adjustments by the user to the determined materials, such as replacements, are accepted.
Step 103: select a benchmark face object from the face objects.
In this embodiment, one object can be chosen from the face objects as the benchmark face object. In some implementations, the choice of benchmark face object can be determined by the positions of other objects in the target image to be synthesized.
In some optional implementations of this embodiment, the benchmark face object is the face-shape object. When synthesizing a target image based on the face objects, an object corresponding to a certain position on the human body (such as the neck) may be fixed first; in this case, the face-shape object can be chosen as the benchmark face object.
Step 104: determine the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of the material for the benchmark face object.
In this embodiment, each feature point of the benchmark face object can be represented by a two-dimensional coordinate characterizing its position in the face image. Taking a rectangular 100*100 face image as an example, a coordinate system can be established for it in advance, choosing the center of the rectangle, i.e. the intersection of its diagonals, as the origin. Based on this coordinate system, the coordinates of the facial feature points in the 100*100 rectangular face image can be determined. Correspondingly, the position of each feature point of the material corresponding to the benchmark face object in an interface of preset size can also be represented by a two-dimensional coordinate, which characterizes the position of the benchmark face object's material in the interface where the target image is to be synthesized. Taking a rectangular 480*800 electronic canvas as the interface where the target image is located, a coordinate system can likewise be established for the 480*800 canvas in advance, for example choosing its center, i.e. the intersection of its diagonals, as the origin. Based on this coordinate system, the coordinates in the 480*800 canvas of the feature points of the material corresponding to the benchmark face object can be determined. In this embodiment, once the coordinates of the feature points of the benchmark face object and of its corresponding material have been determined, the mapping relationship between them can be computed from these coordinates. Based on this mapping relationship, the coordinates in the target image of the facial feature points of the multiple face objects in the face image can then be computed, thereby mapping the acquired facial feature points of the face objects from the face image (such as the 100*100 rectangular face image) onto the interface where the target image is located (such as the 480*800 rectangular electronic canvas).
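The center-origin coordinate convention described above can be sketched with a small helper (the 100*100 image and 480*800 canvas sizes are taken from the example; the assumption that raw pixel coordinates start at the top-left corner is ours):

```python
import numpy as np

def to_centered(points, width, height):
    """Convert pixel coordinates (assumed origin at the top-left
    corner) to coordinates whose origin is the rectangle's center,
    i.e. the intersection of its diagonals, as in the example above."""
    pts = np.asarray(points, dtype=float)
    return pts - np.array([width / 2.0, height / 2.0])

# Feature points in a 100*100 face image and material points on a
# 480*800 canvas, each expressed in its own centered system.
face_pts = to_centered([[50, 50], [0, 0]], 100, 100)
canvas_pts = to_centered([[240, 400], [0, 0]], 480, 800)
```

Expressing both point sets in center-origin coordinates makes the subsequent transform between them independent of the two rectangles' differing pixel origins.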
Taking the face-shape object as the benchmark face object as an example, the process of determining the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of its material is further illustrated below.
In some implementations, when the benchmark face object is the face-shape object, the mapped positions of the facial feature points can be determined based on the mapping relationship between the feature points of the face-shape object and the feature points of the face-shape material as follows. A feature point matrix of the face-shape object is established from the coordinates of its feature points in the face image, and a face-shape material feature point matrix is established from the coordinates of the face-shape material in the interface of preset size. From these two matrices, a transition matrix between them is computed. This transition matrix characterizes the transformation relationship between the two matrices (such as translation, scaling, and rotation); in other words, the computed transition matrix characterizes the mapping relationship between the feature points of the face-shape object and the feature points of the face-shape material. After this mapping relationship is determined, the transition matrix can be applied to the facial feature points of the other face objects (such as the eye objects, nose object, and mouth object), so that their facial feature points are mapped onto the interface of preset size (the 480*800 rectangular electronic canvas). The mapping process is illustrated with a 100*100 face image and a 480*800 rectangular electronic canvas as the interface of preset size. A facial feature point matrix can be established from the coordinates of the facial feature points of the other face objects in the 100*100 face image. A matrix transformation (such as a translation, scaling, or rotation transformation) is then applied to this matrix using the computed transition matrix characterizing the mapping between the feature points of the face-shape object and those of the face-shape material, yielding a matrix characterizing the positions of the facial feature points in the interface of preset size. From this matrix, the coordinates in the 480*800 rectangular electronic canvas of the facial feature points of the face objects (such as the eye objects, nose object, and mouth object) can be determined. In this way, through the above transformation, the facial feature points of the face objects in the 100*100 face image are mapped to the corresponding positions in the 480*800 rectangular electronic canvas. After the mapping is completed, the contour enclosed by the mapped facial feature points in the 480*800 canvas can be used to indicate the size of the corresponding object in the target image; for example, the contour enclosed by the mapped eye feature points indicates the size of the eye objects in the target image.
Step 105: adjust the material of each face object based on the mapped positions to synthesize the target image.
In this embodiment, the target image may be an image synthesized from the materials of the face objects (such as the various materials of a cartoon figure) in the interface of preset size (such as the 480*800 rectangular electronic canvas). The materials of the face objects may be placed on the interface of preset size in advance. Because the sizes and positions of the pre-placed materials (such as the eyes, nose, and mouth) are inconsistent with the sizes and positions of the corresponding face objects indicated by the contours enclosed by the mapped facial feature points, the sizes and positions of the materials need further adjustment. In this embodiment, based on the mapped positions of the facial feature points of each face object in the interface of preset size, the positions of the feature points of that face object's material in the interface can be adjusted separately so that they agree with the size and position of the face object indicated by the contour enclosed by the mapped facial feature points; the target image is then synthesized from the materials adjusted in size and position.
In some optional implementations of this embodiment, the materials can be adjusted as follows: determine adjustment parameters based on the correspondence between the mapped positions of the facial feature points and the positions of the feature points of the materials of the face objects, where the adjustment parameters may include at least one of the following: a translation parameter and a zoom parameter; then adjust the material of each face object based on the adjustment parameters to synthesize the target image. The material of each face object can be adjusted separately: the corresponding adjustment parameters are computed for the material of each face object, and the material is adjusted using them. The adjustment parameters can be determined as follows: obtain the coordinates in the interface of preset size (such as the 480*800 rectangular electronic canvas) of the facial feature points of a face object (such as an eye object), and the coordinates in that interface of the feature points of the corresponding material (such as an eye material). From these coordinates, a translation parameter indicating the translation relationship (such as the horizontal or vertical positional relationship) between the coordinates of the facial feature points and the coordinates of the feature points of the corresponding material can be determined, as well as a zoom parameter indicating the scale relationship between them. After the adjustment parameters corresponding to the material of each face object are determined in this way, the materials can be adjusted based on them; for example, based on the translation parameter, a material is moved horizontally or vertically. After the materials of the face objects are adjusted using the adjustment parameters, the target image can be synthesized from the adjusted materials.
In the image composition method provided by the above embodiment of this application, the facial feature points of the face objects are mapped onto the target image based on the mapping relationship between the benchmark face object and its corresponding material, and the material of each face object is adjusted based on the mapped facial feature points to synthesize the target image. This improves how well features such as the position and size of each material in the target image match the corresponding face objects in the face image, and enhances the fidelity of the synthesized target image.
Referring to Fig. 2, it illustrates a structural schematic diagram of one embodiment of the image composition apparatus of this application.
As shown in Fig. 2, the apparatus 200 includes: a feature point acquisition unit 201 configured to acquire the facial feature points corresponding to multiple face objects in a face image; a material acquisition unit 202 configured to acquire materials for the multiple face objects; a selection unit 203 configured to select a benchmark face object from the multiple face objects; a mapping unit 204 configured to determine the mapped positions of the facial feature points based on the mapping relationship between the feature points of the benchmark face object and the feature points of the material for the benchmark face object; and an adjustment unit 205 configured to adjust the material of each face object based on the mapped positions to synthesize the image.
In some optional implementations of this embodiment, the mapping unit 204 is further configured to: determine the mapping relationship based on the position information of the feature points of the benchmark face object and the position information of the feature points of the material for the benchmark face object; and determine the mapped positions of the facial feature points using the mapping relationship.
In some optional implementations of this embodiment, the adjustment unit 205 is further configured to: determine adjustment parameters based on the correspondence between the mapped positions of the facial feature points and the positions of the feature points of the materials of the face objects, the adjustment parameters including at least one of the following: a translation parameter and a zoom parameter; and adjust the material of each face object based on the adjustment parameters to synthesize the target image.
In some optional implementations of this embodiment, the feature point acquisition unit 201 is further configured to obtain the facial feature points corresponding to the face objects in the face image using face recognition technology.
In some optional implementations of this embodiment, the benchmark face object is the face-shape object.
In some optional implementations of this embodiment, the materials include cartoon figure materials.
In some optional implementations of this embodiment, the face objects include at least one of the following: a face-shape object, eyebrow objects, eye objects, a nose object, a mouth object, ear objects, and a hair object.
The units involved in the embodiments of this application may be implemented in software or in hardware. The described units may also be arranged in a processor; for example, a processor may be described as including an acquisition unit, a determination unit, a selection unit, a mapping unit, and an adjustment unit. The names of these units do not, under certain conditions, constitute a limitation of the units themselves; for example, the acquisition unit may also be described as "a unit for acquiring the facial feature points corresponding to multiple face objects in a face image".
As another aspect, this application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above embodiments, or may exist separately without being assembled into a terminal. The computer-readable storage medium stores one or more programs that are executed by one or more processors to perform the image composition method described in this application.
The above description is only a preferred embodiment of this application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, such as a technical solution formed by mutually substituting the above features with (but not limited to) features with similar functions disclosed in this application.
Claims (14)
1. An image composition method, characterized in that the method comprises:
acquiring facial feature points corresponding to multiple face objects in a face image;
acquiring materials for the multiple face objects;
selecting a benchmark face object from the multiple face objects;
determining mapped positions of the facial feature points based on a mapping relationship between feature points of the benchmark face object and feature points of the material for the benchmark face object; and
adjusting the material of each face object based on the mapped positions to synthesize a target image.
2. The method according to claim 1, characterized in that determining the mapped positions of the facial feature points comprises:
determining the mapping relationship based on position information of the feature points of the benchmark face object and position information of the feature points of the material for the benchmark face object; and
determining the mapped positions of the facial feature points using the mapping relationship.
3. The method according to any one of claims 1-2, characterized in that adjusting the material of each face object based on the mapped positions to synthesize the target image comprises:
determining adjustment parameters based on a correspondence between the mapped positions of the facial feature points and positions of the feature points of the materials of the face objects, the adjustment parameters comprising at least one of the following: a translation parameter and a zoom parameter; and
adjusting the material of each face object based on the adjustment parameters to synthesize the target image.
4. The method according to claim 1, characterized in that the face objects comprise at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object.
5. The method according to claim 1, characterized in that the benchmark face object is a face-shape object.
6. The method according to claim 1, characterized in that the materials comprise cartoon figure materials.
7. The method according to claim 1, characterized in that the facial feature points corresponding to the face objects in the face image are obtained using face recognition technology.
8. An image composition apparatus, characterized in that the apparatus comprises:
a feature point acquisition unit configured to acquire facial feature points corresponding to multiple face objects in a face image;
a material acquisition unit configured to acquire materials for the multiple face objects;
a selection unit configured to select a benchmark face object from the multiple face objects;
a mapping unit configured to determine mapped positions of the facial feature points based on a mapping relationship between feature points of the benchmark face object and feature points of the material for the benchmark face object; and
an adjustment unit configured to adjust the material of each face object based on the mapped positions to synthesize the image.
9. The apparatus according to claim 8, characterized in that the mapping unit is further configured to: determine the mapping relationship based on position information of the feature points of the benchmark face object and position information of the feature points of the material for the benchmark face object; and determine the mapped positions of the facial feature points using the mapping relationship.
10. The apparatus according to claim 8, characterized in that the adjustment unit is further configured to: determine adjustment parameters based on a correspondence between the mapped positions of the facial feature points and positions of the feature points of the materials of the face objects, the adjustment parameters comprising at least one of the following: a translation parameter and a zoom parameter; and adjust the material of each face object based on the adjustment parameters to synthesize a target image.
11. The apparatus according to claim 8, characterized in that the benchmark face object is a face-shape object.
12. The apparatus according to claim 8, characterized in that the materials comprise cartoon figure materials.
13. The apparatus according to claim 8, characterized in that the face objects comprise at least one of the following: a face-shape object, an eyebrow object, an eye object, a nose object, a mouth object, an ear object, and a hair object.
14. The apparatus according to claim 8, characterized in that the feature point acquisition unit is further configured to obtain the facial feature points corresponding to the face objects in the face image using face recognition technology.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510092998.7A CN104715447B (en) | 2015-03-02 | 2015-03-02 | Image composition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104715447A CN104715447A (en) | 2015-06-17 |
CN104715447B true CN104715447B (en) | 2019-08-30 |
Family
ID=53414742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510092998.7A Active CN104715447B (en) | 2015-03-02 | 2015-03-02 | Image composition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104715447B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105405094A (en) * | 2015-11-26 | 2016-03-16 | 掌赢信息科技(上海)有限公司 | Method for processing face in instant video and electronic device |
CN105915793A (en) * | 2016-05-10 | 2016-08-31 | 北京奇虎科技有限公司 | Intelligent watch shooting processing method and device |
CN107610239B (en) * | 2017-09-14 | 2020-11-03 | 广州帕克西软件开发有限公司 | Virtual try-on method and device for facial makeup |
CN109509141A (en) * | 2017-09-15 | 2019-03-22 | 阿里巴巴集团控股有限公司 | Image processing method, head portrait setting method and device |
CN107679497B (en) * | 2017-10-11 | 2023-06-27 | 山东新睿信息科技有限公司 | Video face mapping special effect processing method and generating system |
CN108021872A (en) * | 2017-11-22 | 2018-05-11 | 广州久邦世纪科技有限公司 | A kind of camera recognition methods for realizing real-time matching template and its system |
CN107886559A (en) * | 2017-11-29 | 2018-04-06 | 北京百度网讯科技有限公司 | Method and apparatus for generating picture |
CN110033420B (en) * | 2018-01-12 | 2023-11-07 | 京东科技控股股份有限公司 | Image fusion method and device |
CN108537881B (en) * | 2018-04-18 | 2020-04-03 | 腾讯科技(深圳)有限公司 | Face model processing method and device and storage medium thereof |
CN108717719A (en) * | 2018-05-23 | 2018-10-30 | 腾讯科技(深圳)有限公司 | Generation method, device and the computer storage media of cartoon human face image |
CN109671016B (en) * | 2018-12-25 | 2019-12-17 | 网易(杭州)网络有限公司 | face model generation method and device, storage medium and terminal |
CN113327191A (en) * | 2020-02-29 | 2021-08-31 | 华为技术有限公司 | Face image synthesis method and device |
CN111738930A (en) * | 2020-05-12 | 2020-10-02 | 北京三快在线科技有限公司 | Face image synthesis method and device, electronic equipment and storage medium |
CN111627106B (en) * | 2020-05-29 | 2023-04-28 | 北京字节跳动网络技术有限公司 | Face model reconstruction method, device, medium and equipment |
CN112581518A (en) * | 2020-12-25 | 2021-03-30 | 百果园技术(新加坡)有限公司 | Eyeball registration method, device, server and medium based on three-dimensional cartoon model |
CN117788720B (en) * | 2024-02-26 | 2024-05-17 | 山东齐鲁壹点传媒有限公司 | Method for generating user face model, storage medium and terminal |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101159064A (en) * | 2007-11-29 | 2008-04-09 | 腾讯科技(深圳)有限公司 | Image generation system and method for generating image |
CN101655985A (en) * | 2009-09-09 | 2010-02-24 | 西安交通大学 | Unified parametrization method of human face cartoon samples of diverse styles |
CN101847268A (en) * | 2010-04-29 | 2010-09-29 | 北京中星微电子有限公司 | Cartoon human face image generation method and device based on human face images |
CN102542586A (en) * | 2011-12-26 | 2012-07-04 | 暨南大学 | Personalized cartoon portrait generating system based on mobile terminal and method |
CN104157001A (en) * | 2014-08-08 | 2014-11-19 | 中科创达软件股份有限公司 | Method and device for drawing head caricature |
- 2015-03-02: Application CN201510092998.7A filed in China (CN); patent CN104715447B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN104715447A (en) | 2015-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104715447B (en) | Image composition method and device | |
CN107818305B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
US11989859B2 (en) | Image generation device, image generation method, and storage medium storing program | |
US20240129636A1 (en) | Apparatus and methods for image encoding using spatially weighted encoding quality parameters | |
CN105531998B (en) | For object detection and the method for segmentation, device and computer program product | |
US11113859B1 (en) | System and method for rendering three dimensional face model based on audio stream and image data | |
JP2017059235A (en) | Apparatus and method for adjusting brightness of image | |
CN104811684B (en) | A kind of three-dimensional U.S. face method and device of image | |
CN107564049B (en) | Faceform's method for reconstructing, device and storage medium, computer equipment | |
TW202109359A (en) | Face image processing method, image equipment and storage medium | |
US10987198B2 (en) | Image simulation method for orthodontics and image simulation device thereof | |
WO2022001806A1 (en) | Image transformation method and apparatus | |
JP2011039869A (en) | Face image processing apparatus and computer program | |
CN108682050B (en) | Three-dimensional model-based beautifying method and device | |
CN104408702B (en) | A kind of image processing method and device | |
US11496661B2 (en) | Image processing apparatus and image processing method | |
CN106651991B (en) | Intelligent mapping realization method and system thereof | |
CN107734207B (en) | Video object transformation processing method and device and computing equipment | |
CN107767326B (en) | Method and device for processing object transformation in image and computing equipment | |
CN113850709A (en) | Image transformation method and device | |
KR102171332B1 (en) | Apparatus, method and computer readable medium having computer program recorded for facial image correction | |
WO2023024096A1 (en) | Image processing method, image processing device, photographing equipment, and readable storage medium | |
JP2015184701A (en) | Image processing apparatus, image processing method, and program | |
CN111083345B (en) | Apparatus and method for generating a unique illumination and non-volatile computer readable medium thereof | |
CN114782240A (en) | Picture processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
- Application publication date: 2015-06-17
- Assignee: Beijing Xiaoxiong Bowang Technology Co., Ltd.
- Assignor: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
- Contract record no.: X2019990000095
- Denomination of invention: An image composing method and device
- Granted publication date: 2019-08-30
- License type: Exclusive License
- Record date: 2019-09-23