CN108334821A - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN108334821A
CN108334821A
Authority
CN
China
Prior art keywords
image
expression
attribute
target
expression attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810047955.0A
Other languages
Chinese (zh)
Other versions
CN108334821B (en)
Inventor
刘伟
王东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201810047955.0A
Publication of CN108334821A
Application granted
Publication of CN108334821B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, including: acquiring a first expression attribute of a source object in a captured first image; acquiring a second expression attribute matching the first expression attribute; and, based on the second expression attribute, updating a target object in a second image. The invention also discloses an electronic device.

Description

Image processing method and electronic device
Technical field
The present invention relates to image processing technologies, and more particularly to an image processing method and an electronic device.
Background technology
Currently, electronic devices usually present pictures in a static manner; that is, a picture browsed by a user on an electronic device cannot change based on the environment of the electronic device or on the user of the electronic device. For example, when a user browses a picture on an electronic device, the picture does not adapt to the user. Therefore, there is currently no solution for intelligently adapting a picture on an electronic device to the environment of the electronic device.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide an image processing method and an electronic device, which can at least solve the above problems in the prior art.
An embodiment of the present invention provides an image processing method, including: acquiring a first expression attribute of a source object in a captured first image; acquiring a second expression attribute matching the first expression attribute; and, based on the second expression attribute, updating a target object in a second image.
In the above solution, acquiring the first expression attribute of the source object in the captured first image includes:
identifying feature points of the source object in the first image; and
acquiring the first expression attribute of the source object based on the feature points of the source object.
In the above solution, acquiring the first expression attribute of the source object based on the feature points of the source object includes:
training a machine learning model based on training samples constructed from feature point samples and the target attributes labeled for the training samples, so that the machine learning model has the capability of predicting the corresponding target attribute from a training sample; and
inputting the feature points of the source object into the pre-trained machine learning model, and acquiring the first expression attribute of the source object by using the machine learning model.
In the above solution, acquiring the second expression attribute matching the first expression attribute includes:
searching, based on the first expression attribute, candidate expression images for a set of target expression images matching the first expression attribute;
selecting one target expression image from the set of target expression images; and
determining the expression attribute corresponding to the selected target expression image as the second expression attribute.
In the above solution, acquiring the second expression attribute matching the first expression attribute includes:
searching, based on the first expression attribute, candidate expression images for a set of target expression images matching the first expression attribute;
determining the number of target objects in the second image;
selecting, from the set of target expression images, as many target expression images as there are target objects; and
determining the expression attributes corresponding to the selected target expression images as second expression attributes.
In the above solution, updating the target object in the second image based on the second expression attribute includes:
identifying at least one object in the second image;
selecting one object from the at least one object in the second image as the target object in the second image; and
updating the selected object based on the second expression attribute.
In the above solution, updating the target object in the second image based on the second expression attribute includes:
identifying the objects in the second image;
taking all of the identified objects as target objects in the second image; and
updating all of the identified objects based on the second expression attribute.
In the above solution, updating the target object in the second image based on the second expression attribute includes:
fusing the target expression image corresponding to the second expression attribute with the target object in the second image, so as to update the target object in the second image.
In the above solution, fusing the target expression image corresponding to the second expression attribute with the target object in the second image includes:
identifying feature points of the target expression image;
determining feature points of the same type in the target object in the second image and the target expression image; and
updating, based on the positions of the feature points of the target expression image, the positions of the feature points of the same type in the second image, and updating the positions of the feature points whose distances from the same-type feature points in the second image meet a first distance threshold.
An embodiment of the present invention also provides an electronic device, the electronic device including: a memory for storing an executable program; and
a processor which, by executing the executable program stored in the memory, implements:
acquiring a first expression attribute of a source object in a captured first image;
acquiring a second expression attribute matching the first expression attribute; and
updating a target object in a second image based on the second expression attribute.
In the embodiments of the present invention, the expression attribute of the target object in the second image is updated according to the first expression attribute of the source object in the captured first image, so that the second image can adapt to the source object in the captured first image, thereby realizing intelligent adaptation of the second image.
Description of the drawings
Fig. 1 is an optional hardware architecture diagram of an electronic device provided in an embodiment of the present invention;
Fig. 2 is an optional hardware architecture diagram of a terminal provided in an embodiment of the present invention;
Fig. 3 is an optional flow diagram of the image processing method provided in an embodiment of the present invention;
Fig. 4 is an optional flow diagram of a terminal acquiring the first expression attribute of the source object in the captured first image according to an embodiment of the present invention;
Fig. 5 is an optional flow diagram of a terminal fusing the target expression image corresponding to the second expression attribute with the target object in the second image according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the face triangulation provided in an embodiment of the present invention;
Fig. 7 is a schematic diagram of an updated second image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of adding a second expression attribute to the second image according to an embodiment of the present invention;
Fig. 9 is another optional flow diagram of the image processing method provided in an embodiment of the present invention;
Fig. 10 is another optional flow diagram of the image processing method provided in an embodiment of the present invention;
Fig. 11 is another optional flow diagram of the image processing method provided in an embodiment of the present invention.
Detailed description of the embodiments
The present invention will be further described in detail below with reference to the drawings and specific embodiments. It should be understood that the embodiments described herein are only used to explain the present invention and are not intended to limit it. In addition, the embodiments provided below are some, rather than all, of the embodiments for implementing the present invention; in the absence of conflict, the technical solutions recorded in the embodiments of the present invention may be combined in any manner.
Before the embodiments of the present invention are further elaborated, the nouns and terms involved in the embodiments of the present invention are explained; the nouns and terms involved in the embodiments of the present invention are subject to the following explanations.
1) Source object: a portrait of the user captured by the electronic device.
2) Target object: the object to be processed, i.e., the object to be updated (fused). Taking the object being a face as an example (of course, an object may be any element that can be imaged in a picture, such as an item, a human body, or a specific part of a human body), the acquired second expression attribute is used as material to be fused with the face of the target object, so that the face in the target image also has the second expression attribute in the material. It should be noted that the faces mentioned in the embodiments of the present invention include the faces of real user objects and the faces of cartoon objects.
3) Feature point: a point that can reflect a local feature of an object in an image (such as a color feature, a shape feature, or a texture feature), generally a set of multiple pixels. Taking a face image as an example, a feature point may be an eye feature point, a mouth feature point, a nose feature point, or the like.
4) Fusion: merging the features of the imaged target object with the features of the second expression attribute in the material, so that the features of the target object are fused together with the features of the second expression attribute.
Next, an exemplary hardware structure of an electronic device implementing the image processing method of the embodiments of the present invention is described with reference to Fig. 1. The electronic device can be implemented in various forms, such as various types of computer devices including a desktop computer, a laptop computer, or a smartphone. The hardware structure of the electronic device of the embodiments of the present invention is described in detail below. It will be understood that Fig. 1 only shows an exemplary structure, rather than the entire structure, of the electronic device; the partial structure or the entire structure shown in Fig. 1 can be implemented as needed.
Referring to Fig. 1, Fig. 1 is an optional hardware architecture diagram of the electronic device provided in an embodiment of the present invention; in practical applications it can be applied to the various terminals running the aforementioned application programs. The electronic device 100 shown in Fig. 1 includes: at least one processor 101, a memory 102, a user interface 103, and at least one network interface 104. The components in the electronic device 100 are coupled by a bus system 105. It can be understood that the bus system 105 is used to realize connection and communication between these components. In addition to a data bus, the bus system 105 also includes a power bus, a control bus, and a status signal bus. For the sake of clarity, however, the various buses are all designated as the bus system 105 in Fig. 1.
The user interface 103 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch pad, or a touch screen.
It can be understood that the memory 102 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
The memory 102 in the embodiments of the present invention is used to store various types of data to support the operation of the electronic device 100. Examples of such data include: any computer program for operating on the electronic device 100, such as an executable program 1021; a program implementing the image processing method of the embodiments of the present invention may be contained in the executable program 1021.
The image processing method disclosed in the embodiments of the present invention may be applied in, or implemented by, the processor 101. The processor 101 may be an integrated circuit chip with signal processing capabilities. During implementation, each step of the image processing method may be completed by an integrated logic circuit of hardware in the processor 101 or by instructions in the form of software. The processor 101 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 101 may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. A software module may be located in a storage medium, the storage medium being located in the memory 102; the processor 101 reads the information in the memory 102 and, in combination with its hardware, completes the steps of the image processing method provided in the embodiments of the present invention.
In an embodiment of the present invention, the electronic device is implemented as a terminal as shown in Fig. 2. Referring to Fig. 2, Fig. 2 is an optional hardware architecture diagram of the terminal provided in an embodiment of the present invention. The terminal includes:
a memory 102, configured to store an executable program; and
a processor 101, configured to implement the above image processing method provided in the embodiments of the present invention when executing the executable program stored in the memory.
The memory 102 also stores an operating system 1022 of the terminal.
The network interface 104 may include one or more communication modules, for example a mobile communication module 1041 and a wireless internet module 1042.
An A/V (audio/video) input unit 120 is used to receive audio or video signals, and may include a camera 121 and a microphone 122.
A sensing unit 140 includes a sensor 141 for collecting sensing data, for example a light sensor, a motion sensor, a pressure sensor, an iris sensor, or the like.
A power supply unit 190 (such as a battery) is preferably logically connected with the processor 101 through a power management system, so as to realize functions such as charging management, discharging management, and power consumption management through the power management system.
An output unit 150 includes:
a display unit 151, which displays information input by the user or information provided to the user, and may include a display panel;
an audio output module 152, which can convert audio data received or stored in the memory into an audio signal and output it as sound when the terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like; the audio output module can also provide audio output related to a specific function performed by the terminal (for example, a call signal reception sound or a message reception sound), and may include a speaker, a buzzer, and the like; and
an alarm unit 153, which can realize alarms for specific events of the terminal, such as fault alarms.
So far, the electronic device involved in the embodiments of the present invention has been described in terms of its functions. Based on the above optional hardware architecture diagrams of the electronic device, the application scenarios for realizing the image processing method of the embodiments of the present invention are described below.
Embodiment one
Fig. 3 shows an optional flow diagram of the image processing method provided in an embodiment of the present invention. The image processing method of this embodiment of the present invention is applied to a terminal and involves steps S101 to S103, which are described separately below.
Step S101: the terminal acquires the first expression attribute of the source object in the captured first image.
Here, the terminal first determines the source object in the first image. The first image may be a dynamic video captured by the terminal, or a still image captured by the terminal. Correspondingly, when the first image is a dynamic video captured by the terminal, the source object is the person in the dynamic video (i.e., the terminal user); when the first image is a still image captured by the terminal, the source object is the person in the still image (i.e., the terminal user).
After determining the source object, the terminal acquires the first expression attribute of the source object; the first expression attribute is used to characterize the expression of the user, such as happy, sad, or naughty.
Referring to Fig. 4, which is an optional flow diagram of the terminal acquiring the first expression attribute of the source object in the captured first image according to an embodiment of the present invention, the method includes the following steps.
Step S101a: the terminal identifies feature points of the source object in the first image.
In one embodiment, the terminal identifies the imaging region of the source object from the first image; matches features extracted from the imaging region of the source object against candidate object feature templates; and identifies the feature points of a successfully matched object feature template as the feature points of the source object. It can be understood that the terminal side is provided with an object feature template library storing multiple object feature templates, with multiple feature points calibrated in each object feature template; when the features extracted from the imaging region of the source object match the features of an object feature template (i.e., the similarity is greater than a preset threshold), the feature points of that object feature template are regarded as the feature points of the source object.
In an optional embodiment, the features extracted by the terminal during identification of the feature points of the source object are generally divided into visual features, pixel statistics features, face image transform coefficient features, face image algebraic features, and the like. Methods of extracting face image features can be summarized into two major classes: knowledge-based characterization methods, and characterization methods based on algebraic features or statistical learning. Knowledge-based characterization methods obtain features that contribute to face classification mainly from the shape descriptions of the facial organs and the distances between them; the feature components generally include the Euclidean distances, curvatures, and angles between feature points. A face is composed of local parts such as the eyes, nose, mouth, and chin; geometric descriptions of these local parts and of the structural relations between them can be used as important features for recognizing a face, and these features are called geometric features.
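As a purely illustrative sketch (not part of the patent text), geometric features of the kind described above — Euclidean distances and angles between feature points — could be computed from detected feature points as follows; the landmark names and the particular measurements chosen are assumptions of the example.

```python
import math

# A "feature point" here is an (x, y) pixel coordinate; the landmark
# layout (eyes, nose tip, mouth corners) is assumed for illustration.
def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometric_features(landmarks):
    """Build a small geometric feature vector (normalized distances
    and an angle) from a dict of named facial feature points."""
    left_eye, right_eye = landmarks["left_eye"], landmarks["right_eye"]
    nose = landmarks["nose_tip"]
    mouth_l, mouth_r = landmarks["mouth_left"], landmarks["mouth_right"]

    interocular = euclidean(left_eye, right_eye)  # normalizes for scale
    return [
        euclidean(mouth_l, mouth_r) / interocular,  # mouth width
        euclidean(nose, mouth_l) / interocular,     # nose-to-mouth distance
        # mouth-corner angle: a crude indicator of a smile
        math.atan2(mouth_r[1] - mouth_l[1], mouth_r[0] - mouth_l[0]),
    ]

print(geometric_features({
    "left_eye": (120, 100), "right_eye": (180, 100),
    "nose_tip": (150, 140), "mouth_left": (130, 170), "mouth_right": (172, 166),
}))
```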
Step S101b: the terminal acquires the first expression attribute of the source object based on the feature points of the source object.
In one embodiment, the terminal trains a machine learning model based on training samples constructed from feature point samples and the target attributes labeled for the training samples, so that the machine learning model has the capability of predicting the corresponding target attribute from a training sample; the terminal then inputs the feature points of the source object into the pre-trained machine learning model, and acquires the first expression attribute of the source object by using the machine learning model.
Here, the target attribute refers to the expression attribute corresponding to a feature point sample; methods of training the machine learning model include, but are not limited to, algorithms such as SVM, Random Forests, and Adaboost.
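For illustration only, a minimal sketch of this training-and-prediction step with an SVM (one of the algorithms listed above) might look as follows; scikit-learn, the toy feature vectors, and the label set are assumptions of the example, not requirements of the embodiment.

```python
from sklearn.svm import SVC

# Each training sample is a feature vector derived from facial feature
# points (e.g., the geometric features sketched earlier); each label is
# the expression attribute annotated for that sample.
X_train = [
    [0.82, 0.55, 0.02],   # happy
    [0.65, 0.60, -0.10],  # sad
    [0.75, 0.58, 0.15],   # naughty
    [0.85, 0.54, 0.05],   # happy
]
y_train = ["happy", "sad", "naughty", "happy"]

model = SVC(kernel="rbf")
model.fit(X_train, y_train)

# At run time: feature points of the source object -> feature vector ->
# predicted first expression attribute.
source_features = [[0.80, 0.56, 0.04]]
first_expression_attribute = model.predict(source_features)[0]
print(first_expression_attribute)
```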
The first expression attribute is used to characterize the expression of the source object, such as a happy, joyful, sad, or naughty expression.
Of course, in practical applications, face detection can also be realized by face detection technologies such as iOS face detection, OpenCV face detection, Face++, SenseTime, or Tencent YouTu face detection, so as to acquire the first expression attribute of the source object.
Step S102: acquire the second expression attribute matching the first expression attribute.
Here, in actual implementation, the terminal searches, based on the first expression attribute, candidate expression images for the second expression attribute matching the first expression attribute. It can be understood that the terminal is provided with an expression matching library in which candidate expression images are stored, each candidate expression image matching at least one expression attribute. That is, each candidate expression image can match one or more expression attributes; taking a naughty candidate expression image as an example, its matching expression attributes may be sad and crying. Correspondingly, each expression attribute matches at least one candidate expression image; taking a sad expression attribute as an example, its matching candidate expression images may be naughty and cute.
In one embodiment, the matching relations between expression attributes and candidate expression images can be flexibly configured according to actual needs. For example, when the first expression attribute is sad, the second expression attributes matching the first expression attribute are naughty, funny-face, and so on; when the first expression attribute is happy, the second expression attributes matching the first expression attribute are laughing, giggling, and so on.
In one embodiment, when there are multiple second expression attributes matching the first expression attribute, one expression attribute may be randomly selected from the multiple second expression attributes, or one expression attribute may be selected according to a preset policy, for updating the target object in the second image.
Here, the preset policy at least includes: sorting the multiple second expression attributes matching the first expression attribute in descending order of their historical selection counts, and preferentially selecting the expression attribute ranked first; or sorting the multiple second expression attributes by their degree of association with the first expression attribute, and preferentially selecting the expression attribute with the highest degree of association with the first expression attribute.
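A minimal sketch of such an expression matching library and the preset selection policy is given below for illustration; the library contents and history counts are invented for the example.

```python
import random

# Toy expression matching library: each first expression attribute maps
# to its matching second expression attributes (assumed contents).
MATCH_LIBRARY = {
    "sad":   ["naughty", "funny_face"],
    "happy": ["laughing", "giggling"],
}
HISTORY_COUNTS = {"naughty": 7, "funny_face": 3, "laughing": 5, "giggling": 9}

def pick_second_attribute(first_attribute, policy="history"):
    candidates = MATCH_LIBRARY.get(first_attribute, [])
    if not candidates:
        return None
    if policy == "random":
        return random.choice(candidates)
    # Preset policy: descending order of historical selection count,
    # preferring the attribute ranked first.
    return max(candidates, key=lambda a: HISTORY_COUNTS.get(a, 0))

print(pick_second_attribute("sad"))                     # naughty (more history)
print(pick_second_attribute("happy", policy="random"))  # either candidate
```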
Step S103: update the target object in the second image based on the second expression attribute.
In actual implementation, the terminal fuses the target expression image corresponding to the second expression attribute with the target object in the second image.
In one embodiment, referring to Fig. 5, which is an optional flow diagram of fusing the target expression image corresponding to the second expression attribute with the target object in the second image, the method includes the following steps.
Step S103a: the terminal identifies feature points of the target expression image.
In one embodiment, the specific implementation process by which the terminal identifies the feature points of the target expression image is the same as the process by which the terminal identifies the feature points of the source object in the first image in step S101a, and is not repeated here.
Step S103b: the terminal determines the feature points of the same type in the target object in the second image and the target expression image.
In one embodiment, the feature points of the target expression image are first determined; by aligning the positions of the feature points, the size of the target object is adjusted to match the size of the target expression image; the positions of the same-type feature points of the target object and the target expression image are then determined according to the matching result. Based on the positions of the same-type feature points in the target expression image, the positions of the same-type feature points of the target object are adjusted; for example, if the same-type feature pair is a pair of mouth feature points, the position of the mouth feature point of the target object is adjusted to the position of the mouth feature point in the target expression image.
In another embodiment, the feature points of the target expression image are first determined; by aligning the positions of the feature points, the size of the target object is adjusted to match the size of the target expression image; the positions of the same-type feature points of the target object and the target expression image are then determined according to the matching result. Based on the positions of the same-type feature points in the target expression image, the positions of the same-type feature points of the target object are adjusted, and the positions of the feature points whose distances from the same-type feature points in the second image meet a first distance threshold are also updated.
Taking a face as an example of the object: by connecting the feature points of the target object as vertices of triangles according to their regional relations, several triangles can be obtained, realizing a triangulation of the face. Referring to Fig. 6, which is the schematic diagram of face triangulation provided in an embodiment of the present invention: if the position of any feature point is changed, the corresponding triangles are also distorted, which causes the face to deform. With triangles as the unit, by changing the positions of the feature points of the target object, the deformation of the biological features corresponding to those feature points (such as changes in eyebrow shape and direction) can be realized, so that the expression attribute of the target object becomes the same as, or similar to, the acquired second expression attribute.
As an example, the feature points can be chosen according to actual needs, such as a glabella feature point, nose feature points, or mouth feature points. The above size matching means making the sizes consistent: in actual implementation, on the basis of aligned reference feature points, an adjustment standard can also be chosen when adjusting the image; for example, the interpupillary distance is taken as the adjustment standard, and the target object is scaled to the same size as the target expression image.
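As an illustrative sketch of the triangle-based deformation described above (OpenCV and the helper below are assumptions of the example, not mandated by the embodiment), each triangle of the target object can be warped by the affine transform that maps it onto the corresponding triangle of the target expression image:

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, tri_src, tri_dst):
    """Warp one triangle of src_img onto the corresponding triangle of
    dst_img -- the per-triangle step of the face deformation above.
    tri_src / tri_dst are 3x2 float32 arrays of feature points."""
    r_src = cv2.boundingRect(np.float32([tri_src]))
    r_dst = cv2.boundingRect(np.float32([tri_dst]))
    # Triangle coordinates relative to their bounding boxes.
    t_src = [(p[0] - r_src[0], p[1] - r_src[1]) for p in tri_src]
    t_dst = [(p[0] - r_dst[0], p[1] - r_dst[1]) for p in tri_dst]

    patch = src_img[r_src[1]:r_src[1] + r_src[3], r_src[0]:r_src[0] + r_src[2]]
    M = cv2.getAffineTransform(np.float32(t_src), np.float32(t_dst))
    warped = cv2.warpAffine(patch, M, (r_dst[2], r_dst[3]),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)

    # Mask out everything outside the destination triangle, then paste.
    mask = np.zeros((r_dst[3], r_dst[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(t_dst), (1.0, 1.0, 1.0), 16, 0)
    roi = dst_img[r_dst[1]:r_dst[1] + r_dst[3], r_dst[0]:r_dst[0] + r_dst[2]]
    roi[:] = roi * (1 - mask) + warped * mask
```

In a full pipeline, the triangles would come from a triangulation of the aligned feature points (e.g., a Delaunay triangulation via cv2.Subdiv2D), as suggested by Fig. 6.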
In the embodiments of the present invention, the second image is an image presented in the source object's terminal; the target objects in the second image include, but are not limited to, people, animals, cartoon characters, animation figures, objects, and the like. The first expression attribute of the source object may or may not be generated based on the second image.
When the first expression attribute of the source object is generated based on the second image, the corresponding scenario is that the source object is watching the second image presented by the terminal and produces moods such as sorrow, care, or longing according to the target object of the second image (such as a deceased family member or a friend not seen for a long time), which are then expressed on the source object's face as the corresponding first expression attribute. As shown in Fig. 7, by identifying the first expression attribute of the source object, the terminal acquires the second expression attribute matching the first expression attribute (such as naughty, happy, or giggling), and replaces the expression attribute of the target object in the second image with the acquired second expression attribute. In this way, the expression attribute of the target object in the second image presented by the terminal can be changed according to the mood of the terminal user, realizing intelligent adaptation of the second image to the surrounding environment.
When the first expression attribute of the source object is not generated based on the second image, the corresponding scenario is that the source object produces moods such as sadness or gloom due to factors other than the second image; when the terminal detects that the visual focus of the source object is on the second image presented on the terminal interface, it acquires the second expression attribute matching the first expression attribute (such as naughty, happy, or giggling) and replaces the expression attribute of the target object in the second image with the acquired second expression attribute. In this way, the expression attribute of the target object in the second image presented by the terminal can be changed according to the mood of the terminal user, so that, after seeing the second image with the changed expression attribute, the source object can change his or her own negative mood; this not only realizes intelligent adaptation of the second image to the surrounding environment, but also improves the user experience. Here, the target object can be a cartoon figure such as Doraemon or Chibi Maruko Chan, or even a mere object such as an egg. Taking an egg as the target object: although the egg itself has no expression, as shown in Fig. 8, a second expression attribute can be added on the eggshell, i.e., a naughty, happy, or giggling expression is added on the eggshell of the egg.
Embodiment two
Fig. 9 shows another optional flow diagram of the image processing method provided in an embodiment of the present invention. The image processing method of this embodiment of the present invention is applied to a terminal and involves steps S201 to S204, which are described separately below.
Step S201: the terminal acquires the first expression attribute of the source object in the captured first image.
Here, the terminal first determines the source object in the first image. The first image may be a dynamic video captured by the terminal, or a still image captured by the terminal. Correspondingly, when the first image is a dynamic video captured by the terminal, the source object is the person in the dynamic video (i.e., the terminal user); when the first image is a still image captured by the terminal, the source object is the person in the still image (i.e., the terminal user).
After determining the source object, the terminal acquires the first expression attribute of the source object; the first expression attribute is used to characterize the expression of the user, such as happy, sad, or naughty.
Step S202: acquire one second expression attribute matching the first expression attribute.
Here, in actual implementation, the terminal searches, based on the first expression attribute, candidate expression images for the second expression attribute matching the first expression attribute. It can be understood that the terminal is provided with an expression matching library in which candidate expression images are stored, each candidate expression image matching at least one expression attribute. That is, each candidate expression image can match one or more expression attributes; taking a naughty candidate expression image as an example, its matching expression attributes may be sad and crying. Correspondingly, each expression attribute matches at least one candidate expression image; taking a sad expression attribute as an example, its matching candidate expression images may be naughty and cute.
In one embodiment, the matching relations between expression attributes and candidate expression images can be flexibly configured according to actual needs. For example, when the first expression attribute is sad, the second expression attributes matching the first expression attribute are naughty, funny-face, and so on; when the first expression attribute is happy, the second expression attributes matching the first expression attribute are laughing, giggling, and so on.
In this embodiment, when there are multiple second expression attributes matching the first expression attribute, one expression attribute may be randomly selected from the multiple second expression attributes, or one expression attribute may be selected according to a preset policy, for updating the target object in the second image.
Here, the preset policy at least includes: sorting the multiple second expression attributes matching the first expression attribute in descending order of their historical selection counts, and preferentially selecting the expression attribute ranked first; or sorting the multiple second expression attributes by their degree of association with the first expression attribute, and preferentially selecting the expression attribute with the highest degree of association with the first expression attribute.
Step S203: acquire one target object in the second image.
In one embodiment, when there is only one object in the second image, that object is taken as the target object in the second image.
In another embodiment, when there are multiple objects in the second image, the objects can be sorted according to their positions in the second image: if the multiple objects are arranged in the second image in order along a direction parallel to the display interface, the object in the middle of the sorted order is taken as the target object; if the multiple objects are arranged in the second image in order along a direction perpendicular to the display interface, the object sorted at the front is taken as the target object.
In another embodiment, when there are multiple objects in the second image, the terminal can collect the visual focus of the user through a camera or a sensor, and select the object in the second image closest to the position of the user's visual focus as the target object.
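For illustration, selecting the object nearest the user's visual focus might be sketched as follows; the object records and the focus coordinates are assumptions of the example.

```python
import math

def select_target(objects, focus):
    """Pick the object whose imaging-region center is nearest the
    user's visual focus. objects: list of (name, (x, y)); focus: (x, y)."""
    return min(objects, key=lambda o: math.dist(o[1], focus))

# Centers of the objects' imaging regions in the second image (assumed).
objects_in_second_image = [
    ("person_a", (80, 200)), ("cat", (300, 180)), ("person_b", (520, 210)),
]
visual_focus = (310, 190)  # from the camera / eye-tracking sensor
print(select_target(objects_in_second_image, visual_focus))  # ('cat', ...)
```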
Step S204: update the selected target object based on the second expression attribute.
In actual implementation, the terminal fuses the target expression image corresponding to the second expression attribute with the selected target object.
The process by which the terminal updates the selected target object based on the second expression attribute is the same as the specific implementation process of the above step S103, and is not repeated here.
By identifying the first expression attribute of the source object, the terminal acquires the second expression attribute matching the first expression attribute (such as naughty, happy, or giggling), and replaces the expression attribute of the target object in the second image with the acquired second expression attribute. In this way, the expression attribute of the target object in the second image presented by the terminal can be changed according to the mood of the terminal user, realizing intelligent adaptation of the second image to the surrounding environment.
Embodiment three
Fig. 10 shows another optional flow diagram of the image processing method provided in an embodiment of the present invention. The image processing method of this embodiment of the present invention is applied to a terminal and involves steps S301 to S305, which are described separately below.
Step S301: the terminal acquires the first expression attribute of the source object in the captured first image.
Here, the terminal first determines the source object in the first image. The first image may be a dynamic video captured by the terminal, or a still image captured by the terminal. Correspondingly, when the first image is a dynamic video captured by the terminal, the source object is the person in the dynamic video (i.e., the terminal user); when the first image is a still image captured by the terminal, the source object is the person in the still image (i.e., the terminal user).
After determining the source object, the terminal acquires the first expression attribute of the source object; the first expression attribute is used to characterize the expression of the user, such as happy, sad, or naughty.
Step S302: acquire multiple second expression attributes matching the first expression attribute.
Here, in actual implementation, the terminal searches, based on the first expression attribute, candidate expression images for the second expression attributes matching the first expression attribute. It can be understood that the terminal is provided with an expression matching library in which candidate expression images are stored, each candidate expression image matching at least one expression attribute. That is, each candidate expression image can match one or more expression attributes; taking a naughty candidate expression image as an example, its matching expression attributes may be sad and crying. Correspondingly, each expression attribute matches at least one candidate expression image; taking a sad expression attribute as an example, its matching candidate expression images may be naughty and cute.
In one embodiment, the matching relations between expression attributes and candidate expression images can be flexibly configured according to actual needs. For example, when the first expression attribute is sad, the second expression attributes matching the first expression attribute are naughty, funny-face, and so on; when the first expression attribute is happy, the second expression attributes matching the first expression attribute are laughing, giggling, and so on.
Step S303: acquire multiple target objects in the second image.
In this embodiment, there are multiple objects in the second image, and all of the objects in the second image are taken as target objects; that is, there are multiple target objects in the second image.
Step S304: select the second expression attributes to be used from the multiple acquired second expression attributes based on the number of target objects.
Here, in actual implementation, the terminal compares the number of acquired second expression attributes with the number of target objects. When the number of acquired second expression attributes is greater than the number of target objects, as many second expression attributes as there are target objects are selected from the multiple acquired second expression attributes as the second expression attributes to be used. Specifically, the acquired second expression attributes can be sorted by their historical usage counts, with the more frequently used second expression attributes ranked at the front, and the second expression attributes ranked at the front, equal in number to the target objects, are selected as the second expression attributes to be used. For example, suppose ten second expression attributes are acquired and there are six target objects: the ten acquired second expression attributes are sorted by historical usage count, i.e., the second expression attributes with more usages are ranked at the front, and the first six second expression attributes in the sorted order are selected as the second expression attributes to be used.
In actual implementation, when the number of acquired second expression attributes is less than the number of target objects, all of the acquired second expression attributes are used as the second expression attributes to be used. For example, suppose six second expression attributes are acquired and there are ten target objects: four second expression attributes need to be selected again from the six, and each of the four selected second expression attributes is used to update two different target objects.
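A minimal sketch covering both cases — more acquired attributes than target objects, and fewer — is given below for illustration; the attribute names and history counts are invented for the example, and cycling through the top-ranked attributes when there are too few is one possible reading of the scheme above.

```python
from itertools import cycle, islice

def attributes_to_use(acquired, history, num_targets):
    """Assign one second expression attribute per target object."""
    # Descending order of historical usage count, most used first.
    ranked = sorted(acquired, key=lambda a: history.get(a, 0), reverse=True)
    if len(ranked) >= num_targets:
        return ranked[:num_targets]            # one attribute per target
    # Fewer attributes than targets: reuse attributes cyclically so that
    # some attributes update more than one target object.
    return list(islice(cycle(ranked), num_targets))

history = {"laughing": 9, "giggling": 5, "naughty": 7, "funny_face": 2}
print(attributes_to_use(["laughing", "giggling", "naughty", "funny_face"], history, 3))
print(attributes_to_use(["laughing", "giggling"], history, 5))
```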
Step S305: update the multiple target objects based on the second expression attributes to be used.
In actual implementation, the terminal fuses the target expression images corresponding to the second expression attributes to be used with the respective target objects.
The process by which the terminal updates the multiple target objects based on the second expression attributes to be used is the same as the specific implementation process of the above step S103, and is not repeated here.
By identifying the first expression attribute of the source object, the terminal acquires the second expression attributes matching the first expression attribute (such as naughty, happy, or giggling), and replaces the expression attributes of the target objects in the second image with the acquired second expression attributes. In this way, the expression attributes of the target objects in the second image presented by the terminal can be changed according to the mood of the terminal user, realizing intelligent adaptation of the second image to the surrounding environment.
Embodiment four
Fig. 11 shows another optional flow diagram of the image processing method provided in an embodiment of the present invention. The image processing method of this embodiment of the present invention is applied to a terminal and involves steps S401 to S404, which are described separately below.
Step S401: the terminal acquires the first expression attribute of the source object in the captured first image.
Here, the terminal first determines the source object in the first image. The first image may be a dynamic video captured by the terminal, or a still image captured by the terminal. Correspondingly, when the first image is a dynamic video captured by the terminal, the source object is the person in the dynamic video (i.e., the terminal user); when the first image is a still image captured by the terminal, the source object is the person in the still image (i.e., the terminal user).
After determining the source object, the terminal acquires the first expression attribute of the source object; the first expression attribute is used to characterize the expression of the user, such as happy, sad, or naughty.
Step S402: acquire one second expression attribute matching the first expression attribute.
Here, in actual implementation, the terminal searches, based on the first expression attribute, candidate expression images for the second expression attribute matching the first expression attribute. It can be understood that the terminal is provided with an expression matching library in which candidate expression images are stored, each candidate expression image matching at least one expression attribute. That is, each candidate expression image can match one or more expression attributes; taking a naughty candidate expression image as an example, its matching expression attributes may be sad and crying. Correspondingly, each expression attribute matches at least one candidate expression image; taking a sad expression attribute as an example, its matching candidate expression images may be naughty and cute.
In one embodiment, the matching relations between expression attributes and candidate expression images can be flexibly configured according to actual needs. For example, when the first expression attribute is sad, the second expression attributes matching the first expression attribute are naughty, funny-face, and so on; when the first expression attribute is happy, the second expression attributes matching the first expression attribute are laughing, giggling, and so on.
In this embodiment, when there are multiple second expression attributes matching the first expression attribute, one expression attribute may be randomly selected from the multiple second expression attributes, or one expression attribute may be selected according to a preset policy, for updating the target objects in the second image.
Here, the preset policy at least includes: sorting the multiple second expression attributes matching the first expression attribute in descending order of their historical selection counts, and preferentially selecting the expression attribute ranked first; or sorting the multiple second expression attributes by their degree of association with the first expression attribute, and preferentially selecting the expression attribute with the highest degree of association with the first expression attribute.
Step S403: acquire multiple target objects in the second image.
In this embodiment, there are multiple objects in the second image, and all of the objects in the second image are taken as target objects; that is, there are multiple target objects in the second image.
Step S404: update the multiple target objects based on the acquired second expression attribute.
In actual implementation, the terminal fuses the target expression image corresponding to the acquired second expression attribute with each of the target objects respectively, so that the multiple target objects in the second image have the same second expression attribute.
The process by which the terminal updates the multiple target objects based on the second expression attribute is the same as the specific implementation process of the above step S103, and is not repeated here.
By identifying the first expression attribute of the source object, the terminal acquires the second expression attribute matching the first expression attribute (such as naughty, happy, or giggling), and replaces the expression attributes of the target objects in the second image with the acquired second expression attribute. In this way, the expression attributes of the target objects in the second image presented by the terminal can be changed according to the mood of the terminal user, realizing intelligent adaptation of the second image to the surrounding environment.
Embodiment five
Embodiment five of the present invention also provides an electronic device, the electronic device including:
a memory, for storing an executable program; and
a processor which, by executing the executable program stored in the memory, implements:
acquiring a first expression attribute of a source object in a captured first image;
acquiring a second expression attribute matching the first expression attribute; and
updating a target object in a second image based on the second expression attribute.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
identifying feature points of the source object in the first image; and acquiring the first expression attribute of the source object based on the feature points of the source object.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
training a machine learning model based on training samples constructed from feature point samples and the target attributes labeled for the training samples, so that the machine learning model has the capability of predicting the corresponding target attribute from a training sample; and inputting the feature points of the source object into the pre-trained machine learning model, and acquiring the first expression attribute of the source object by using the machine learning model.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
searching, based on the first expression attribute, candidate expression images for a set of target expression images matching the first expression attribute; selecting one target expression image from the set of target expression images; and determining the expression attribute corresponding to the selected target expression image as the second expression attribute.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
searching, based on the first expression attribute, candidate expression images for a set of target expression images matching the first expression attribute; determining the number of target objects in the second image; selecting, from the set of target expression images, as many target expression images as there are target objects; and determining the expression attributes corresponding to the selected target expression images as second expression attributes.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
identifying at least one object in the second image; selecting one object from the at least one object in the second image as the target object in the second image; and updating the selected object based on the second expression attribute.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
identifying the objects in the second image; taking all of the identified objects as target objects in the second image; and updating all of the identified objects based on the second expression attribute.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
fusing the target expression image corresponding to the second expression attribute with the target object in the second image, so as to update the target object in the second image.
In the embodiments of the present invention, the processor, when running the computer program, further executes:
identifying feature points of the target expression image; determining feature points of the same type in the target object in the second image and the target expression image; and updating, based on the positions of the feature points of the target expression image, the positions of the feature points of the same type in the second image, and updating the positions of the feature points whose distances from the same-type feature points in the second image meet a first distance threshold.
An embodiment of the present invention also provides a readable storage medium. The storage medium may include: a removable storage device, a random access memory (RAM, Random Access Memory), a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disc, or other media that can store program code. The readable storage medium stores an executable program;
the executable program, when executed by a processor, implements:
acquiring a first expression attribute of a source object in a captured first image;
acquiring a second expression attribute matching the first expression attribute; and
updating a target object in a second image based on the second expression attribute.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Identifying feature points of the source object in the first image; and obtaining the first expression attribute of the source object based on the feature points of the source object.
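For illustration, a sketch of the feature-point identification step using dlib's 68-point landmark predictor; the detector choice and the model file are assumptions, as the disclosure does not name a specific landmark detector.

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def identify_source_feature_points(first_image_path):
        image = cv2.imread(first_image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            return []
        # Take the first detected face as the source object.
        shape = predictor(gray, faces[0])
        return [(point.x, point.y) for point in shape.parts()]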
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Training a machine learning model based on training samples constructed from sample feature points and the target attributes labeling the training samples, so that the machine learning model has the capability of predicting the corresponding target attribute from a training sample; inputting the feature points of the source object into the pre-trained machine learning model, and obtaining the first expression attribute of the source object by using the machine learning model.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Searching, based on the first expression attribute, candidate expression images for a target expression image set matching the first expression attribute; selecting one target expression image from the target image set; and determining the expression attribute corresponding to the selected target expression image as the second expression attribute.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Searching, based on the first expression attribute, candidate expression images for a target expression image set matching the first expression attribute; determining the quantity of target objects in the second image; selecting, from the target image set, target expression images of a quantity identical to the quantity of the target objects; and determining the expression attribute corresponding to the selected target expression images as the second expression attribute.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Identifying at least one object in the second image; selecting one object from the at least one object in the second image as the target object in the second image; and performing update processing on the selected object based on the second expression attribute.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Identifying the objects in the second image; taking all of the identified objects as the target objects in the second image; and performing update processing on all of the identified objects based on the second expression attribute.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Performing fusion processing on the target expression image corresponding to the second expression attribute and the target object in the second image, so as to update the target object in the second image.
In an embodiment of the present invention, the executable program, when executed by a processor, further implements:
Identifying the feature points of the target expression image; determining the feature points of the same type shared by the target object in the second image and the target expression image; and updating, based on the positions of the feature points of the target expression image, the positions of the feature points of the same type in the second image to positions whose distance from the feature points of the same type in the second image satisfies a first distance threshold.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, or an executable program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of an executable program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the executable program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by executable program instructions. These executable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These executable program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These executable program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are executed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description covers merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
obtaining a first expression attribute of a source object in an acquired first image;
obtaining a second expression attribute matching the first expression attribute;
performing update processing on a target object in a second image based on the second expression attribute.
2. The method according to claim 1, characterized in that the obtaining a first expression attribute of a source object in an acquired first image comprises:
identifying feature points of the source object in the first image;
obtaining the first expression attribute of the source object based on the feature points of the source object.
3. The method according to claim 2, characterized in that the obtaining the first expression attribute of the source object based on the feature points of the source object comprises:
training a machine learning model based on training samples constructed from sample feature points and the target attributes labeling the training samples, so that the machine learning model has the capability of predicting the corresponding target attribute from a training sample;
inputting the feature points of the source object into the pre-trained machine learning model, and obtaining the first expression attribute of the source object by using the machine learning model.
4. The method according to claim 1, characterized in that the obtaining a second expression attribute matching the first expression attribute comprises:
searching, based on the first expression attribute, candidate expression images for a target expression image set matching the first expression attribute;
selecting one target expression image from the target image set;
determining the expression attribute corresponding to the selected target expression image as the second expression attribute.
5. The method according to claim 1, characterized in that the obtaining a second expression attribute matching the first expression attribute comprises:
searching, based on the first expression attribute, candidate expression images for a target expression image set matching the first expression attribute;
determining the quantity of target objects in the second image;
selecting, from the target image set, target expression images of a quantity identical to the quantity of the target objects;
determining the expression attribute corresponding to the selected target expression images as the second expression attribute.
6. The method according to claim 1, characterized in that the performing update processing on a target object in a second image based on the second expression attribute comprises:
identifying at least one object in the second image;
selecting one object from the at least one object in the second image as the target object in the second image;
performing update processing on the selected object based on the second expression attribute.
7. The method according to claim 1, characterized in that the performing update processing on a target object in a second image based on the second expression attribute comprises:
identifying the objects in the second image;
taking all of the identified objects as the target objects in the second image;
performing update processing on all of the identified objects based on the second expression attribute.
8. The method according to claim 6 or 7, characterized in that the performing update processing on a target object in the second image based on the second expression attribute comprises:
performing fusion processing on the target expression image corresponding to the second expression attribute and the target object in the second image, so as to update the target object in the second image.
9. The method according to claim 8, characterized in that the performing fusion processing on the target expression image corresponding to the second expression attribute and the target object in the second image comprises:
identifying feature points of the target expression image;
determining the feature points of the same type shared by the target object in the second image and the target expression image;
updating, based on the positions of the feature points of the target expression image, the positions of the feature points of the same type in the second image to positions whose distance from the feature points of the same type in the second image satisfies a first distance threshold.
10. Electronic equipment, characterized in that the electronic equipment comprises:
a memory, configured to store an executable program;
a processor, configured to implement the following by executing the executable program stored in the memory:
obtaining a first expression attribute of a source object in an acquired first image;
obtaining a second expression attribute matching the first expression attribute;
performing update processing on a target object in a second image based on the second expression attribute.
CN201810047955.0A 2018-01-18 2018-01-18 Image processing method and electronic equipment Active CN108334821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810047955.0A CN108334821B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810047955.0A CN108334821B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108334821A true CN108334821A (en) 2018-07-27
CN108334821B CN108334821B (en) 2020-12-18

Family

ID=62925258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810047955.0A Active CN108334821B (en) 2018-01-18 2018-01-18 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108334821B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123749A (en) * 2014-07-23 2014-10-29 邢小月 Picture processing method and system
CN106303233A (en) * 2016-08-08 2017-01-04 西安电子科技大学 A kind of video method for secret protection merged based on expression
CN106599817A (en) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Face replacement method and device
CN106803909A (en) * 2017-02-21 2017-06-06 腾讯科技(深圳)有限公司 The generation method and terminal of a kind of video file
CN107222675A (en) * 2017-05-23 2017-09-29 维沃移动通信有限公司 The photographic method and mobile terminal of a kind of mobile terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110072047A (en) * 2019-01-25 2019-07-30 北京字节跳动网络技术有限公司 Control method, device and the hardware device of image deformation
WO2020151491A1 (en) * 2019-01-25 2020-07-30 北京字节跳动网络技术有限公司 Image deformation control method and device and hardware device
US11409794B2 (en) 2019-01-25 2022-08-09 Beijing Bytedance Network Technology Co., Ltd. Image deformation control method and device and hardware device
CN111784787A (en) * 2019-07-17 2020-10-16 北京沃东天骏信息技术有限公司 Image generation method and device
CN111784787B (en) * 2019-07-17 2024-04-09 北京沃东天骏信息技术有限公司 Image generation method and device

Also Published As

Publication number Publication date
CN108334821B (en) 2020-12-18

Similar Documents

Publication Publication Date Title
US10360710B2 (en) Method of establishing virtual makeup data and electronic device using the same
Yang et al. Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets
US20210366163A1 (en) Method, apparatus for generating special effect based on face, and electronic device
Caridakis et al. Modeling naturalistic affective states via facial and vocal expressions recognition
US10223838B2 (en) Method and system of mobile-device control with a plurality of fixed-gradient focused digital cameras
WO2021213067A1 (en) Object display method and apparatus, device and storage medium
CN110674664A (en) Visual attention recognition method and system, storage medium and processor
CN107145833A (en) The determination method and apparatus of human face region
JP2014516490A (en) Personalized program selection system and method
US12056927B2 (en) Systems and methods for generating composite media using distributed networks
CN107368182B (en) Gesture detection network training, gesture detection and gesture control method and device
CN110298380A (en) Image processing method, device and electronic equipment
CN105453070A (en) Machine learning-based user behavior characterization
JP2020507159A (en) Picture push method, mobile terminal and storage medium
CN105430269B (en) A kind of photographic method and device applied to mobile terminal
CN108198130A (en) Image processing method, device, storage medium and electronic equipment
CN112669422B (en) Simulated 3D digital person generation method and device, electronic equipment and storage medium
CN110619656A (en) Face detection tracking method and device based on binocular camera and electronic equipment
KR20150064977A (en) Video analysis and visualization system based on face information
Chalup et al. Simulating pareidolia of faces for architectural image analysis
CN108334821A (en) A kind of image processing method and electronic equipment
CN110502959A (en) Sexual discriminating method, apparatus, storage medium and electronic equipment
Bacivarov et al. Smart cameras: 2D affine models for determining subject facial expressions
CN112149599B (en) Expression tracking method and device, storage medium and electronic equipment
CN111723758B (en) Video information processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant