CN110363718A - Face image processing method, apparatus, medium and electronic device - Google Patents
Face image processing method, apparatus, medium and electronic device
- Publication number
- CN110363718A CN110363718A CN201910583758.5A CN201910583758A CN110363718A CN 110363718 A CN110363718 A CN 110363718A CN 201910583758 A CN201910583758 A CN 201910583758A CN 110363718 A CN110363718 A CN 110363718A
- Authority
- CN
- China
- Prior art keywords
- mandible
- to-be-processed
- special effect
- facial image
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the disclosure provide a face image processing method, apparatus, medium and electronic device. The method comprises: obtaining multiple to-be-processed facial images; receiving a local special-effect operation performed by a user on one of the to-be-processed facial images, the local special-effect operation including a special-effect operation for adjusting the face's mandible; performing mandible-adjustment special-effect processing on that to-be-processed facial image according to the mandible-adjustment special-effect operation; judging whether the multiple to-be-processed facial images include to-be-processed facial images of the same person; and, if they do, synchronizing the special-effect processing result to the to-be-processed facial images of that person. In the disclosed embodiments, when a selection operation indicating that the user wants to add the mandible-adjustment effect is received, the corresponding effect can be added to the to-be-processed facial image, greatly reducing the time needed for mandible-adjustment special-effect processing.
Description
Technical field
This disclosure relates to the technical field of image processing, and in particular to a face image processing method, apparatus, medium and electronic device.
Background technique
As smart devices become ever more widely used, obtaining facial images becomes ever more convenient, and users' requirements for facial images grow accordingly. With existing beautification tools, a user can apply beautification processing to a facial image to improve its appearance, for example skin whitening, exposure adjustment and face-shape retouching. However, existing beautification tools either apply a preset automatic beautification that the user cannot influence — the smart device produces a result according to a preset algorithm, and that automatic result sometimes differs noticeably from the face, shows excessive distortion, and prevents the user from making local adjustments as needed — or they require the user to perform complicated manual operations amounting to full photo editing. Tools of the latter kind are complicated to operate and may require lengthy learning before they can be used smoothly; for example, to adjust the face shape, the user must manipulate the face contour manually from multiple angles to obtain a satisfactory result. For users unskilled at manual image adjustment, beautifying an image by manual adjustment not only rarely achieves the intended effect but is also time-consuming, giving a poor user experience. Moreover, when several facial images need beautification, they must be processed one by one, which is laborious and inefficient.

Therefore, existing beautification tools either cannot perform local adjustment, produce poor automatic results, require complicated operation, or work inefficiently, and they cannot satisfy users' practical needs.
Summary of the invention
The purpose of the disclosure is to provide a face image processing method, apparatus, medium and electronic device able to solve at least one of the technical problems mentioned above. The specific scheme is as follows:

According to a specific embodiment of the disclosure, in a first aspect, the disclosure provides a face image processing method, comprising:

obtaining multiple to-be-processed facial images;

receiving a local special-effect operation performed by a user on one of the to-be-processed facial images, the local special-effect operation including a special-effect operation for adjusting the face's mandible;

performing mandible-adjustment special-effect processing on the one to-be-processed facial image according to the mandible-adjustment special-effect operation;

judging whether the multiple to-be-processed facial images include to-be-processed facial images of the same person;

if to-be-processed facial images of the same person are included, synchronizing the special-effect processing result to the to-be-processed facial images of that person.
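Purely as an illustrative sketch of the claimed flow — the function name and the stand-in callbacks below are assumptions for illustration, not part of the patent — the four steps could be outlined as:

```python
def process_face_images(images, detect_person_id, apply_jaw_effect):
    """Illustrative pipeline: adjust one image, then sync to same-person images.

    images            -- list of to-be-processed facial images
    detect_person_id  -- callable mapping an image to a person identity
    apply_jaw_effect  -- callable applying the mandible-adjustment effect
    """
    # Receive the local special-effect operation on one of the images
    target = images[0]                     # the image the user edited
    result = apply_jaw_effect(target)      # mandible-adjustment processing

    # Judge which other images show the same person
    person = detect_person_id(target)
    same_person = [im for im in images[1:] if detect_person_id(im) == person]

    # Synchronize the special-effect processing result to those images
    synced = [apply_jaw_effect(im) for im in same_person]
    return result, synced
```

The identity check and the effect itself are deliberately left abstract; the patent only requires that the result of one operation be propagated to matching images.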
Optionally, performing mandible-adjustment special-effect processing on the one to-be-processed facial image according to the mandible-adjustment special-effect operation comprises:

determining a coordinate origin for the to-be-processed facial image;

determining current feature points of the mandible in the to-be-processed facial image, and determining current characteristic values of the current feature points relative to the coordinate origin;

performing the special-effect adjustment on the mandible according to the current characteristic values.

Optionally, performing the special-effect adjustment on the mandible according to the current characteristic values comprises:

determining a movement vector for each current feature point according to the current characteristic values and the local special-effect operation, the current movement vector comprising a current movement direction and a current movement distance;

receiving a start-adjustment instruction and, according to the movement vectors of the current feature points, moving the current mandible feature points to the post-movement mandible feature points matching the adjustment instruction;

determining a mandible adjustment region from the post-movement mandible feature points and the current feature points;

rendering the mandible adjustment region to obtain the adjusted face mandible image.
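As a hedged illustration of the region-determination step — the patent does not specify how the region is delimited, so the bounding-box choice below is an assumption:

```python
def adjustment_region(current_pts, moved_pts):
    """Axis-aligned bounding box covering both the current and the moved
    mandible feature points: the area that must be re-rendered after the
    adjustment. Points are (x, y) pixel coordinates."""
    xs = [p[0] for p in current_pts + moved_pts]
    ys = [p[1] for p in current_pts + moved_pts]
    return (min(xs), min(ys), max(xs), max(ys))  # x0, y0, x1, y1
```

In practice a renderer would likely warp a polygonal region bounded by the two contours rather than a rectangle; the sketch only fixes the extent of the region.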
Optionally, determining the movement vector of a current feature point according to the current characteristic value and the local special-effect operation comprises:

determining the start position of the local special-effect operation according to the current characteristic value;

determining the end position of the mandible feature point according to the local special-effect operation;

determining the movement vector of the current feature point from the start position and the end position.

Optionally, in the movement vector of the current feature point determined from the start position and the end position:

the movement direction of the current feature point is straight down or straight up;

the movement distance of the current feature point is the difference between the start-position coordinate and the end-position coordinate.
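A minimal sketch of this movement-vector rule, assuming the lower-left-origin, Y-up pixel frame described later in the embodiment (the function name is invented for illustration):

```python
def movement_vector(start, end):
    """Per the claim, the move is purely vertical: the direction is
    straight up or straight down, and the distance is the difference
    between the start and end y-coordinates. Assumes Y grows upward."""
    dy = end[1] - start[1]
    direction = "up" if dy > 0 else "down"
    return direction, abs(dy)
```

With a top-left-origin buffer the up/down test would simply flip sign; only the vertical component matters under this claim.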
Optionally, receiving a local special-effect operation performed by the user on one of the to-be-processed facial images, the local special-effect operation including a mandible-adjustment special-effect operation, comprises:

receiving a trigger operation by the user for face mandible adjustment on one of the to-be-processed facial images;

in response to the trigger operation, displaying a sliding interface control, the sliding control displaying a face mandible adjustment parameter, the face mandible adjustment parameter identifying the mandible adjustment intensity.
Optionally, if to-be-processed facial images of the same person are included, synchronizing the special-effect processing result to the to-be-processed facial images of that person comprises:

if to-be-processed facial images of the same person are included, providing a prompt interface through which the user chooses whether to synchronize the processing;

if an instruction confirming synchronization is received, synchronizing the special-effect processing result to the to-be-processed facial images of the same person.
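The optional confirmation flow could be sketched as follows; the `confirm` callback stands in for the prompt interface and is an assumption, not part of the patent:

```python
def maybe_sync(result, same_person_images, confirm):
    """Ask the user (via the confirm callback, standing in for the
    prompt interface) whether to propagate the special-effect result
    to the other to-be-processed images of the same person."""
    if not same_person_images:
        return []              # nothing to synchronize
    if confirm():              # user confirmed via the prompt interface
        return [result for _ in same_person_images]
    return []                  # user declined synchronization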
According to a specific embodiment of the disclosure, in a second aspect, the disclosure provides a face image processing apparatus, comprising:

an acquiring unit for obtaining multiple to-be-processed facial images;

a receiving unit for receiving a local special-effect operation performed by a user on one of the to-be-processed facial images, the local special-effect operation including a mandible-adjustment special-effect operation;

a processing unit for performing mandible-adjustment special-effect processing on the one to-be-processed facial image according to the mandible-adjustment special-effect operation;

a judging unit for judging whether the multiple to-be-processed facial images include to-be-processed facial images of the same person;

a synchronization unit for synchronizing the special-effect processing result to the to-be-processed facial images of the same person, if such images are included.
According to a specific embodiment of the disclosure, in a third aspect, the disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements any of the methods described above.

According to a specific embodiment of the disclosure, in a fourth aspect, the disclosure provides an electronic device comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the methods described above.
Compared with the prior art, the above scheme of the disclosed embodiments has at least the following advantages. With the face image processing method, apparatus, electronic device and computer-readable storage medium of the disclosed embodiments, when the user needs to process the mandible in a facial image, the user opens an image captured in real time or an image stored on the device and triggers the special-effect operation for the mandible region on the display interface, whereupon special-effect processing of the mandible region of the facial image is performed; meanwhile, the operation performed on one image can be synchronized to the other images, improving the efficiency of beautification. With this scheme, the face mandible images of one or more people can be retouched in an easy-to-use way, and a natural retouching effect is obtained through background rendering, removing the aesthetic defect of a prominent mandible. The entire retouching can be completed with a one-key operation, without the user manually performing complicated edits on the mandible region of the facial image, which greatly reduces the time needed for mandible-adjustment special-effect processing, improves the user's interactive experience and the execution efficiency, and gives the scheme considerable market value.
Detailed description of the invention
To explain the technical solutions in the embodiments of the disclosure more clearly, the drawings needed for describing the embodiments are briefly introduced below.
Fig. 1 is a flow diagram of the image processing method provided by an embodiment of the disclosure;

Fig. 2 is a schematic diagram of the special-effect selection interface in an embodiment of the disclosure;

Fig. 3 is a schematic diagram of the special-effect sliding interface in an embodiment of the disclosure;

Fig. 4 is a schematic diagram of the movement direction of mandible feature points in an embodiment of the disclosure;

Fig. 5 is a structural diagram of the image processing apparatus provided by an embodiment of the disclosure;

Fig. 6 is a structural diagram of the processing unit provided by an embodiment of the disclosure;

Fig. 7 is a structural diagram of an electronic device provided by an embodiment of the disclosure.
Specific embodiment
To make the purposes, technical solutions and advantages of the disclosure clearer, the disclosure is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the disclosure, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the disclosure without creative effort fall within the scope of protection of the disclosure.

The terms used in the embodiments of the disclosure are only for describing particular embodiments and are not intended to limit the disclosure. The singular forms "a", "said" and "the" used in the embodiments of the disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise; "multiple" generally includes at least two.

It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it.

It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the disclosure to describe various items, those items should not be limited by these terms; the terms are only used to distinguish the items from one another. For example, without departing from the scope of the disclosed embodiments, a "first" item may also be called a "second" item and, similarly, a "second" item may also be called a "first" item.

Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".

It should also be noted that the terms "include", "comprise" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a product or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a product or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the product or device including that element.

Alternative embodiments of the disclosure are described in detail below with reference to the drawings.
Embodiment 1
As shown in Fig. 1, according to a specific embodiment of the disclosure, a face image processing method is provided, comprising the following steps:

Step S102: obtain multiple to-be-processed facial images.

A to-be-processed facial image contains the corresponding facial parts, for example the mandible, eyes, nose, mouth, ears, chin, cheeks and face contour. A to-be-processed image may contain the face of one person or the faces of multiple people, and may be color, black-and-white, or any facial image with a shooting effect applied. The to-be-processed facial image may be a video frame or a photograph, and may be obtained in real time and processed immediately, or stored and opened for processing later, for example a previously captured facial image opened from a picture storage folder.
The means of obtaining the to-be-processed facial image is not restricted in any way. For example, it may be captured by a terminal device with a shooting function; a terminal device here refers to an electronic product with an image-capture function, such as a beauty camera, a smartphone or a tablet computer. The user can input a camera start instruction through an input device of the terminal, such as a touch screen or a physical button, to put the terminal's camera into shooting mode and acquire the facial image collected by the camera. The camera may be a built-in camera of the terminal, such as a front or rear camera, or an external camera such as a rotating camera; optionally, it is a front camera.

Alternatively, after the facial image has been captured by such a camera-equipped terminal and stored in the device itself or on external storage, it can be uploaded to an image processing apparatus, for example a desktop computer or server without a shooting function, on which a corresponding program is pre-installed to process imported images; in that case, the to-be-processed facial image is obtained by opening the stored image through the local device.
Here, "multiple" means at least two. While image processing is performed on one of the images, the others remain in a to-be-processed state, ready at any time to receive an instruction on whether to synchronize the processing. The to-be-processed images may be located in the same photo viewer or in the same folder; when images are synchronized, the file search is executed under the same root directory.
Step S104: receive a local special-effect operation performed by the user on one of the to-be-processed facial images, the local special-effect operation including a mandible-adjustment special-effect operation.

Several pictures — from a photo viewer, from the same folder, or captured in real time — are shown on the display interface, and the interface can be slid to browse the images located in the same editable region. When one of them is selected for a beautification operation, the other images under the same directory enter a waiting state.
Here, the local special-effect operation represents the user's selection of an operation to be applied to the mandible, eyes, nose, mouth, ears, chin, cheeks, face contour, etc. of the to-be-processed facial image. For example, the user selects the mandible-adjustment special-effect button in the user interface of the terminal; special-effect buttons for other parts can of course also be selected. As shown in Fig. 2, the user can select the beautification special-effect operation button for any facial part on the display interface. The terminal can detect the trigger operation on the interface of the client application and respond to it; for example, after detecting the special-effect selection operation, it can learn from the user's action that the user wants to add the mandible-adjustment effect.
In practical applications, the operation can be recognized through an associated trigger of the client, for example a designated trigger button or input box on the client interface, or it can be a voice instruction from the user. Specifically, as shown in Fig. 2, virtual buttons for the various adjustment effects can be displayed on the client display interface — for example, a virtual button for mandible-adjustment effects, one for eye-adjustment effects, and one for mouth-adjustment effects — and the user's click on any such button constitutes the user's special-effect selection operation; the user's click on the mandible-adjustment virtual button is the special-effect selection action of the user.
Optionally, step S104 further includes: receiving a trigger operation by the user for face mandible adjustment on the to-be-processed facial image; and, in response to the trigger operation, displaying a sliding interface control, the sliding control displaying a face mandible adjustment parameter that identifies the mandible adjustment intensity.
After entering the selected special-effect processing interface — for example, the display interface for adjusting the mandible — a special-effect sliding selection interface can be further displayed, as shown in Fig. 3. The special-effect adjustment index can be identified numerically and with a graphical slider; different values on the slider represent different adjustment intensities and can be associated with the movement distance between adjustment feature points. For example, a feature difference between feature points computed as 100 can be displayed directly on the slider, or it can be scaled up or down proportionally through a background mapping table: a feature difference computed as 100 may be displayed as the value 10, reducing the display scale on the slider; conversely, a feature difference computed as 10 may be identified on the slider with a scale of 20, enhancing adjustment sensitivity. The sizes of the characteristic values and of the displayed interface values can be configured as circumstances require, whatever favors practical operation being preferable.
In the practical operation example shown in Fig. 3, the user can perform sliding selection of the mandible adjustment amplitude through a sliding button on the client display interface. Fig. 3 is a schematic user interface for adjusting the special-effect adjustment index: an image preview area can display the facial image in real time, the user can select different adjustment amplitudes by sliding the special-effect adjustment button, the effect of the adjustment is shown in the preview area in real time, and the user can end the adjustment whenever desired.
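The scaling between the computed feature difference and the slider value could be sketched as below; the linear mapping and the `scale` factor are assumptions standing in for the background mapping table:

```python
def slider_value(feature_delta, scale):
    """Map the computed feature difference to the value shown on the
    sliding interface: scale < 1 compresses the display (e.g. 100 -> 10
    with scale 0.1), scale > 1 magnifies it for finer control (e.g.
    10 -> 20 with scale 2), matching the examples in the text."""
    return feature_delta * scale

def displacement(slider, scale):
    """Inverse mapping: slider position back to pixel displacement."""
    return slider / scale
```

Any monotonic mapping would serve; a table lookup, as the text suggests, would simply replace the multiplication.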
Step S106: perform mandible-adjustment special-effect processing on the one to-be-processed facial image according to the mandible-adjustment special-effect operation.

After the user selects an image to be processed and determines that the mandible in the facial image needs processing, triggering the special-effect selection operation on the to-be-processed facial image performs the special-effect processing of the mandible region of the image. With this scheme, when a selection operation indicating that the user wants to add the mandible-adjustment effect is received, the corresponding effect can be added to the to-be-processed facial image — that is, a one-key adjustment of the mandible region in the image — without the user manually editing the mandible region of the facial image, which greatly reduces the time needed for mandible-adjustment special-effect processing and improves the user's interactive experience.
Optionally, step S106 includes the following sub-steps:

Step S1061: determine the coordinate origin of the to-be-processed facial image.

The choice of coordinate system can include, without limitation, a rectangular coordinate system, a Cartesian coordinate system, etc.; for convenience, this embodiment is described using a rectangular coordinate system.

The coordinate origin can be selected according to image processing convenience; for example, a fixed position of the image processing region, such as the upper-left, upper-right, lower-left or lower-right corner, can be chosen as the coordinate origin of the to-be-processed image. Specifically, in one implementation, the lower-left corner of the image is chosen as the coordinate origin (0, 0), with the X-axis pointing right and the Y-axis pointing up; the coordinate origin can of course be defined in any of the above ways and is not limited to this definition. In addition, the coordinates of each point on the to-be-processed image can be determined in units of pixels; for example, a point one pixel away from the origin has coordinates (0, 1) or (1, 0).
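Since most image buffers index rows from the top, a small conversion into the lower-left-origin frame chosen here may be useful; this helper is illustrative only and not part of the patent:

```python
def to_image_coords(col, row, image_height):
    """Convert column/row pixel indices (row 0 at the top, as stored in
    typical image buffers) to the frame used here: origin at the
    lower-left corner, X to the right, Y upward, in pixel units."""
    x = col                        # column index is already the X offset
    y = image_height - 1 - row     # flip rows so Y grows upward
    return x, y
```

With this convention, moving the mandible "down" corresponds to decreasing Y, consistent with the movement-direction claim.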
Step S1062: determine the current feature points of the mandible in the to-be-processed facial image, and determine the current characteristic values of the current feature points relative to the coordinate origin.

It should be noted that the mandible feature points in the embodiments of the disclosure are feature points identifying the contour of the mandible of the user in the to-be-processed facial image. The specific method of determining the mandible feature points of the mandible region, and which positions' feature points need to be detected, can be pre-configured according to actual needs and are not specifically limited by the embodiments of the disclosure. For example, the mandible feature points of the mandible region in the to-be-processed facial image can be detected directly, or the feature points of other facial parts in the to-be-processed facial image can be detected and the mandible feature points calculated from them.

To determine the current feature points of the mandible, it is first necessary to determine roughly within which coordinate range of the image the mandible lies; the current feature points of the mandible are then obtained through an automatic image recognition SDK, and their positions and coordinates, combined with the origin, determine the current characteristic values of the mandible. For example, three current feature points b1, b2, b3 of the mandible are determined, with current characteristic values b1(x′1, y′1), b2(x′2, y′2), b3(x′3, y′3).
Step S1064: carry out special-effect adjustment processing on the mandible according to the current feature values.
Optionally, step S1064 includes:
First, determine the movement vectors of the current feature points according to the current feature values and the local special-effect operation, the movement vectors including a current movement direction and a current movement distance.
For example, the start positions of the local special-effect operation are determined from the current feature values, e.g. coordinates b1(x′1, y′1), b2(x′2, y′2), b3(x′3, y′3); the end positions of the mandible feature points are determined from the local special-effect operation, e.g. coordinates a1(x1, y1), a2(x2, y2), a3(x3, y3); and the movement vectors of the current feature points are determined from the start positions and the end positions.
As above, the current feature values of the current feature points are determined to be b1(x′1, y′1), b2(x′2, y′2), b3(x′3, y′3), and a local special-effect operation stretching the current feature points of the mandible downward is detected, the stretch direction being straight down and converging on the end-point feature values a1(x1, y1), a2(x2, y2), a3(x3, y3). The movement vectors of the current feature points are determined from the above current feature values and the local special-effect operation. Specifically:
The movement direction of the current mandible feature points determined by the local operation is a downward stretch, and the movement distance of each current feature point is the difference between its current feature value and its feature value after the movement. The movement vectors of the mandible can thus be determined as the vectors b1a1, b2a2, b3a3, with current movement distances |b1a1|, |b2a2|, |b3a3|.
The adjustment direction is not fixed; the reverse adjustment, such as shrinking upward, may also be chosen.
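The vector computation described above can be sketched as follows. This is an illustrative implementation under assumed data shapes: current points b1..b3 and end points a1..a3 are coordinate pairs, the movement vector of each feature point is the coordinate difference, and the movement distance is its Euclidean norm:

```python
# Hypothetical sketch of the movement-vector step: for each feature point,
# the vector from its current position b to its end position a, and the
# distance |ba| as the Euclidean norm of that vector.
import math

def movement_vectors(current_points, target_points):
    vectors = []
    for (bx, by), (ax, ay) in zip(current_points, target_points):
        dx, dy = ax - bx, ay - by      # movement vector, e.g. vec(b1 a1)
        dist = math.hypot(dx, dy)      # movement distance, e.g. |b1 a1|
        vectors.append(((dx, dy), dist))
    return vectors

# A straight-down stretch: each point moves down by 5 pixels.
b = [(10, 40), (20, 35), (30, 40)]
a = [(10, 35), (20, 30), (30, 35)]
for (dx, dy), dist in movement_vectors(b, a):
    assert dx == 0 and dy == -5 and dist == 5.0
```

A reverse adjustment (shrinking upward) simply produces vectors with positive dy; the same computation applies.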
Second, receive a start-adjustment instruction, and move the current mandible feature points, according to their movement vectors, to the post-movement mandible feature points matching the adjustment instruction.
Optionally, the following steps are further included:
Third, determine the mandible adjustment region according to the post-movement mandible feature points and the current feature points.
After the mandible feature points are moved, the region between their positions before and after the movement can be determined; this region is the adjusted region, as shown in Figure 4. The correspondence between a special-effect adjustment index and the feature-point movement distance may be preconfigured according to actual needs. Based on correspondences for different adjustment indices, different processing effects can be obtained after the corresponding mandible-adjustment special-effect processing is applied to the mandible in the facial image, further improving the user's interactive experience.
Fourth, render the mandible adjustment region to obtain the adjusted facial mandible image.
Here, rendering includes rendering of skin tone, smoothness, light and shadow values, and so on. Image-rendering techniques are not elaborated here; in a specific implementation, the rendering operation is carried out automatically after the region adjustment, and the result is displayed in real time in the image preview region.
An optional rendering embodiment is as follows: a correspondence between each mandible feature point and a texture coordinate is preset. After the mandible feature points are adjusted, the target point coordinates of the adjusted mandible region can be determined from the adjusted feature points; then, based on the preconfigured correspondence between feature points and texture coordinates, and the texture coordinates of the mandible feature points, the texture coordinates corresponding to the target point coordinates are determined, and the textures at those texture coordinates are attached to the corresponding positions of the adjusted mandible region, yielding an effect image corresponding to the mandible-adjustment special effect.
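The texture-lookup idea above can be sketched minimally. The data structures here are assumptions for illustration: a preconfigured mapping from feature-point positions to texture (UV) coordinates, and linear interpolation along a segment between two feature points to obtain the UV coordinate of an intermediate target point in the adjusted region:

```python
# Illustrative sketch, not the patent's prescribed method: interpolate the
# texture (UV) coordinate of a target point lying between two mandible
# feature points whose UV coordinates are preconfigured.

def interpolate_uv(p, p0, p1, uv0, uv1):
    """UV coordinate of point p lying on the segment p0-p1."""
    seg_x = p1[0] - p0[0]
    t = (p[0] - p0[0]) / seg_x if seg_x else 0.0
    return (uv0[0] + t * (uv1[0] - uv0[0]),
            uv0[1] + t * (uv1[1] - uv0[1]))

# Assumed preconfigured feature-point -> UV correspondence.
feature_to_uv = {(10, 35): (0.25, 0.5), (30, 35): (0.75, 0.5)}
mid = interpolate_uv((20, 35), (10, 35), (30, 35),
                     feature_to_uv[(10, 35)], feature_to_uv[(30, 35)])
assert mid == (0.5, 0.5)
```

A full implementation would interpolate over the whole region (e.g. per triangle with barycentric weights), but the one-dimensional case shows the correspondence being applied.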
In an embodiment of the present disclosure, the special-effect selection operation further includes a filter-addition operation by the user for a filter effect; in this case, the method may further include:
Receiving the filter-addition operation;
Adding the filter effect to the facial image to be processed according to the filter-addition operation.
Here, to meet the user's demand for further beautification of the same facial image to be processed, a filter-effect addition function may also be provided; that is, a filter-addition operation by the user adds the selected filter effect to the facial image currently being processed. Filter effects may be of multiple types, such as a fresh filter, a pale filter, and a natural filter.
In practical applications, the filter-addition operation may be realized by specific user actions on the special-effect selection interface; for example, sliding left or right over the region of the facial image to be processed in the selection interface can add or switch filter effects.
In an embodiment of the present disclosure, the method may further include:
Receiving a special-effect removal operation;
Removing, according to the removal operation, the mandible-adjustment effect and/or filter effect that has been added to the facial image to be processed.
In practical applications, different effect-cancellation strategies may be configured as needed; for example, effects may be removed according to the order in which they were added to the facial image to be processed, or all added mandible-adjustment effects may be removed at once.
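The two cancellation strategies just described can be sketched with a simple effect stack. The class and effect names are illustrative only, assuming effects are recorded in the order they were added:

```python
# Hedged sketch of the two effect-cancellation strategies: remove effects
# one at a time in reverse addition order, or clear all added effects.

class EffectStack:
    def __init__(self):
        self.effects = []          # effects in the order they were added

    def add(self, effect):
        self.effects.append(effect)

    def remove_last(self):
        """Strategy 1: remove by addition order (last added, first removed)."""
        return self.effects.pop() if self.effects else None

    def remove_all(self):
        """Strategy 2: remove every added effect at once."""
        self.effects.clear()

s = EffectStack()
s.add("mandible-adjust")
s.add("filter:fresh")
assert s.remove_last() == "filter:fresh"
s.remove_all()
assert s.effects == []
```

Either strategy satisfies the removal operation in the text; which one is used would be a preconfigured choice.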
In an embodiment of the present disclosure, receiving the user's special-effect selection operation on the facial image to be processed may include:
Receiving the user's special-effect trigger operation on the facial image to be processed;
In response to the trigger operation, displaying a special-effect selection interface, the selection interface showing a mandible-adjustment effect option and the facial image to be processed;
Receiving the special-effect selection operation through the selection interface.
Here, the special-effect trigger operation indicates that the user wants to add an effect to the mandible of the facial image to be processed, i.e. it is the action the user performs to start adding the mandible-adjustment effect. Its concrete form may be configured as needed; for example, it may be a trigger operation by the user at a specific position on the client's display interface, or a specific virtual button. In this scheme, after the trigger operation is detected, the special-effect selection interface is displayed based on the user's operation, allowing the user to perform the mandible-adjustment selection operation on that interface.
In an embodiment of the present disclosure, the facial image to be processed is a current facial image captured in real time, or a facial image chosen by the user from a local image library.
In practical applications, the facial image to be processed may be a video frame acquired in real time by a terminal device with a camera function, or an image selected from an image library. The image library may be stored locally or on a server; if stored on a server, the facial image to be processed is an image obtained by sending an image-acquisition request to the server.
Step S108: judge whether the multiple facial images to be processed include facial images of the same person.
Using image-recognition technology, features such as the eyes, nose, mouth, and face shape are identified from the facial images; recognition may be based on the layout of multiple positions or on the specific features of a single position. The details of the recognition technique are not elaborated; any existing technology capable of image recognition falls within the scope of the present disclosure. The recognized facial images are compared one by one with the image being processed to judge whether they show the same face, and matches are marked.
Specifically, after the processing of a single image is completed, the processing device automatically scans the images under the same root in the operating interface (such as the same folder or the same photo viewer) for facial images of the same person. If multiple facial images of the same person are found, a prompt interface is presented on the display interface for the user to choose whether to synchronize the processing. The prompt interface may take the form of a pop-up dialog box, or the images belonging to the same person may be selected automatically and the special-effect processing result synchronized, or not, after a further operation instruction is received.
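A minimal sketch of the grouping in step S108 follows. The patent does not prescribe a particular recognition algorithm, so `same_person` here is a stand-in for any face-comparison routine (eyes, nose, mouth, face shape); the grouping logic is what the step requires:

```python
# Illustrative sketch: group the images to be processed by person, using an
# assumed pairwise comparison function `same_person(img_a, img_b) -> bool`.

def group_same_person(images, same_person):
    groups = []
    for img in images:
        for group in groups:
            if same_person(group[0], img):
                group.append(img)   # same face as this group's representative
                break
        else:
            groups.append([img])    # first image of a new person
    return groups

# Toy comparison: images with the same name prefix are the "same person".
images = ["alice_1", "alice_2", "bob_1"]
groups = group_same_person(
    images, lambda a, b: a.split("_")[0] == b.split("_")[0])
assert groups == [["alice_1", "alice_2"], ["bob_1"]]
```

Any group containing more than one image would trigger the prompt interface described above.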
Step S110: if facial images of the same person are included, synchronize the special-effect processing result to the facial images of the same person.
After the user chooses to synchronize the special-effect processing result, the device receives an instruction confirming the synchronization and synchronizes the result to the facial images of the same person. In this way, all faces belonging to the same person undergo the same special-effect operation. Furthermore, the scope of the synchronization can be chosen freely; for example, given 100 facial images of the same person, the user may choose any number of them, or any few of them, to synchronize, adapting to the needs of different occasions.
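Step S110 can be sketched as follows, with illustrative names: after confirmation, the recorded effect parameters are applied to a user-selected subset of the same-person images (or to all of them when no subset is chosen):

```python
# Hedged sketch of synchronization: apply the recorded special-effect
# parameters to any chosen subset of the same-person images.

def sync_effect(effect_params, same_person_images, selected_indices=None):
    targets = (same_person_images if selected_indices is None
               else [same_person_images[i] for i in selected_indices])
    # Applying an effect is represented here as pairing image and params.
    return [(img, effect_params) for img in targets]

faces = [f"face_{i}" for i in range(100)]   # e.g. 100 images of one person
result = sync_effect({"mandible": "shrink"}, faces, selected_indices=[0, 2])
assert result == [("face_0", {"mandible": "shrink"}),
                  ("face_2", {"mandible": "shrink"})]
```

Passing no `selected_indices` synchronizes all images of that person, matching the default behavior described in the text.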
With the above method of the present disclosure, when the user needs to process the mandible in a facial image, the user obtains an image captured in real time or an image stored on the device and triggers the special-effect operation on the mandible position of the facial image on the display interface, whereby special-effect processing of the mandible region of the facial image is achieved; at the same time, the operation on one image can be synchronized to other images, improving the efficiency of beautification. With this scheme, the facial mandible images of one or more persons can be repaired aesthetically in an easy-to-use way, with a natural repair effect obtained through background rendering, correcting the aesthetic defect of a protruding mandible. The entire repair can be completed with one-key operation, without the user manually performing complicated editing of the mandible region of the facial image, which greatly reduces the time of mandible-adjustment processing, improves the user's interactive experience and execution efficiency, and has high market value.
Embodiment 2
This embodiment follows on from Embodiment 1 and is used to implement the method steps described in Embodiment 1. Terms with the same names have the same meanings and the same technical effects as in Embodiment 1, and are not described again here.
As shown in Figure 5, according to a specific embodiment of the present disclosure, a facial image processing apparatus is provided, including: an acquiring unit 502, a receiving unit 504, a processing unit 506, a judging unit 508, and a synchronization unit 510.
Acquiring unit 502: used to obtain multiple facial images to be processed.
Receiving unit 504: used to receive the user's local special-effect operation on one of the facial images to be processed, the local special-effect operation including the user's operation of a facial mandible-adjustment special effect.
Processing unit 506: used to perform mandible-adjustment special-effect processing on the one facial image to be processed according to the special-effect operation on the facial mandible adjustment.
Optionally, as shown in Figure 6, the processing unit 506 includes:
First determination unit 601: determines the coordinate origin value of the facial image to be processed.
Second determination unit 602: determines the current feature points of the mandible in the facial image to be processed, and determines the current feature values of those feature points according to the coordinate origin value.
Special-effect unit 603: performs special-effect adjustment processing on the mandible according to the current feature values.
Optionally, the special-effect unit 603 includes:
First, determining the movement vectors of the current feature points according to the current feature values and the local special-effect operation, the movement vectors including a current movement direction and a current movement distance.
For example, the start positions of the local special-effect operation are determined from the current feature values, e.g. coordinates b1(x′1, y′1), b2(x′2, y′2), b3(x′3, y′3); the end positions of the mandible feature points are determined from the local special-effect operation, e.g. coordinates a1(x1, y1), a2(x2, y2), a3(x3, y3); and the movement vectors of the current feature points are determined from the start positions and the end positions.
As above, the current feature values of the current feature points are determined to be b1(x′1, y′1), b2(x′2, y′2), b3(x′3, y′3), and a local special-effect operation stretching the current feature points of the mandible downward is detected, the stretch direction being straight down and converging on the end-point feature values a1(x1, y1), a2(x2, y2), a3(x3, y3). The movement vectors of the current feature points are determined from the above current feature values and the local special-effect operation. Specifically:
The movement direction of the current mandible feature points determined by the local operation is a downward stretch, and the movement distance of each current feature point is the difference between its current feature value and its feature value after the movement. The movement vectors of the mandible can thus be determined as the vectors b1a1, b2a2, b3a3, with current movement distances |b1a1|, |b2a2|, |b3a3|.
The adjustment direction is not fixed; the reverse adjustment, such as shrinking upward, may also be chosen.
Second, receiving a start-adjustment instruction and moving the current mandible feature points, according to their movement vectors, to the post-movement mandible feature points matching the adjustment instruction.
Optionally, the following steps are further included:
Third, determining the mandible adjustment region according to the post-movement mandible feature points and the current feature points.
After the mandible feature points are moved, the region between their positions before and after the movement can be determined; this region is the adjusted region, as shown in Figure 4. The correspondence between a special-effect adjustment index and the feature-point movement distance may be preconfigured according to actual needs; based on correspondences for different adjustment indices, different processing effects can be obtained after the corresponding mandible-adjustment special-effect processing is applied to the mandible in the facial image, further improving the user's interactive experience.
Fourth, rendering the mandible adjustment region to obtain the adjusted facial mandible image.
Here, rendering includes rendering of skin tone, smoothness, light and shadow values, and so on. Image-rendering techniques are not elaborated here; in a specific implementation, the rendering operation is carried out automatically after the region adjustment, and the result is displayed in real time in the image preview region.
In an embodiment of the present disclosure, the special-effect selection operation further includes a filter-addition operation by the user for a filter effect; in this case, the apparatus may further include:
Adding unit: receives the filter-addition operation, and adds the filter effect to the facial image to be processed according to the filter-addition operation.
In practical applications, the filter-addition operation may be realized by specific user actions on the special-effect selection interface; for example, sliding left or right over the region of the facial image to be processed in the selection interface can add or switch filter effects.
In an embodiment of the present disclosure, the apparatus may further include:
Removal unit: receives a special-effect removal operation and, according to it, removes the mandible-adjustment effect and/or filter effect added to the facial image to be processed.
In an embodiment of the present disclosure, receiving the user's special-effect selection operation on the facial image to be processed may include:
Receiving the user's special-effect trigger operation on the facial image to be processed;
In response to the trigger operation, displaying a special-effect selection interface, the selection interface showing a mandible-adjustment effect option and the facial image to be processed;
Receiving the special-effect selection operation through the selection interface.
In an embodiment of the present disclosure, the facial image to be processed is a current facial image captured in real time, or a facial image chosen by the user from a local image library.
Judging unit 508: used to judge whether the multiple facial images to be processed include facial images of the same person.
Synchronization unit 510: used to synchronize the special-effect processing result to the facial images of the same person if such images are included.
With the above apparatus of the present disclosure, when the user needs to process the mandible in a facial image, the user obtains an image captured in real time or an image stored on the device and triggers the special-effect operation on the mandible position of the facial image on the display interface, whereby special-effect processing of the mandible region of the facial image is achieved; at the same time, the operation on one image can be synchronized to other images, improving the efficiency of beautification. With this scheme, the facial mandible images of one or more persons can be repaired aesthetically in an easy-to-use way, with a natural repair effect obtained through background rendering, correcting the aesthetic defect of a protruding mandible. The entire repair can be completed with one-key operation, without the user manually performing complicated editing of the mandible region of the facial image, which greatly reduces the time of mandible-adjustment processing, improves the user's interactive experience and execution efficiency, and has high market value.
Embodiment 3
As shown in Figure 7, this embodiment provides an electronic device for image processing, the electronic device including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor is able to carry out the method steps described in the embodiments above.
Embodiment 4
An embodiment of the present disclosure provides a non-volatile computer storage medium storing computer-executable instructions which, when executed, can perform the method steps described in the embodiments above.
Embodiment 5
Referring now to Fig. 7, a structural schematic diagram of an electronic device suitable for implementing an embodiment of the present disclosure is shown. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and vehicle-mounted terminals (such as in-car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device 700 shown in Fig. 7 is only an example and should not impose any restriction on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device may include a processing device (such as a central processing unit or graphics processor) 701, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. Various programs and data required by the electronic device are also stored in the RAM 703. The processing device 701, the ROM 702, and the RAM 703 are connected to one another through a bus 706; an input/output (I/O) interface 706 is also connected to the bus 706.
In general, the following devices may be connected to the I/O interface 706: input devices 706 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; output devices 706 such as a liquid crystal display (LCD), speaker, or vibrator; storage devices 708 such as a magnetic tape or hard disk; and a communication device 706, which may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows an electronic device with various components, it should be understood that implementing or providing all of the components shown is not required; more or fewer components may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 706, installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing device 701, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to electric wire, optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code containing one or more executable instructions for implementing a specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings; for example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, where the name of a unit does not, under certain circumstances, constitute a limitation of the unit itself.
Claims (10)
1. A facial image processing method, characterized by comprising:
obtaining multiple facial images to be processed;
receiving a user's local special-effect operation on one of the facial images to be processed, the local special-effect operation including the user's operation of a facial mandible-adjustment special effect;
performing mandible-adjustment special-effect processing on the one facial image to be processed according to the special-effect operation on the facial mandible adjustment;
judging whether the multiple facial images to be processed include facial images of the same person;
if facial images of the same person are included, synchronizing the special-effect processing result to the facial images of the same person.
2. The method according to claim 1, characterized in that performing mandible-adjustment special-effect processing on the one facial image to be processed according to the special-effect operation on the facial mandible adjustment comprises:
determining a coordinate origin value of the one facial image to be processed;
determining current feature points of the mandible in the one facial image to be processed, and determining current feature values of the current feature points according to the coordinate origin value;
performing special-effect adjustment processing on the mandible according to the current feature values.
3. The method according to claim 2, characterized in that performing special-effect adjustment processing on the mandible according to the current feature values comprises:
determining movement vectors of the current feature points according to the current feature values and the local special-effect operation, the movement vectors including a current movement direction and a current movement distance;
receiving a start-adjustment instruction, and moving the current mandible feature points, according to their movement vectors, to post-movement mandible feature points matching the adjustment instruction;
determining a mandible adjustment region according to the post-movement mandible feature points and the current feature points;
rendering the mandible adjustment region to obtain an adjusted facial mandible image.
4. The method according to claim 3, wherein determining the movement vector of each current feature point according to the current feature values and the local special effect operation comprises:
determining a start position of the local special effect operation according to the current feature values;
determining an end position of the mandibular feature point according to the local special effect operation;
determining the movement vector of the current feature point according to the start position and the end position.
5. The method according to claim 4, wherein determining the movement vector of the current feature point according to the start position and the end position comprises:
setting the movement direction of the current feature point to vertically downward or vertically upward; and
setting the movement distance of the current feature point to the difference between the start position coordinate and the end position coordinate.
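Claims 4 and 5 pin the movement vector down to a vertical direction and a coordinate difference. Assuming the usual image convention where the y coordinate grows downward (an assumption, not stated in the claims), this could look like:

```python
def movement_vector(start_y, end_y):
    """Claims 4-5: the movement direction is vertically up or down and
    the movement distance is the difference between the start position
    coordinate and the end position coordinate."""
    distance = abs(start_y - end_y)
    direction = "down" if end_y > start_y else "up"  # image y grows downward
    return direction, distance
```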
6. The method according to claim 1, wherein receiving a user's local special effect operation on one of the facial images to be processed, the local special effect operation comprising a special effect operation by the user for adjusting the face mandible, comprises:
receiving a trigger action by the user for adjusting the face mandible of the one facial image to be processed;
in response to the trigger action, displaying a sliding interface identifier, the sliding interface displaying a face mandibular adjustment parameter, the face mandibular adjustment parameter being used to identify the mandibular adjustment intensity.
7. The method according to claim 1, wherein, if the facial images to be processed include the same person, synchronizing the special effect processing result to the facial images to be processed of the same person comprises:
if the facial images to be processed include the same person, providing a prompt interface for the user to choose whether to perform synchronization processing;
if an instruction confirming the synchronization processing is received, synchronizing the special effect processing result to the facial images to be processed of the same person.
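The synchronization flow of claim 7 can be sketched as follows, with `confirm` standing in for the prompt interface and `effect` for the mandibular special effect processing; both are hypothetical callables introduced here for illustration, not part of the claim:

```python
def synchronize_effect(images, same_person_ids, effect, confirm):
    """Claim 7: when several images show the same person, ask the user
    via the prompt (confirm) and, on confirmation, apply the special
    effect processing result to each matching image."""
    if same_person_ids and confirm():
        for idx in same_person_ids:
            images[idx] = effect(images[idx])
    return images
```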
8. A facial image processing apparatus, comprising:
an acquiring unit, configured to acquire multiple facial images to be processed;
a receiving unit, configured to receive a user's local special effect operation on one of the facial images to be processed, the local special effect operation comprising a special effect operation by the user for adjusting the face mandible;
a processing unit, configured to perform mandibular adjustment special effect processing on the one facial image to be processed according to the special effect operation for adjusting the face mandible;
a judging unit, configured to judge whether the multiple facial images to be processed include facial images of the same person;
a synchronization unit, configured to synchronize the special effect processing result to the facial images to be processed of the same person if the facial images to be processed include the same person.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910583758.5A CN110363718A (en) | 2019-06-28 | 2019-06-28 | Face image processing process, device, medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110363718A true CN110363718A (en) | 2019-10-22 |
Family
ID=68217645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910583758.5A Pending CN110363718A (en) | 2019-06-28 | 2019-06-28 | Face image processing process, device, medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363718A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112052806A (en) * | 2020-09-10 | 2020-12-08 | 广州繁星互娱信息科技有限公司 | Image processing method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109427038A (en) * | 2017-08-30 | 2019-03-05 | 涂世满 | A kind of cell phone pictures display methods and system |
CN109544444A (en) * | 2018-11-30 | 2019-03-29 | 深圳市脸萌科技有限公司 | Image processing method, device, electronic equipment and computer storage medium |
CN109584152A (en) * | 2018-11-30 | 2019-04-05 | 深圳市脸萌科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN109614902A (en) * | 2018-11-30 | 2019-04-12 | 深圳市脸萌科技有限公司 | Face image processing process, device, electronic equipment and computer storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||