CN109544444A - Image processing method, device, electronic equipment and computer storage medium - Google Patents
- Publication number
- CN109544444A (application CN201811458390.1A)
- Authority
- CN
- China
- Prior art keywords
- nose
- characteristic point
- modification information
- parameter modification
- facial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: receiving a nose adjustment instruction for a facial image to be processed, the instruction containing parameter modification information for the nose region of the image; in response to the parameter modification information, detecting nose feature points in the nose region of the facial image; obtaining moved nose feature points according to the nose feature points; and determining the adjusted nose region according to the moved feature points, thereby obtaining the adjusted facial image. In the disclosed embodiments, once a nose adjustment instruction is received, the nose region of the facial image is adjusted according to the parameter modification information carried in the instruction. This realizes one-tap adjustment of the nose region in the image, without the user manually editing the nose region of the facial image, which reduces processing time and improves the user's interactive experience.
Description
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a computer storage medium.
Background art
With facial beautification tools, a user can apply beautification effects to an image to improve its appearance. However, when using existing beautification tools, the user must manually adjust each facial region in the image before the desired image is obtained.
For example, if the user wants to adjust the shape of the nose, the user has to manually edit the nose area to obtain an image with the adjusted nose shape. For users unskilled at manual image editing, beautifying an image by manual adjustment rarely achieves the intended result and is time-consuming, which degrades the user experience.
Summary of the invention
The purpose of the present disclosure is to address at least one of the above technical deficiencies and to improve the user experience. The technical solutions adopted by the present disclosure are as follows:
In a first aspect, the present disclosure provides an image processing method, comprising:
obtaining a facial image to be processed;
receiving a nose adjustment instruction for the facial image, the instruction containing parameter modification information for the nose region of the image;
in response to the parameter modification information, detecting nose feature points in the nose region of the facial image;
obtaining moved nose feature points according to the nose feature points;
determining the adjusted nose region according to the moved feature points, and rendering the adjusted nose region to obtain the adjusted facial image;
displaying the adjusted facial image.
In an embodiment of the disclosure, obtaining the moved nose feature points according to the nose feature points comprises:
determining the current movement vector of each nose feature point according to the parameter modification information;
controlling the movement of each nose feature point according to its current movement vector to obtain the moved feature points.
In an embodiment of the disclosure, after the nose feature points in the nose region of the facial image are detected, the method further comprises:
performing interpolation on the nose feature points to obtain interpolated feature points in the nose region of the facial image; the nose feature points further include the interpolated feature points.
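By way of illustration only, the interpolation step above can be sketched as follows. The disclosure does not specify the interpolation scheme; linear interpolation between consecutive points and the sample count are assumptions of this sketch.

```python
# Hypothetical sketch: densify the detected nose feature points by linear
# interpolation between consecutive (x, y) points. The scheme and the number
# of samples per segment are assumptions, not taken from the disclosure.

def interpolate_feature_points(points, samples_per_segment=2):
    """Insert `samples_per_segment` evenly spaced points between each
    consecutive pair of (x, y) feature points and return the denser list."""
    dense = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dense.append((x0, y0))
        for k in range(1, samples_per_segment + 1):
            t = k / (samples_per_segment + 1)
            dense.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    dense.append(points[-1])
    return dense
```

The moved-point computation then operates on the combined set of detected and interpolated points, as the embodiment states.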
In an embodiment of the disclosure, the current movement vector includes a current moving distance and a current moving direction, and the nose adjustment instruction further includes an adjustment index for the parameter modification information; the adjustment index identifies the adjustment intensity of the parameter modification information, and the adjustment intensity includes the current moving distance;
determining the current movement vector of a nose feature point comprises:
determining the current moving distance according to the adjustment index and a preconfigured correspondence between adjustment indexes and feature-point moving distances.
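A minimal sketch of such a preconfigured correspondence follows. The index range (0–100) and the distance values are illustrative assumptions; the disclosure only states that a correspondence between adjustment index and moving distance is preconfigured.

```python
# Hypothetical preconfigured correspondence between the adjustment index
# carried in the nose adjustment instruction and the feature-point moving
# distance. Anchor values and the 0-100 index range are assumptions.

INDEX_TO_DISTANCE = {0: 0.0, 25: 2.0, 50: 4.0, 75: 6.0, 100: 8.0}

def current_moving_distance(adjust_index):
    """Look up the moving distance for an adjustment index, linearly
    interpolating between the preconfigured anchor values and clamping
    indexes outside the configured range."""
    keys = sorted(INDEX_TO_DISTANCE)
    if adjust_index <= keys[0]:
        return INDEX_TO_DISTANCE[keys[0]]
    if adjust_index >= keys[-1]:
        return INDEX_TO_DISTANCE[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= adjust_index <= hi:
            t = (adjust_index - lo) / (hi - lo)
            return INDEX_TO_DISTANCE[lo] + t * (INDEX_TO_DISTANCE[hi] - INDEX_TO_DISTANCE[lo])
```

A higher index thus yields a larger moving distance, i.e. a stronger adjustment intensity.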
In an embodiment of the disclosure, determining the current movement vector of a nose feature point comprises:
determining, according to a preconfigured correspondence between parameter modification information and moving directions, the moving direction corresponding to the nose effect; the moving direction corresponding to the parameter modification information is the current moving direction.
In an embodiment of the disclosure, the parameter modification information of the nose region includes one of nose-lengthening parameter modification information and nose-shortening parameter modification information, and/or one of nose-widening parameter modification information and nose-narrowing parameter modification information;
if the parameter modification information of the nose region includes the nose-lengthening parameter modification information, the current moving direction is downward along the nose midline;
if the parameter modification information of the nose region includes the nose-shortening parameter modification information, the current moving direction is upward along the nose midline;
if the parameter modification information of the nose region is the nose-widening parameter modification information, the moving direction is away from the nose midline;
if the parameter modification information of the nose region is the nose-narrowing parameter modification information, the moving direction is toward the nose midline.
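The direction rules above can be sketched as a per-point direction function. The English effect names and the image-coordinate convention (y increasing downward) are assumptions of this sketch; the disclosure only specifies directions relative to the nose midline.

```python
# Illustrative sketch of the preconfigured mapping from parameter
# modification information to the current moving direction, as a unit
# vector per feature point. Effect names ("lengthen", "shorten", "widen",
# "narrow") are assumed English labels; y increases downward.

def moving_direction(modification, point_x, midline_x):
    """Return a (dx, dy) unit direction for one feature point located at
    x-coordinate `point_x`, relative to the nose midline at `midline_x`."""
    if modification == "lengthen":   # move down along the nose midline
        return (0.0, 1.0)
    if modification == "shorten":    # move up along the nose midline
        return (0.0, -1.0)
    if modification == "widen":      # move away from the nose midline
        return (1.0, 0.0) if point_x >= midline_x else (-1.0, 0.0)
    if modification == "narrow":     # move toward the nose midline
        return (-1.0, 0.0) if point_x >= midline_x else (1.0, 0.0)
    raise ValueError(f"unknown modification: {modification}")
```

Note that widening and narrowing flip sign across the midline, so the two sides of the nose move symmetrically.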
In an embodiment of the disclosure, receiving the nose adjustment instruction for the facial image comprises:
receiving a nose adjustment trigger instruction for the facial image;
in response to the nose adjustment trigger instruction, displaying a nose adjustment interface showing nose parameter modification information options and the facial image;
receiving the nose adjustment instruction through the nose adjustment interface.
In a second aspect, the present disclosure provides an image processing apparatus, comprising:
an image acquisition module for obtaining a facial image to be processed;
an instruction receiving module for receiving a nose adjustment instruction for the facial image, the instruction containing parameter modification information for the nose region of the image;
a feature point acquisition module for detecting, in response to the parameter modification information, the nose feature points in the nose region of the facial image and obtaining the moved nose feature points according to them;
an image processing module for determining the adjusted nose region according to the moved feature points and rendering the adjusted region to obtain the adjusted facial image;
an image display module for displaying the adjusted facial image.
In an embodiment of the disclosure, when obtaining the moved nose feature points according to the nose feature points, the feature point acquisition module is specifically configured to:
determine the current movement vector of each nose feature point according to the parameter modification information;
control the movement of each nose feature point according to its current movement vector to obtain the moved feature points.
In an embodiment of the disclosure, after detecting the nose feature points in the nose region of the facial image, the feature point acquisition module is further configured to:
perform interpolation on the nose feature points to obtain interpolated feature points in the nose region of the facial image; the nose feature points further include the interpolated feature points.
In an embodiment of the disclosure, the current movement vector includes a current moving distance and a current moving direction, and the nose adjustment instruction further includes an adjustment index for the parameter modification information; the adjustment index identifies the adjustment intensity of the parameter modification information, and the adjustment intensity includes the current moving distance;
when determining the current movement vector of a nose feature point, the feature point acquisition module is specifically configured to:
determine the current moving distance according to the adjustment index and a preconfigured correspondence between adjustment indexes and feature-point moving distances.
In an embodiment of the disclosure, when obtaining the moved nose feature points according to the nose feature points, the feature point acquisition module is specifically configured to:
determine, according to a preconfigured correspondence between parameter modification information and moving directions, the moving direction corresponding to the nose effect; the moving direction corresponding to the parameter modification information is the current moving direction.
In an embodiment of the disclosure, the parameter modification information of the nose region includes one of nose-lengthening parameter modification information and nose-shortening parameter modification information, and/or one of nose-widening parameter modification information and nose-narrowing parameter modification information;
if the parameter modification information of the nose region includes the nose-lengthening parameter modification information, the current moving direction is downward along the nose midline;
if the parameter modification information of the nose region includes the nose-shortening parameter modification information, the current moving direction is upward along the nose midline;
if the parameter modification information of the nose region is the nose-widening parameter modification information, the moving direction is away from the nose midline;
if the parameter modification information of the nose region is the nose-narrowing parameter modification information, the moving direction is toward the nose midline.
In an embodiment of the disclosure, when receiving the nose adjustment instruction for the facial image, the instruction receiving module is specifically configured to:
receive a nose adjustment trigger instruction for the facial image;
in response to the nose adjustment trigger instruction, display a nose adjustment interface showing nose parameter modification information options and the facial image;
receive the nose adjustment instruction through the nose adjustment interface.
In a third aspect, the present disclosure provides an electronic device, comprising:
a processor and a memory;
the memory for storing computer operation instructions;
the processor for executing, by invoking the computer operation instructions, the method shown in any embodiment of the first aspect of the disclosure.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing computer program instructions that cause a computer to perform the method shown in any embodiment of the first aspect of the disclosure.
The technical solutions provided by the embodiments of the present disclosure have the following beneficial effects:
In the embodiments of the present disclosure, once a nose adjustment instruction is received, the nose region of the facial image is adjusted according to the parameter modification information carried in the instruction. This realizes one-tap adjustment of the nose region in the image, without the user manually editing the nose region of the facial image, which reduces processing time and improves the user's interactive experience.
Detailed description of the invention
It, below will be to institute in embodiment of the present disclosure description in order to illustrate more clearly of the technical solution in the embodiment of the present disclosure
Attached drawing to be used is needed to be briefly described.
Fig. 1 is a flow diagram of an image processing method provided by an embodiment of the present disclosure;
Fig. 2a is a schematic diagram of a display interface in an embodiment of the present disclosure;
Fig. 2b is a schematic diagram of a nose adjustment trigger interface in an embodiment of the present disclosure;
Fig. 3a is a schematic diagram of nose feature points in an embodiment of the present disclosure;
Fig. 3b is a schematic diagram of further nose feature points in an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an adjustment-index adjustment interface in an embodiment of the present disclosure;
Fig. 5a is a schematic diagram of the processing effect of the nose-narrowing parameter modification information in an embodiment of the present disclosure;
Fig. 5b is a schematic diagram of the nose-lengthening parameter modification information in an embodiment of the present disclosure;
Fig. 6 is a structural diagram of an image processing apparatus in an embodiment of the present disclosure;
Fig. 7 is a structural diagram of an electronic device for image processing in an embodiment of the present disclosure.
Detailed description of embodiments
Embodiments of the disclosure are described in detail below, with examples shown in the accompanying drawings, in which identical or similar reference numbers denote identical or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and serve only to explain the technical solutions of the disclosure; they are not to be construed as limiting the disclosure.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an" and "the" used herein may also include the plural. It should be further understood that the word "comprising" used in this specification means that the stated features, integers, steps, operations, elements and/or components are present, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. When an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The word "and/or" as used herein includes all or any units, and all combinations, of one or more of the associated listed items.
How the technical solutions of the disclosure solve the above technical problems is described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments. Embodiments of the disclosure are described below with reference to the drawings.
An embodiment of the disclosure provides an image processing method. As shown in Fig. 1, the method may include:
Step S110: obtain a facial image to be processed.
The facial image to be processed is a facial image captured in real time, or a facial image the user selects from a local image library.
Step S120: receive a nose adjustment instruction for the facial image, the instruction containing parameter modification information for the nose region of the image.
The parameter modification information of the nose region indicates the processing strategy for the nose area of the facial image, i.e. which parameter or parameters need to be modified. In embodiments of the disclosure, different parameter modification information can indicate different nose effects, and processing the facial image according to different parameter modification information yields effect images with different nose effects.
In an embodiment of the disclosure, the nose effects may include a nose-lengthening effect, a nose-shortening effect, a nose-widening effect, a nose-narrowing effect, and so on. In practical applications, these effects can also be classified into different nose effect types. For example, the four effects can be divided into two classes: an "enlarge" type, which may include the nose-lengthening effect and the nose-widening effect, and a "reduce" type, which may include the nose-shortening effect and the nose-narrowing effect.
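The grouping just described can be encoded as a simple lookup. The English effect names and type labels are assumed for illustration; the disclosure names the effects and types only in translated form.

```python
# A minimal sketch of the classification described above: the four nose
# effects grouped into an "enlarge" type and a "reduce" type. Dictionary
# keys are assumed English names for the effects in the disclosure.

NOSE_EFFECT_TYPES = {
    "lengthen": "enlarge",   # nose-lengthening effect
    "widen": "enlarge",      # nose-widening effect
    "shorten": "reduce",     # nose-shortening effect
    "narrow": "reduce",      # nose-narrowing effect
}

def effects_of_type(effect_type):
    """List, in sorted order, all effects belonging to the given type."""
    return sorted(e for e, t in NOSE_EFFECT_TYPES.items() if t == effect_type)
```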
Since different parameter modification information can indicate different nose effects, the nose-shortening parameter modification information corresponds to the nose-shortening effect above, the nose-lengthening parameter modification information corresponds to the nose-lengthening effect in the embodiments of the disclosure, the nose-widening parameter modification information corresponds to the nose-widening effect, and the nose-narrowing parameter modification information corresponds to the nose-narrowing effect.
In the embodiments of the disclosure, the nose adjustment instruction is the instruction sent through the display interface by a nose effect selection operation, where the nose effect selection operation indicates the nose effect the user selects to add to the facial image, i.e. the user's action of selecting a nose effect to add. The concrete form of the operation can be configured as needed; for example, the selection operation may be the user's click on the operating position corresponding to a nose effect on the display interface of a client application. In this scheme, after the nose effect selection operation is detected, the nose effect the user wants to add can be learned from the user's operation.
In practical applications, the user's nose effect selection operation can be received through an effect selection identifier of the client, whose concrete form can be configured according to actual needs; for example, it may be a designated trigger button or input box on the client interface, or a voice instruction from the user. Specifically, it may be the virtual buttons for the various nose effects shown on the client display interface: the user's click on any of these buttons is the user's nose effect selection operation. For instance, the user's click on the virtual button corresponding to the nose-lengthening effect is the user's nose effect selection operation; the operation includes the nose effect selected by the user, namely the nose-lengthening effect, indicating that the nose effect the user wants to add to the facial image is the nose-lengthening effect.
Step S130: in response to the parameter modification information, detect the nose feature points in the nose region of the facial image.
The specific method of detecting the nose feature points of the nose region, and which feature points need to be detected, can be preconfigured according to actual needs and are not specifically limited by embodiments of the disclosure. For example, in this scheme, which feature points of which positions need to be detected on the facial image can be configured based on the different parameter modification information.
For example, in one case, when the nose effect is the nose-lengthening effect, the nose features to detect may include the left and right feature points of the nose wings; the top, left, and bottom vertex feature points of the left nostril; the top, right, and bottom vertex feature points of the right nostril; and the feature point at the nose tip. When the nose effect is the nose-narrowing effect, the nose features to detect may include the left and right feature points of the nose wings; the top, left, and bottom vertex feature points of the left nostril; and the top, right, and bottom vertex feature points of the right nostril.
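The per-effect feature-point configuration above can be encoded as a table. All point identifiers here are hypothetical names; the disclosure describes wing, nostril-vertex, and nose-tip points but does not name them formally.

```python
# Hypothetical encoding of the preconfigured feature points to detect for
# each nose effect, per the example above. Point identifiers are assumed
# names introduced only for this sketch.

FEATURE_POINTS = {
    "lengthen": [
        "wing_left", "wing_right",
        "nostril_left_top", "nostril_left_outer", "nostril_left_bottom",
        "nostril_right_top", "nostril_right_outer", "nostril_right_bottom",
        "nose_tip",
    ],
    "narrow": [
        "wing_left", "wing_right",
        "nostril_left_top", "nostril_left_outer", "nostril_left_bottom",
        "nostril_right_top", "nostril_right_outer", "nostril_right_bottom",
    ],
}

def points_to_detect(effect):
    """Return the preconfigured feature points for one nose effect."""
    return FEATURE_POINTS[effect]
```

The nose tip is detected only for the lengthening effect, matching the example: narrowing moves the wings and nostrils toward the midline and does not need the tip.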
Step S140: obtain the moved nose feature points according to the nose feature points.
In the embodiments of the disclosure, obtaining the moved nose feature points according to the nose feature points comprises:
determining the current movement vector of each nose feature point according to the parameter modification information;
controlling the movement of each nose feature point according to its current movement vector to obtain the moved feature points.
When adjusting the nose feature points, the current movement vector of each feature point must be known, i.e. the moving direction and moving distance of that point. The current movement vector of each nose feature point may be a preconfigured movement vector; that is, the current moving distance of each feature point is configured as a default moving distance, and the current moving direction of each feature point is configured as a default moving direction. Specifically, when the current movement vector of each feature point is the preconfigured vector, the current movement vector applied to a feature point is consistent with the preconfigured vector regardless of the detected current position of that point. In other words, based on the preconfigured movement vector, when any nose feature point is adjusted, its moving distance and moving direction are identical to the default distance and default direction configured for that point. For different nose feature points, the preconfigured movement vectors may be the same or different.
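Under the preconfigured-vector reading above, step S140 reduces to a per-point translation, sketched below. The vector values and point names are assumptions; the disclosure only states that each point is shifted by its own preconfigured vector regardless of its detected position.

```python
# Minimal sketch of step S140 with preconfigured movement vectors: every
# named (x, y) feature point is shifted by its own preconfigured (dx, dy)
# vector, independent of where it was detected. Values are illustrative.

def move_feature_points(points, preconfigured_vectors):
    """Shift each named (x, y) point by its preconfigured (dx, dy) vector
    and return the dictionary of moved points."""
    return {
        name: (x + preconfigured_vectors[name][0],
               y + preconfigured_vectors[name][1])
        for name, (x, y) in points.items()
    }
```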
Step S150: determine the adjusted nose region according to the moved nose feature points, and render the adjusted nose region to obtain the adjusted facial image.
That is, once the moved nose feature points are obtained, the adjusted nose region can be determined from them; rendering based on the adjusted nose region then yields the adjusted facial image.
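A deliberately simplified sketch of determining the adjusted region follows. It stands in for the region-determination part of step S150 only: a real implementation would warp pixels within that region (e.g. by mesh deformation), which the disclosure does not detail, so that part is omitted here.

```python
# Assumption-laden sketch: take the adjusted nose region to be the
# axis-aligned bounding box of the moved feature points. This is only a
# stand-in for region determination; the actual pixel rendering/warping
# is not specified by the disclosure and is not implemented here.

def adjusted_region(moved_points):
    """Return (x0, y0, x1, y1), the bounding box of the moved points."""
    xs = [p[0] for p in moved_points]
    ys = [p[1] for p in moved_points]
    return (min(xs), min(ys), max(xs), max(ys))
```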
Step S160: display the adjusted facial image.
In this embodiment of the disclosure, once a nose adjustment instruction is received, the nose region of the facial image is adjusted according to the parameter modification information carried in the instruction. This realizes one-tap adjustment of the nose region in the image, without the user manually editing the nose region of the facial image, which reduces processing time and improves the user's interactive experience.
In an embodiment of the disclosure, in step S120, receiving the nose adjustment instruction for the facial image may include:
receiving a nose adjustment trigger instruction for the facial image;
in response to the nose adjustment trigger instruction, displaying a nose adjustment interface showing nose parameter modification information options and the facial image;
receiving the nose adjustment instruction through the nose adjustment interface.
In the embodiments of the disclosure, the nose adjustment trigger instruction is the instruction sent through the display interface by a nose effect trigger operation, where the nose effect trigger operation indicates that the user wants to add a nose effect to the facial image, i.e. the user's action that starts the addition of a nose effect. The concrete form of the operation can be configured as needed; for example, it may be a trigger operation on a specific operating position on the client display interface, or a specific virtual button. In this scheme, after the nose effect trigger operation is detected, the nose effect selection interface can be displayed based on the user's operation, so that the user can perform the nose effect selection operation on that interface.
In practical applications, user can be identified by the associated trigger of client and trigger the nose special efficacy trigger action, than
Such as the specified trigger button or input frame on client display interface, the phonetic order etc. of user can also be.For example, can one
It selects in mode, can be the virtual push button of " the nose adjustment " that shows on the display interface of client, user clicks the button
Operation be user nose special efficacy trigger action.
The nose adjustment interface is an interface for interaction between the electronic device and the user, which can receive the user's operations during the process of adding a nose special effect to the facial image to be processed.
As an example, Fig. 2a shows a schematic diagram of a nose adjustment triggering interface, i.e., a user interface, of a client on an electronic device. The interface displays the facial image to be processed (the facial image shown in the figure), and the virtual "nose adjustment" button shown on the interface is the nose special-effect trigger button. The user's click on the "nose adjustment" virtual button on the display interface is the user's nose special-effect triggering action; after this action is received, the nose adjustment interface can be displayed.
As an example, Fig. 2b shows a schematic diagram of another nose adjustment interface. A, B, C, D, and E in the figure are virtual buttons corresponding to different parameter modification information, and the user can determine the parameter modification information by clicking one of them. For example, if the user clicks virtual button A, the chosen parameter modification information is the parameter modification information corresponding to virtual button A; after this operation is received, corresponding special-effect processing can be performed on the facial image based on the parameter modification information in the operation.
The nose adjustment triggering interface and the nose adjustment interface may be the same user interface or different user interfaces.
In practical applications, the user can also change the parameter modification information in the same manner of operation, by changing the selected virtual button. To provide a better usage experience, after the user completes each nose special-effect selection operation, the image processing result corresponding to that operation can be shown to the user, so that the user can decide according to the displayed result whether to re-select the parameter modification information. For example, after the user selects virtual button A and the facial image to be processed is correspondingly processed according to the parameter modification information of virtual button A, the processing result is shown to the user. If the user is unsatisfied with the adjustment result, any of the parameter modification information of virtual buttons B, C, D, or E can be reselected, and the facial image to be processed is processed correspondingly again.
In practical applications, the nose feature points are usually discrete, which may make the displayed image look unsmooth during image processing. Therefore, in the embodiments of the present disclosure, after detecting the nasal region and the nose feature points of the facial image to be processed, the method further includes:
performing interpolation processing on the nose feature points to obtain interpolation feature points in the nasal region of the facial image to be processed, the nose feature points further including the interpolation feature points.
In the embodiments of the present disclosure, since interpolation processing is performed on the user's nose feature points to obtain interpolation feature points in the user's nasal region, the processed image can appear smoother and more natural. The specific interpolation method is not limited in the embodiments of the present disclosure; in practical applications, methods such as polynomial interpolation or trigonometric-function interpolation may be used, which are not described in detail here.
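As an illustration of the interpolation step, the sketch below inserts interpolation points between two detected nose feature points. Simple linear interpolation stands in for whatever method is actually configured (the disclosure leaves the method open); the coordinates and point names are illustrative.

```python
def interpolate_points(p1, p2, n=1):
    """Insert n evenly spaced points between feature points p1 and p2."""
    x1, y1 = p1
    x2, y2 = p2
    return [
        (x1 + (x2 - x1) * i / (n + 1), y1 + (y2 - y1) * i / (n + 1))
        for i in range(1, n + 1)
    ]

# e.g. one interpolation point between hypothetical nostril points d and e
d, e = (100.0, 210.0), (104.0, 230.0)
(k1,) = interpolate_points(d, e, n=1)
print(k1)  # (102.0, 220.0) -- the midpoint
```

In practice a smoother scheme (e.g., a spline along the nose contour) would be used, but the idea of densifying the discrete feature points is the same.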
When the nose feature points include both the detected nose feature points and the interpolation feature points obtained by interpolation, processing the image to be processed according to the user-selected parameter modification information can process the detected nose feature points and the interpolation feature points together. In practical applications, since the interpolation algorithm is usually pre-configured, for the same nose special effect the interpolation feature points obtained by interpolating the detected nose feature points corresponding to the parameter modification information are also fixed; therefore, the current movement vectors of the interpolation feature points can likewise be pre-configured.
In one example, as shown in Fig. 3a, the nose feature points may include the left feature point a and right feature point b of the upper nose wing; the upper apex feature point c, left apex feature point d, and lower apex feature point e of the left nostril; and the upper apex feature point f, right apex feature point g, and lower apex feature point h of the right nostril. Interpolating the nose feature points yields the interpolation feature points k1 and k2 shown in Fig. 3b.
In the embodiments of the present disclosure, current mobile vector includes current moving distance and current moving direction, nose adjustment
It further include the adjustment index of parameter modification information in instruction, adjustment index is used for the adjustment intensity of identification parameter modification information, adjusts
Whole intensity includes current moving distance;
Determine the current mobile vector of nose characteristic point, comprising:
Foundation adjusts index, and the corresponding relationship for the adjustment index and characteristic point moving distance being pre-configured, and determines currently
Moving distance.
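The index-to-distance lookup described above can be sketched as follows. The linear scaling and the maximum distance of 20 pixels are assumptions for illustration; the actual pre-configured correspondence may take any form.

```python
MAX_DISTANCE = 20.0  # assumed distance (in pixels) at adjustment index 100

def current_moving_distance(adjust_index, max_distance=MAX_DISTANCE):
    """Map an adjustment index in [0, 100] to a feature-point moving distance."""
    if not 0 <= adjust_index <= 100:
        raise ValueError("adjustment index must be in [0, 100]")
    return max_distance * adjust_index / 100.0

print(current_moving_distance(15))  # 3.0
```

A table-driven correspondence (one entry per index value) would serve equally well; the point is only that the user's index deterministically selects a distance.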
The adjustment index may be a numerical value or a numerical range configured according to practical adjustment needs, or any other index that can identify the adjustment intensity of the parameter modification information; the embodiments of the present disclosure do not limit its specific implementation.
Different adjustment indexes can correspond to different adjustment intensities. That is, for the same parameter modification information, when the user selects different adjustment indexes, the nasal region of the facial image to be processed can be processed with different adjustment intensities based on that parameter modification information. In the embodiment of the present disclosure, the adjustment intensity can be expressed by the moving distance of the nose feature points: different moving distances characterize different adjustment intensities and correspond to different processing effects.
In an optional solution, the adjustment index can be presented to the user in the form of a numerical-range list, where different values in the list represent different adjustment intensities, and the user selects an adjustment index through a sliding button on the client's display interface. Fig. 4 shows such a client display interface, where the image preview area can display the facial image (which may be the facial image to be processed, the facial image during processing, or the processed facial image), the numerical range corresponding to the adjustment index can be 0 to 100, and the user can select different adjustment indexes by sliding the adjustment-index button.
Since different values of the adjustment index can correspond to different adjustment intensities, a preview function can also be provided so that the user can see the adjustment-intensity effect of the selected index. When the user selects any adjustment index between 0 and 100, say 15, the current moving distance corresponding to that index is obtained based on the adjustment index 15, and the facial image to be processed is processed accordingly based on the current moving distance and current moving direction, i.e., the movement of the nose feature points is controlled and the correspondingly adjusted facial image is displayed in the image preview area.
The correspondence between adjustment indexes and feature-point moving distances can be pre-configured according to actual needs. In practical applications, since there are usually multiple kinds of parameter modification information, the correspondence between the adjustment index and the feature-point moving distance can differ for different parameter modification information. For example, when the parameter modification information is the nose-narrow parameter modification information, the correspondence between the adjustment index and the feature-point moving distance is correspondence 2 in a pre-configured correspondence set; when the parameter modification information is the nose-shorten parameter modification information, the correspondence is correspondence 3 in the pre-configured correspondence set. That is, different correspondences between adjustment indexes and feature-point moving distances can be configured for different parameter modification information. Of course, in practical applications, the correspondences for different parameter modification information can also all be set to be identical; for example, whether the parameter modification information is the nose-narrow or the nose-shorten parameter modification information, the correspondence between the adjustment index and the feature-point moving distance is always correspondence 3 in the pre-configured correspondence set.
In practical applications, for the same parameter modification information, when configuring the correspondence between adjustment indexes and feature-point moving distances, a separate correspondence can be configured for each nose feature point, i.e., each nose feature point has its own correspondence; alternatively, the same correspondence can be configured for some of the nose feature points, or the same correspondence can be configured for all of the nose feature points. That is, for the same parameter modification information, different correspondences between adjustment indexes and feature-point moving distances can be configured for different nose feature points.
In one example, when the nose parameter modification information is the nose-shorten parameter modification information, the correspondences between adjustment indexes and feature-point moving distances of all the nose feature points can each be configured as correspondence 3. Alternatively, the correspondence for the interpolation feature points among the nose feature points can be configured as correspondence 2, and the correspondence for the detected nose feature points as correspondence 3; for example, the correspondences of the upper apex feature point c, left apex feature point d, and lower apex feature point e of the left nostril and the upper apex feature point f, right apex feature point g, and lower apex feature point h of the right nostril shown in Fig. 3b are configured as correspondence 3, while the correspondences of the interpolation feature points k1 and k2 are configured as correspondence 2.
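A minimal sketch of such a per-feature-point configuration, assuming correspondence 2 and correspondence 3 are simple linear scalings (the scale factors are invented for illustration):

```python
# Assumed linear scalings standing in for correspondences 2 and 3.
CORRESPONDENCES = {
    2: lambda idx: 0.15 * idx,
    3: lambda idx: 0.20 * idx,
}

# Detected points c..h use correspondence 3; interpolation points k1, k2 use 2.
POINT_CORRESPONDENCE = {
    **{p: 3 for p in ("c", "d", "e", "f", "g", "h")},
    "k1": 2,
    "k2": 2,
}

def distance_for(point, adjust_index):
    """Look up a feature point's moving distance for a given adjustment index."""
    rule = CORRESPONDENCES[POINT_CORRESPONDENCE[point]]
    return rule(adjust_index)

print(distance_for("d", 10), distance_for("k1", 10))  # 2.0 1.5
```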
Further, since the current movement vector includes the current moving distance and the current moving direction, in addition to determining the current moving distance of the nose feature points, the current moving direction of the nose feature points also needs to be determined. Therefore, in the embodiment of the present disclosure, determining the current movement vector of the nose feature points includes:
determining, according to a pre-configured correspondence between parameter modification information and moving directions, the moving direction corresponding to the nose special effect; the moving direction corresponding to the parameter modification information is the current moving direction.
The correspondence between parameter modification information and moving directions can be pre-configured according to actual needs. For example, if the nose-shorten parameter modification information corresponds to the direction of moving upward along the vertical direction, then the vertically upward direction is the current moving direction of the nose-shorten parameter modification information.
Further, when configuring the correspondence between parameter modification information and moving directions, the same moving direction can be configured for all the nose feature points corresponding to a piece of parameter modification information, or the same moving direction can be configured for only some of them. Of course, in practical applications, for the same parameter modification information, the correspondences between adjustment indexes and moving distances of the nose feature points can also be configured identically while the moving directions are configured differently.
In one example, when the parameter modification information is the nose-narrow parameter modification information, the correspondences between adjustment indexes and feature-point moving distances of the upper apex feature point c, left apex feature point d, and lower apex feature point e of the left nostril and the upper apex feature point f, right apex feature point g, and lower apex feature point h of the right nostril shown in Fig. 3a can each be configured as correspondence 3. Specifically, the moving directions of the upper apex feature point c, left apex feature point d, and lower apex feature point e of the left nostril are configured as moving to the right, and the moving directions of the upper apex feature point f, right apex feature point g, and lower apex feature point h of the right nostril are configured as moving to the left (i.e., the moving directions of these feature points are configured as directions toward the nose midline).
Further, after the current moving direction and current moving distance of the nose feature points are determined, the nose feature points can be moved according to the determined current moving direction and current moving distance to obtain the moved nose feature points, and the adjusted nasal region is determined according to the moved nose feature points.
In the embodiments of the present disclosure, the parameter modification information of nasal region includes the long parameter modification information of nose tune and nose
One in short parameter modification information is adjusted, and/or, in the wide parameter modification information of nose tune and the narrow parameter modification information of nose tune
One;
If the parameter modification information of nasal region includes the long parameter modification information of nose tune, current moving direction is along nose
The direction that middle line direction moves down;
If the parameter modification information of nasal region includes the short parameter modification information of nose tune, current moving direction is along nose
The direction that middle line direction moves up;
If the parameter modification information of nasal region is the wide parameter modification information of nose tune, moving direction is far from nose middle line
Direction;
If the parameter modification information of nasal region is the narrow parameter modification information of nose tune, moving direction is close to nose middle line
Direction
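The four direction rules above can be sketched as a lookup keyed by the parameter modification information. The unit vectors assume image coordinates with y growing downward and, for widen/narrow, a vertical nose midline; all names are illustrative.

```python
def moving_direction(param, point_x, midline_x):
    """Return a unit (dx, dy) moving direction for a nose feature point."""
    toward_midline = 1.0 if point_x < midline_x else -1.0
    directions = {
        "lengthen": (0.0, 1.0),           # down along the midline
        "shorten": (0.0, -1.0),           # up along the midline
        "widen": (-toward_midline, 0.0),  # away from the midline
        "narrow": (toward_midline, 0.0),  # toward the midline
    }
    return directions[param]

# a left-nostril point sits left of the midline, so nose-narrow moves it right
print(moving_direction("narrow", 100.0, 120.0))  # (1.0, 0.0)
```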
Which parameter modification information the parameter modification information of the nasal region specifically includes is determined by the actual configuration. For example, in one optional solution, assuming the configured optional parameter modification information (the special-effect options provided to the user) is the nose-lengthen, nose-shorten, nose-widen, and nose-narrow parameter modification information, the user's nose adjustment instruction includes only one of these four kinds, i.e., the parameter modification information selected by the user is one of them. In another optional solution, assuming the configured optional parameter modification information is nose-enlarge parameter modification information and nose-reduce parameter modification information, where the nose-enlarge parameter modification information includes both the nose-lengthen and the nose-widen parameter modification information, then when the user selects nose-enlarge, the nose special effect selected by the user includes both the nose-lengthen and the nose-widen parameter modification information.
The nose-widen and nose-narrow parameter modification information refer to adjusting the width of the nose bridge (the bridge portion below the eyes, excluding the bridge section between the eyes) and the nose-wing portion, i.e., moving the nose feature points toward or away from the nose midline to achieve the effect of widening or narrowing the nose. The nose-lengthen and nose-shorten parameter modification information adjust the length of the nose rather than its width, referring to elongating or shortening the nose in the facial image to be processed, i.e., controlling the nose feature points to move up or down along the nose midline to achieve the effect of lengthening or shortening the nose. It should be understood that the nose midline mentioned above refers to the vertical midline of the nose in the facial image to be processed, and the nose midline can be calculated from the nose feature points, as shown by the dotted line in Figs. 5a and 5b.
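A minimal sketch of calculating the nose midline from the feature points, assuming a vertical midline whose x-coordinate is the mean of the nose feature points' x-coordinates (the disclosure does not fix a calculation method):

```python
def nose_midline_x(points):
    """Estimate the vertical midline's x as the mean x of the feature points."""
    xs = [x for x, _ in points]
    return sum(xs) / len(xs)

# illustrative left/right nose-wing and nostril feature points
points = [(95.0, 200.0), (125.0, 200.0), (100.0, 215.0), (120.0, 215.0)]
print(nose_midline_x(points))  # 110.0
```

Averaging only left/right symmetric pairs, or fitting a line through the nose-tip and bridge points, would be equally valid realizations.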
It should be noted that the parameter modification information above is merely illustrative; in practical applications the parameter modification information is not limited to these four kinds, and the embodiments of the present disclosure do not limit the specific parameter modification information.
In one example, suppose the adjustment index selected by the user is 8, the detected nose feature points include the feature points a, b, c, d, e, f, g, and h shown in Fig. 3b, and interpolating the nose feature points yields the interpolation feature points k1 and k2.
1. As shown in Fig. 5a, when the nose special effect is the nose-narrow parameter modification information, the current moving distances and current moving directions of the nose feature points a, b, c, d, e, f, g, h and the interpolation feature points k1, k2 are determined according to the pre-configured correspondence between adjustment indexes and feature-point moving distances and the pre-configured correspondence between parameter modification information and moving directions. For example, the movement vector of nose feature point d shown in Fig. 5a is a vector whose direction is the current moving direction of d and whose length is the current moving distance of d. The nose feature points a, b, c, d, e, f, g, h and the interpolation feature points k1, k2 are then moved according to the determined current moving distance and current moving direction of each nose feature point, yielding the moved nose feature points c', d', e', f', g', h', k1', and k2'; the adjusted nasal region is determined according to the moved nose feature points, and the adjusted facial image to be processed is then obtained. As can be seen from the figure, in this example the movement vectors of nose feature points a and b are 0; for feature points c, d, e, f, g, h and interpolation feature points k1, k2, the current moving distance determined according to the pre-configured correspondence between adjustment indexes and feature-point moving distances is X, and the current moving direction determined according to the pre-configured correspondence between the nose-narrow parameter modification information and moving directions is the direction toward the vertical nose midline (such as the current moving direction and current moving distance of the movement vector of nose feature point d shown in Fig. 5a).
2. As shown in Fig. 5b, when the nose special effect is the nose-lengthen parameter modification information, the current moving distances and current moving directions of the nose feature points a, b, c, d, e, f, g, h and the interpolation feature points k1, k2 are likewise determined according to the pre-configured correspondence between adjustment indexes and feature-point moving distances and the pre-configured correspondence between parameter modification information and moving directions. The movement vector of nose feature point d shown in Fig. 5b is a vector whose direction is the current moving direction of d and whose length is the current moving distance of d. The nose feature points a, b, c, d, e, f, g, h and the interpolation feature points k1, k2 are then moved according to the determined current moving distance and current moving direction, yielding the moved nose feature points c', d', e', f', g', h', k1', and k2'; the adjusted nasal region is determined according to the adjusted nose feature points, and the adjusted facial image to be processed is then obtained. As can be seen from the figure, in this example the movement vectors of nose feature points a and b are 0; for feature points c, d, e, f, g, h and interpolation feature points k1, k2, the current moving distance determined according to the pre-configured correspondence between adjustment indexes and feature-point moving distances is X, and the current moving direction determined according to the pre-configured correspondence between the nose-lengthen parameter modification information and moving directions is the direction of moving downward along the nose midline (such as the current moving direction and current moving distance of the movement vector of nose feature point d shown in Fig. 5b).
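Combining the pieces, the movement of a single feature point in the nose-narrow case can be sketched as below. The midline position, the index-to-distance scaling, and the coordinates are illustrative assumptions, not values fixed by the disclosure.

```python
def move_point(point, direction, distance):
    """Move a feature point along a unit direction by the given distance."""
    x, y = point
    dx, dy = direction
    return (x + dx * distance, y + dy * distance)

midline_x = 110.0
d = (100.0, 215.0)            # hypothetical left-nostril feature point
direction = (1.0, 0.0)        # nose-narrow: toward the midline, i.e. right
distance = 20.0 * 8 / 100.0   # adjustment index 8 under the assumed scaling
d_moved = move_point(d, direction, distance)
print(d_moved)  # (101.6, 215.0)
```

The adjusted nasal region is then formed from the moved points d', etc., exactly as in the two numbered examples above.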
In practical applications, to make the adjusted facial image to be processed more true and natural, while adjusting the nose feature points, other facial regions associated with the nasal region can also be correspondingly adjusted according to a pre-configured processing mode. For example, some regions around the nasal region (such as the region between the mouth and the nose) can be smoothed, or the feature points of other parts can be translated correspondingly; for example, after processing according to the nose-lengthen parameter modification information, the feature points of the lip portion can be moved up correspondingly, so that the adjusted nose looks more true and natural.
In addition, in practical applications of the embodiments of the present disclosure, the adjusted facial image to be processed can also be rendered to obtain an effect picture corresponding to the nose special effect, and the specific implementation of the rendering is not limited.
One optional rendering implementation is as follows: the correspondence between each nose feature point and its texture coordinate is preset; after the nose feature points are obtained, the texture coordinates of the interpolation feature points can be calculated from the nose feature points and their corresponding texture coordinates. After the nose feature points are adjusted, the region vertex coordinates of the adjusted nasal region can be determined from the adjusted nose feature points; then, based on the pre-configured correspondence between each nose feature point and its texture coordinate, and the texture coordinates corresponding to the nose feature points, the texture coordinates corresponding to the region vertex coordinates are determined. Afterwards, the textures corresponding to the obtained texture coordinates are attached to the corresponding positions of the adjusted nasal region, yielding an effect picture corresponding to the parameter modification information.
Of course, in practical applications, the correspondence between each nose feature point and its texture coordinate may also not be preset: after the region vertex coordinates are determined, the texture coordinates corresponding to the region vertex coordinates can be calculated in real time by a pre-configured calculation method, and the textures corresponding to the calculated texture coordinates are then attached to the corresponding positions of the adjusted nasal region, yielding an effect picture corresponding to the nose special effect.
It should be noted that the rendering implementations above for the facial image to be processed are merely illustrative, and the specific implementation in practical applications can be set according to actual needs.
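As a sketch of the real-time alternative, a region vertex's texture coordinate can be interpolated from the texture coordinates of neighboring feature points. The linear interpolation and all coordinates here are illustrative assumptions, not the disclosure's fixed method.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def vertex_tex_coord(vx, p1, p2):
    """Interpolate a region vertex's texture coordinate from two feature points.

    p1, p2: ((x, y), (u, v)) pairs of image and texture coordinates; the
    vertex is assumed to lie on the horizontal segment between them.
    """
    (x1, _), (u1, v1) = p1
    (x2, _), (u2, v2) = p2
    t = (vx - x1) / (x2 - x1)
    return (lerp(u1, u2, t), lerp(v1, v2, t))

p1 = ((100.0, 215.0), (0.40, 0.60))  # feature point and its texture coord
p2 = ((120.0, 215.0), (0.60, 0.60))
print(vertex_tex_coord(110.0, p1, p2))
```

In a real renderer this per-vertex lookup would typically be done by the GPU over a triangulated nose mesh.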
Further, after the corresponding nose processing is performed on the facial image to be processed and the effect picture corresponding to the parameter modification information is obtained, if a picture-save operation from the user is received, the effect picture obtained after processing can be stored into the corresponding storage space according to a pre-configured storage path. The pre-configured storage path may be the storage space of the local gallery, or a cloud server; the embodiments of the present disclosure do not limit this.
Based on the same principle as the method shown in Fig. 1, an embodiment of the present disclosure further provides a facial image editing device 20. As shown in Fig. 6, the device 20 may include an image acquisition module 210, an instruction receiving module 220, a feature point acquisition module 230, an image processing module 240, and an image display module 250, where:
the image acquisition module 210 is configured to acquire the facial image to be processed;
the instruction receiving module 220 is configured to receive a nose adjustment instruction for the facial image to be processed, the nose adjustment instruction including parameter modification information for the nasal region of the facial image to be processed;
the feature point acquisition module 230 is configured to, in response to the parameter modification information, detect the nose feature points in the nasal region of the facial image to be processed and obtain the moved nose feature points according to the nose feature points;
the image processing module 240 is configured to determine the adjusted nasal region according to the moved nose feature points and render the adjusted nasal region to obtain the adjusted facial image to be processed;
the image display module 250 is configured to display the adjusted facial image to be processed.
In the embodiment of the present disclosure, when obtaining the moved nose feature points according to the nose feature points, the feature point acquisition module is specifically configured to:
determine the current movement vector of the nose feature points according to the parameter modification information;
control the movement of the nose feature points according to the current movement vector to obtain the moved nose feature points.
In the embodiment of the present disclosure, after detecting the nasal region and nose feature points of the facial image to be processed, the feature point acquisition module is further configured to:
perform interpolation processing on the nose feature points to obtain interpolation feature points in the nasal region of the facial image to be processed, the nose feature points further including the interpolation feature points.
In the embodiment of the present disclosure, the current movement vector includes the current moving distance and the current moving direction; the nose adjustment instruction further includes the adjustment index of the parameter modification information, which identifies the adjustment intensity of the parameter modification information, and the adjustment intensity includes the current moving distance.
When determining the current movement vector of the nose feature points, the feature point acquisition module is specifically configured to:
determine the current moving distance according to the adjustment index and the pre-configured correspondence between adjustment indexes and feature-point moving distances.
In the embodiment of the present disclosure, when obtaining the moved nose feature points according to the nose feature points, the feature point acquisition module is specifically configured to:
determine, according to the pre-configured correspondence between parameter modification information and moving directions, the moving direction corresponding to the nose special effect; the moving direction corresponding to the parameter modification information is the current moving direction.
In an embodiment of the disclosure, the parameter modification information of the nasal region includes one of nose-lengthening parameter modification information and nose-shortening parameter modification information, and/or one of nose-widening parameter modification information and nose-narrowing parameter modification information;
if the parameter modification information of the nasal region includes the nose-lengthening parameter modification information, the current movement direction is downward along the nose midline;
if the parameter modification information of the nasal region includes the nose-shortening parameter modification information, the current movement direction is upward along the nose midline;
if the parameter modification information of the nasal region is the nose-widening parameter modification information, the movement direction is away from the nose midline;
if the parameter modification information of the nasal region is the nose-narrowing parameter modification information, the movement direction is toward the nose midline.
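The four direction rules above can be sketched as follows, in image coordinates where y grows downward; the rule names and the midline representation (two points) are assumptions for illustration:

```python
import numpy as np

def moving_direction(kind, point, midline_top, midline_bottom):
    """Return a unit movement direction for one nose feature point.

    kind: "lengthen" | "shorten" | "widen" | "narrow"  (illustrative names
          for the four kinds of parameter modification information).
    midline_top, midline_bottom: two points defining the nose midline.
    """
    top = np.asarray(midline_top, dtype=float)
    bottom = np.asarray(midline_bottom, dtype=float)
    p = np.asarray(point, dtype=float)
    down = (bottom - top) / np.linalg.norm(bottom - top)  # downward along midline
    # Component of the point's offset that is perpendicular to the midline:
    to_point = p - top
    away = to_point - down * np.dot(to_point, down)
    n = np.linalg.norm(away)
    away = away / n if n > 0 else np.zeros_like(down)  # on-midline points have no lateral direction
    if kind == "lengthen":
        return down      # move down along the nose midline
    if kind == "shorten":
        return -down     # move up along the nose midline
    if kind == "widen":
        return away      # move away from the nose midline
    if kind == "narrow":
        return -away     # move toward the nose midline
    raise ValueError(f"unknown parameter modification kind: {kind}")
```

Combined with the looked-up movement distance, this direction gives the current movement vector for each feature point.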
In an embodiment of the disclosure, when receiving the nose adjustment instruction for the facial image to be processed, the instruction receiving module is specifically configured to: receive a nose adjustment trigger instruction for the facial image to be processed; in response to the nose adjustment trigger instruction, display a nose adjustment interface, the nose adjustment interface showing nose parameter modification information options and the facial image to be processed; and receive the nose adjustment instruction through the nose adjustment interface.
In the embodiments of the disclosure, as soon as a nose adjustment instruction is received, the nasal region of the facial image to be processed is adjusted according to the parameter modification information in the instruction. This realizes one-tap adjustment of the user's nasal region in an image, without manual editing of the nasal region, thereby reducing processing time and improving the user's interaction experience.
Based on the same principle as the image processing method in the embodiments of the disclosure, the embodiments of the disclosure also provide an electronic device, which may include, but is not limited to, a processor and a memory. The memory is configured to store computer operation instructions; the processor is configured to execute the method shown in the embodiments by invoking the computer operation instructions.
Based on the same principle as the image processing method in the embodiments of the disclosure, the disclosure also provides a computer-readable storage medium storing computer program instructions which, when executed, cause a computer to implement the method shown in the above embodiments; details are not repeated here.
Compared with the prior art, the embodiments of the disclosure adjust the nasal region of the facial image to be processed according to the parameter modification information as soon as a nose adjustment instruction is received, realizing one-tap adjustment of the user's nasal region in an image without manual editing, thereby reducing processing time and improving the user's interaction experience.
Referring now to Fig. 7, a schematic structural diagram of an electronic device 700 suitable for implementing the embodiments of the disclosure is shown; the electronic device may be a terminal device or a server. Terminal devices may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing unit 701 (such as a central processing unit or a graphics processor), which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random-access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing unit 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704; an input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), loudspeaker, or vibrator; storage devices 708 including, for example, a magnetic tape or hard disk; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows the electronic device 700 with various devices, it should be understood that not all of the illustrated devices need to be implemented or present; more or fewer devices may alternatively be implemented or present.
In particular, according to the embodiments of the disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 709, installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing unit 701, the above-described functions defined in the method of the embodiments of the disclosure are performed.
It should be noted that the above computer-readable medium of the disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the method shown in the above embodiments.
Computer program code for performing the operations of the disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two successive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units described in the embodiments of the disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet Protocol addresses".
The above description is only a preferred embodiment of the disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above disclosed concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the disclosure.
Claims (10)
1. An image processing method, characterized by comprising:
acquiring a facial image to be processed;
receiving a nose adjustment instruction for the facial image to be processed, the nose adjustment instruction including parameter modification information for a nasal region of the facial image to be processed;
in response to the parameter modification information, detecting nose feature points in the nasal region of the facial image to be processed;
obtaining moved nose feature points from the nose feature points;
determining an adjusted nasal region according to the moved nose feature points, and rendering the adjusted nasal region to obtain an adjusted facial image;
displaying the adjusted facial image.
2. The method according to claim 1, characterized in that obtaining the moved nose feature points from the nose feature points comprises:
determining a current movement vector of the nose feature points according to the parameter modification information;
controlling the movement of the nose feature points according to the current movement vector to obtain the moved nose feature points.
3. The method according to claim 1, characterized by further comprising, after detecting the nose feature points in the nasal region of the facial image to be processed:
performing interpolation on the nose feature points to obtain interpolated feature points in the nasal region of the facial image to be processed, the nose feature points further including the interpolated feature points.
4. The method according to claim 2 or 3, characterized in that the current movement vector includes a current movement distance and a current movement direction, the nose adjustment instruction further includes an adjustment index for the parameter modification information, the adjustment index identifies an adjustment intensity of the parameter modification information, and the adjustment intensity includes the current movement distance;
determining the current movement vector of the nose feature points comprises:
determining the current movement distance according to the adjustment index and a preconfigured correspondence between adjustment indices and feature-point movement distances.
5. The method according to any one of claims 2 to 4, characterized in that determining the current movement vector of the nose feature points comprises:
determining, according to a preconfigured correspondence between parameter modification information and movement directions, the movement direction corresponding to the nose effect, the movement direction corresponding to the parameter modification information being the current movement direction.
6. The method according to claim 5, characterized in that the parameter modification information of the nasal region includes one of nose-lengthening parameter modification information and nose-shortening parameter modification information, and/or one of nose-widening parameter modification information and nose-narrowing parameter modification information;
if the parameter modification information of the nasal region includes the nose-lengthening parameter modification information, the current movement direction is downward along the nose midline;
if the parameter modification information of the nasal region includes the nose-shortening parameter modification information, the current movement direction is upward along the nose midline;
if the parameter modification information of the nasal region is the nose-widening parameter modification information, the movement direction is away from the nose midline;
if the parameter modification information of the nasal region is the nose-narrowing parameter modification information, the movement direction is toward the nose midline.
7. The method according to claim 1, characterized in that receiving the nose adjustment instruction for the facial image to be processed comprises:
receiving a nose adjustment trigger instruction for the facial image to be processed;
in response to the nose adjustment trigger instruction, displaying a nose adjustment interface, the nose adjustment interface showing nose parameter modification information options and the facial image to be processed;
receiving the nose adjustment instruction through the nose adjustment interface.
8. An image processing apparatus, characterized by comprising:
an image acquisition module, configured to acquire a facial image to be processed;
an instruction receiving module, configured to receive a nose adjustment instruction for the facial image to be processed, the nose adjustment instruction including parameter modification information for a nasal region of the facial image to be processed;
a feature point acquisition module, configured to detect nose feature points in the nasal region of the facial image to be processed in response to the parameter modification information, and to obtain moved nose feature points from the nose feature points;
an image processing module, configured to determine an adjusted nasal region according to the moved nose feature points, and to render the adjusted nasal region to obtain an adjusted facial image;
an image display module, configured to display the adjusted facial image.
9. An electronic device, characterized by comprising:
a processor and a memory;
the memory being configured to store computer operation instructions;
the processor being configured to execute the method according to any one of claims 1 to 7 by invoking the computer operation instructions.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer program instructions which cause a computer to execute the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811458390.1A CN109544444A (en) | 2018-11-30 | 2018-11-30 | Image processing method, device, electronic equipment and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109544444A (en) | 2019-03-29 |
Family
ID=65851920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811458390.1A Pending CN109544444A (en) | 2018-11-30 | 2018-11-30 | Image processing method, device, electronic equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109544444A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106296571A (en) * | 2016-07-29 | 2017-01-04 | 厦门美图之家科技有限公司 | A kind of based on face grid reduce wing of nose method, device and calculating equipment |
CN106339694A (en) * | 2016-09-14 | 2017-01-18 | 北京金山安全软件有限公司 | Image processing method and device and electronic equipment |
CN107767333A (en) * | 2017-10-27 | 2018-03-06 | 努比亚技术有限公司 | Method, equipment and the computer that U.S. face is taken pictures can storage mediums |
CN107844764A (en) * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN107909058A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008911A (en) * | 2019-04-10 | 2019-07-12 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN110008911B (en) * | 2019-04-10 | 2021-08-17 | 北京旷视科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN110084204A (en) * | 2019-04-29 | 2019-08-02 | 北京字节跳动网络技术有限公司 | Image processing method, device and electronic equipment based on target object posture |
CN110084204B (en) * | 2019-04-29 | 2020-11-24 | 北京字节跳动网络技术有限公司 | Image processing method and device based on target object posture and electronic equipment |
CN110211033A (en) * | 2019-06-28 | 2019-09-06 | 北京字节跳动网络技术有限公司 | Face image processing process, device, medium and electronic equipment |
CN110363718A (en) * | 2019-06-28 | 2019-10-22 | 北京字节跳动网络技术有限公司 | Face image processing process, device, medium and electronic equipment |
CN112734626A (en) * | 2019-10-14 | 2021-04-30 | 成都武侯珍妍医疗美容门诊部有限公司 | Nose virtual shaping method of deep learning model |
CN110956595A (en) * | 2019-11-29 | 2020-04-03 | 广州酷狗计算机科技有限公司 | Method, device and system for face beautifying processing and storage medium |
CN110956595B (en) * | 2019-11-29 | 2023-11-24 | 广州酷狗计算机科技有限公司 | Face beautifying processing method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||