CN109448069A - Template generation method and mobile terminal - Google Patents
Template generation method and mobile terminal
- Publication number
- CN109448069A (application CN201811280785.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- input
- subgraph
- face
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
Abstract
The present invention provides a template generation method and a mobile terminal. The method includes: receiving a user's first input on N alternative facial images; obtaining M reference sub-images of M target face regions chosen by the first input; and generating a target face template based on the M reference sub-images. The M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1. With the template generation scheme provided by the embodiments of the present invention, the target face template is generated from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
Description
Technical field
The present invention relates to the technical field of mobile terminals, and in particular to a template generation method and a mobile terminal.
Background technique
As the photographing capability of mobile terminals improves and their functions become more complete, more and more people are used to recording their lives with the mobile terminal camera. Because of the widespread pursuit of beauty, skin-beautification functions have become stronger and more universal. However, the pursuit of beauty is not satisfied by skin beautification alone, and demand for face reshaping has gradually grown stronger. Some reshaping cameras currently on the market let users adjust parameters themselves to reshape their own faces, but adjusting each parameter is cumbersome. To solve this problem, many vendors provide reshaping templates of different styles for users to apply.
However, only the few templates provided by vendors are available, and existing templates cannot satisfy users' demands. When a user is dissatisfied with the templates, the user still needs to adjust different parameters, which makes the operation cumbersome.
Summary of the invention
The embodiments of the present invention provide a template generation method and a mobile terminal, to solve the prior-art problem that adjusting parameters of a facial image is a cumbersome operation.
In order to solve the above-mentioned technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a template generation method, the method including: receiving a user's first input on N alternative facial images; obtaining M reference sub-images of M target face regions chosen by the first input; and generating a target face template based on the M reference sub-images; wherein the M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1.
In a second aspect, an embodiment of the present invention provides a mobile terminal, including: a first receiving module, configured to receive a user's first input on N alternative facial images; a first obtaining module, configured to obtain M reference sub-images of M target face regions chosen by the first input; and a first generation module, configured to generate a target face template based on the M reference sub-images; wherein the M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the template generation method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the template generation method.
In the embodiments of the present invention, a user's first input on N alternative facial images is received; M reference sub-images of the M target face regions chosen by the first input are obtained; and a target face template is generated based on the M reference sub-images. With this template generation scheme, the target face template is built from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
Detailed description of the invention
Fig. 1 is a first flowchart of a template generation method provided by an embodiment of the present invention;
Fig. 2 is a second flowchart of the template generation method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of an image selection interface of the template generation method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of an image segmentation interface of the template generation method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a selection interface for segmented images of the template generation method provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a preview template display interface of the template generation method provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a template construction interface of the template generation method provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of a target face template of the template generation method provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of a template selection interface of the template generation method provided by an embodiment of the present invention;
Fig. 10 is a structural block diagram of a mobile terminal provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the hardware structure of a mobile terminal provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a first flowchart of a template generation method provided by an embodiment of the present invention is shown. The template generation method includes the following steps:
Step 101: receive a user's first input on N alternative facial images.
The N alternative facial images may be arbitrary images the user selects from an image database, or facial images of the user.
It should be noted that the first input is the user's selection operation on face sub-regions in the N alternative facial images.
Step 102: obtain M reference sub-images of the M target face regions chosen by the first input.
The user can choose M reference sub-images for the chosen target face regions, where each reference sub-image corresponds to a different region of a target face, for example: face shape, forehead, mouth, nose, eyes, eye spacing, chin, teeth, and so on.
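The collection performed in step 102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the tap/dict structure, the region names, and the "last tap wins" rule are all assumptions introduced for clarity.

```python
# Hypothetical region labels mirroring the sub-regions named above.
FACE_REGIONS = {"face_shape", "forehead", "mouth", "nose", "eyes",
                "eye_spacing", "chin", "teeth"}

def collect_reference_subimages(first_input):
    """Step 102 sketch: turn the user's first input (selections on face
    sub-regions across the N alternative facial images) into M reference
    sub-images, one per chosen target face region."""
    chosen = {}
    for tap in first_input:  # each tap: {"region": ..., "image": ..., ...}
        if tap["region"] not in FACE_REGIONS:
            raise ValueError(f"unknown face region: {tap['region']}")
        # a later selection on the same region replaces the earlier choice
        chosen[tap["region"]] = tap
    return list(chosen.values())
```

Used this way, selecting the nose twice from different images keeps only the most recent choice, so exactly one reference sub-image remains per target face region.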
Step 103: generate a target face template based on the M reference sub-images.
The M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1.
The M reference sub-images may be different face regions in multiple alternative facial images, or face regions in one alternative image. The target face template is generated based on the M reference sub-images, and the user can then have the user's facial image processed based on the target face template.
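Step 103, together with the constraints just stated (M greater than 1, sub-images drawn from at least two alternative images), can be sketched as below. The dict representation of a sub-image is an assumption for illustration only.

```python
def generate_target_template(reference_subimages):
    """Step 103 sketch: build a target face template from M reference
    sub-images, each a dict like {"region": "nose", "source": 2}.
    Per the claims, M > 1 and the M target face regions must be drawn
    from at least two different alternative facial images."""
    if len(reference_subimages) < 2:
        raise ValueError("M must be an integer greater than 1")
    if len({s["source"] for s in reference_subimages}) < 2:
        raise ValueError("regions must come from at least two alternative images")
    # the template maps each target face region to its chosen sub-image
    return {s["region"]: s for s in reference_subimages}
```

The validation mirrors the wherein-clause of the first aspect rather than adding behavior beyond it.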
In the embodiments of the present invention, a user's first input on N alternative facial images is received; M reference sub-images of the M target face regions chosen by the first input are obtained; and a target face template is generated based on the M reference sub-images. With this template generation scheme, the target face template is built from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
Referring to Fig. 2, a second flowchart of the template generation method provided by an embodiment of the present invention is shown. The template generation method includes the following steps:
Step 201: obtain a target facial image.
The target facial image may be an arbitrary image the user selects from an image database, or a facial image of the user.
As shown in Fig. 3, an alternative facial image is selected in the image selection interface.
Step 202: perform image segmentation on the target facial image to generate M face regions.
The target facial image is identified by face recognition. Face recognition methods may include geometric-feature-based methods, template-based methods, and model-based methods. Template-based methods can be divided into correlation-matching methods, eigenface methods, linear discriminant analysis, singular value decomposition, neural network methods, dynamic link matching, and so on. Model-based methods include those based on hidden Markov models, active shape models, active appearance models, and the like. The facial image obtained in the image preview interface is recognized by any one of the above face recognition algorithms.
As shown in Fig. 4, the recognized face is then segmented into M face regions.
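One common way to realize the segmentation of step 202 is to group facial landmark points into named regions and take each group's bounding box. The sketch below assumes a 68-point landmark layout (as produced by widely used landmark detectors); the index ranges and the bounding-box representation are illustrative assumptions, not the patent's method.

```python
# Assumed 68-point landmark index ranges for a few of the named regions.
REGION_LANDMARK_RANGES = {
    "face_shape": range(0, 17),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def segment_face(landmarks):
    """Step 202 sketch: split a detected face into labelled regions by
    grouping its landmark points.  `landmarks` maps index -> (x, y).
    Each region is returned as a bounding box (x0, y0, x1, y1)."""
    regions = {}
    for name, idx_range in REGION_LANDMARK_RANGES.items():
        pts = [landmarks[i] for i in idx_range if i in landmarks]
        if pts:
            xs, ys = zip(*pts)
            regions[name] = (min(xs), min(ys), max(xs), max(ys))
    return regions
```

A real implementation would run a landmark detector first and would likely use region masks rather than plain boxes; the grouping idea is the same.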
Step 203: display M labels.
Each label is used to indicate one face region. Labels include but are not limited to: face shape, forehead, mouth, nose, eyes, eye spacing, chin, teeth, and so on. The M labels are displayed in the image preview interface to facilitate subsequent selection.
Step 204: receive the user's first sub-input on a target sub-region of the M face regions.
The user can make a first sub-input on each of the M face regions, where the first sub-input may be a click, a double click, a long press, or the like; the embodiments of the present invention do not specifically limit this. Specifically, each face sub-region obtained by segmentation is displayed in the image preview interface, as shown in Fig. 5. The user can select at least one region based on personal preference, and multiple regions can be selected by ticking them; the selected face sub-regions are determined as target sub-regions.
Step 205: in response to the first sub-input, display at least one alternative facial image associated with the target sub-region.
Each target sub-region corresponds to an alternative facial image; to help the user understand the chosen region, at least one alternative facial image associated with the target sub-region is displayed.
Step 206: receive the user's at least one sub-input on the at least one alternative facial image.
It should be noted that each sub-input is the user's selection operation on a face sub-region in the alternative facial images.
Step 207: obtain M reference sub-images of the M target face regions chosen by the first input.
The user can choose M reference sub-images for the chosen target face regions, where each reference sub-image corresponds to a different region of a target face, for example: face shape, forehead, mouth, nose, eyes, eye spacing, chin, teeth, and so on.
Step 208: receive the user's second input on target reference sub-images.
Before clicking target reference sub-images, the user needs to choose a default template first; as shown in Fig. 6, one template is chosen as the default template. The user can select multiple target reference sub-images, where the second input may be a click, a drag, a tick, or the like; the embodiments of the present invention do not specifically limit this.
Step 209: in response to the second input, display the target reference sub-images at the positions in the default face template corresponding to the second input.
The target reference sub-images selected by the second input are added to the default face template; where the default face template already has a face sub-image, the target reference sub-image is added at the corresponding position in the default face template. As shown in Fig. 7, the target reference sub-images chosen by the second input, i.e. the material, are dragged to the corresponding positions.
Step 210: generate the target face template based on the default face template and the M reference sub-images.
The M reference sub-images include the target reference sub-images. If the user selects a default face template, each face sub-image in the default face template is replaced with the corresponding reference sub-image, and the other regions in the default face template are retained; as shown in Fig. 8, the target face template is generated. With a target face template generated from a default face template, the user does not need to add regions other than the reference sub-images, and does not need to adjust the parameters of regions other than the reference sub-images, which is convenient for the user.
When the default template the user selects is an empty face template, the reference sub-images are added to the corresponding regions of the empty face template. For the remaining unfilled regions, the user can again select other reference sub-images from other facial images and add them to the empty face template, so that the empty face template becomes a complete target face template.
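Both cases of step 210 — starting from a populated default template or from an empty one — reduce to the same merge, sketched below. The dict-of-regions representation is an assumption carried over for illustration, not the patent's data format.

```python
def build_from_default_template(default_template, target_subimages):
    """Steps 209-210 sketch: place each target reference sub-image at its
    region in a copy of the default face template; regions the user did not
    replace keep the default template's own sub-images.  Starting from an
    empty template ({}) only the user-added regions are filled."""
    merged = dict(default_template)          # retain unreplaced regions
    for sub in target_subimages:
        merged[sub["region"]] = sub          # drop in the user's pick
    return merged
```

Because unreplaced regions are kept as-is, the user only supplies the sub-images they care about, matching the convenience described above.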
Step 211: when the user does not name the target face template, obtain a preset target name.
When the user does not name the target face template, a default name is automatically generated for it, and the target face template is stored in the mobile terminal in association with the default name.
Step 212: store the target face template in association with the target name.
The user can name the generated target face template; when the target face template is not named, a default name is automatically generated for it, which makes the template easy to look up in the terminal for subsequent use.
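Steps 211-212 amount to a small naming-and-storage routine, sketched below. The "custom template N" numbering scheme is an illustrative assumption (the patent only requires some automatically generated default name).

```python
def store_template(store, template, name=None):
    """Steps 211-212 sketch: if the user supplies no name, generate a
    default one so the template remains easy to find later, then store
    the template in association with that name."""
    if not name:
        name = f"custom template {len(store) + 1}"
    store[name] = template
    return name
```

Here `store` stands in for the terminal's persistent template storage; any mapping keyed by name would do.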
A third input of the user is received; in response to the third input, the target facial image is adjusted according to the target face template.
As shown in Fig. 9, custom template 1 is a target face template; the user can select custom template 1, and the target facial image is adjusted according to custom template 1. The user's selection operation on the target template, i.e. the user's third input, is received; each region of the user's facial image is adjusted based on the parameters of each face region determined in the target face template, generating a target image of the user's facial image.
With the user's facial image adjusted by the target face template, the user no longer needs to adjust each region manually; a satisfactory image is generated automatically, which simplifies operation.
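The third-input adjustment can be sketched as applying the template's per-region parameters to the user's facial image. Modelling each region as a single scalar parameter is a deliberate simplification for illustration; real adjustment would warp pixels per region.

```python
def adjust_face_image(face_regions, template):
    """Third-input sketch: adjust each region of the user's facial image
    according to the parameters held in the target face template, so the
    user does not tune parameters by hand.  Regions absent from the
    template are left unchanged."""
    return {region: template.get(region, value)
            for region, value in face_regions.items()}
```

The point of the sketch is the control flow: once the template exists, adjustment is a lookup per region rather than a sequence of manual parameter edits.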
In the embodiments of the present invention, a user's first input on N alternative facial images is received; M reference sub-images of the M target face regions chosen by the first input are obtained; and a target face template is generated based on the M reference sub-images. With this template generation scheme, the target face template is built from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
Referring to Fig. 10, a structural block diagram of a mobile terminal provided by an embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the present invention includes: a first receiving module 301, configured to receive a user's first input on N alternative facial images; a first obtaining module 302, configured to obtain M reference sub-images of M target face regions chosen by the first input; and a first generation module 303, configured to generate a target face template based on the M reference sub-images; wherein the M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1.
Preferably, the terminal further includes: a second obtaining module, configured to obtain a target facial image before the first receiving module receives the user's first input on the N alternative facial images; a second generation module, configured to perform image segmentation on the target facial image to generate M face regions; and a display module, configured to display M labels, where each label is used to indicate one face region.
Preferably, the first input includes at least one sub-input, and the terminal further includes: a second receiving module, configured to receive the user's first sub-input on a target sub-region of the M face regions after the display module displays the M labels; and a first response module, configured to display, in response to the first sub-input, at least one alternative facial image associated with the target sub-region. The first receiving module is specifically configured to receive the user's at least one sub-input on the at least one alternative facial image.
Preferably, the first generation module includes: a first receiving sub-module, configured to receive the user's second input on target reference sub-images; a response sub-module, configured to display, in response to the second input, the target reference sub-images at the positions in the default face template corresponding to the second input; and a generation sub-module, configured to generate the target face template based on the default face template and the M reference sub-images, where the M reference sub-images include the target reference sub-images.
Preferably, the terminal further includes: a naming module, configured to obtain a preset target name when the user does not name the target face template after the first generation module generates the target face template based on the M reference sub-images; and a storage module, configured to store the target face template in association with the target name.
Preferably, the terminal further includes: a third receiving module, configured to receive a third input of the user after the first generation module generates the target face template based on the M reference sub-images; and a second response module, configured to adjust the target facial image according to the target face template in response to the third input.
The mobile terminal provided by the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 2; to avoid repetition, details are not repeated here.
In the embodiments of the present invention, a user's first input on N alternative facial images is received; M reference sub-images of the M target face regions chosen by the first input are obtained; and a target face template is generated based on the M reference sub-images. With this template generation scheme, the target face template is built from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
Embodiment five
Referring to Fig. 11, a schematic diagram of the hardware structure of a mobile terminal for implementing each embodiment of the present invention is shown.
The mobile terminal 400 includes but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, a power supply 411, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 11 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or arrange components differently. In the embodiments of the present invention, mobile terminals include but are not limited to mobile phones, tablet computers, laptops, palmtop computers, in-vehicle terminals, wearable devices, pedometers, and the like.
The processor 410 is configured to: receive a user's first input on N alternative facial images; obtain M reference sub-images of the M target face regions chosen by the first input; and generate a target face template based on the M reference sub-images; wherein the M target face regions are face sub-regions selected from at least two alternative facial images, each reference sub-image corresponds to one face region, each alternative facial image includes at least one face sub-region, and N and M are integers greater than 1.
In the embodiments of the present invention, a user's first input on N alternative facial images is received; M reference sub-images of the M target face regions chosen by the first input are obtained; and a target face template is generated based on the M reference sub-images. With this template generation scheme, the target face template is built from the M reference sub-images of the M target face regions chosen by the user, so a target face template can be made according to personal preference. The user's facial image can then be adjusted with the target face template without the user manually adjusting different parameters, which simplifies operation.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 401 can be used for sending and receiving signals in the process of sending and receiving information or during a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 410 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 401 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with networks and other devices through a wireless communication system.
The mobile terminal provides users with wireless broadband Internet access through the network module 402, for example helping users send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 403 can convert audio data received by the radio frequency unit 401 or the network module 402, or stored in the memory 409, into an audio signal and output it as sound. Moreover, the audio output unit 403 can also provide audio output related to specific functions performed by the mobile terminal 400 (for example, call-signal reception sound, message reception sound, and so on). The audio output unit 403 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 404 is used for receiving audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or sent via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 401 for output.
The mobile terminal 400 further includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 4061 and/or the backlight when the mobile terminal 400 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the mobile terminal's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and so on, which are not described in detail here.
The display unit 406 is used for displaying information input by the user or information provided to the user. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 407 can be used for receiving input numbers or character information and generating key signal input related to the user settings and function control of the mobile terminal. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations by the user on or near the touch panel 4071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 410, and receives and executes commands sent by the processor 410. Furthermore, the touch panel 4071 can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 4071, the user input unit 407 may also include other input devices 4072, which may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 4071 can be overlaid on the display panel 4061. After the touch panel 4071 detects a touch operation on or near it, it transmits the operation to the processor 410 to determine the type of the touch event; the processor 410 then provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in Fig. 11 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 4071 and the display panel 4061 can be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 408 is an interface for connecting external devices to the mobile terminal 400. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 408 can be used for receiving input (for example, data information, electric power, and so on) from an external device and transferring the received input to one or more elements within the mobile terminal 400, or for transmitting data between the mobile terminal 400 and an external device.
The memory 409 can be used for storing software programs and various data. The memory 409 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; and the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book) and the like. In addition, the memory 409 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other solid-state storage devices.
The processor 410 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and executes the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 409 and calling the data stored in the memory 409, thereby monitoring the mobile terminal as a whole. The processor 410 may include one or more processing units; preferably, the processor 410 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 410.
The mobile terminal 400 may further include a power supply 411 (such as a battery) that supplies power to the components. Preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
In addition, the mobile terminal 400 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the above template generation method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
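As a rough illustration of the claimed flow: selecting M target face regions from N candidate face images yields M reference sub-images, which are composited into a target face template. The sketch below models images as nested lists of grayscale values; the region boxes, canvas size, and helper names are assumptions made for illustration and are not part of the disclosure.

```python
# Hedged sketch of the template-generation flow, assuming nested-list
# "images" and explicit (top, left, height, width) region boxes.

def crop(image, box):
    """Extract the reference sub-image inside box = (top, left, h, w)."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

def generate_face_template(reference_subimages, canvas_size=(8, 8), fill=0):
    """Paste each (box, sub-image) pair onto a blank canvas.

    reference_subimages: dict mapping box -> sub-image, one entry per
    selected face region (the M reference sub-images of the method).
    """
    rows, cols = canvas_size
    template = [[fill] * cols for _ in range(rows)]
    for (top, left, h, w), sub in reference_subimages.items():
        for r in range(h):
            for c in range(w):
                template[top + r][left + c] = sub[r][c]
    return template

# Two candidate face images; the user picks the "eyes" region from the
# first and the "mouth" region from the second (N = 2, M = 2).
face_a = [[1] * 8 for _ in range(8)]
face_b = [[2] * 8 for _ in range(8)]
eyes_box, mouth_box = (1, 1, 2, 6), (5, 2, 2, 4)

refs = {
    eyes_box: crop(face_a, eyes_box),    # reference sub-image 1
    mouth_box: crop(face_b, mouth_box),  # reference sub-image 2
}
template = generate_face_template(refs)
print(template[1][1], template[5][2])  # 1 2 -- one pixel from each source face
```

In a real implementation the boxes would come from face-region segmentation and the paste step would blend edges rather than overwrite pixels; the dictionary-of-boxes structure is only a stand-in for that machinery.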
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above template generation method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of further restrictions, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can make many further forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (14)
1. A template generation method applied to a mobile terminal, wherein the method comprises:
receiving a first input of a user on N candidate face images;
obtaining M reference sub-images of M target face regions selected by the first input;
generating a target face template based on the M reference sub-images;
wherein the M target face regions are face sub-regions selected from at least two candidate face images, each reference sub-image corresponds to one face region, each candidate face image comprises at least one face sub-region, and N and M are integers greater than 1.
2. The method according to claim 1, wherein, before the receiving of the first input of the user on the N candidate face images, the method further comprises:
obtaining a target face image;
performing image segmentation on the target face image to generate M face regions;
displaying M marks;
wherein each mark is used to indicate one face region.
3. The method according to claim 2, wherein the first input comprises at least one sub-input, and after the displaying of the M marks, the method further comprises:
receiving a first sub-input of the user on a target sub-region of the M face regions;
in response to the first sub-input, displaying at least one candidate face image associated with the target sub-region;
wherein the receiving of the first input of the user on the N candidate face images comprises:
receiving at least one sub-input of the user on the at least one candidate face image.
4. The method according to claim 1, wherein the generating of the target face template based on the M reference sub-images comprises:
receiving a second input of the user on a target reference sub-image;
in response to the second input, displaying the target reference sub-image at a position, corresponding to the second input, in a preset face template;
generating the target face template based on the preset face template and the M reference sub-images;
wherein the M reference sub-images include the target reference sub-image.
5. The method according to claim 1, wherein, after the generating of the target face template based on the M reference sub-images, the method further comprises:
in a case where the user does not name the target face template, obtaining a preset target name;
storing the target face template in association with the target name.
6. The method according to claim 2, wherein, after the generating of the target face template based on the M reference sub-images, the method further comprises:
receiving a third input of the user;
in response to the third input, adjusting the target face image according to the target face template.
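The interactive flow of claims 2, 3, and 6 (segment the target face image into marked regions, show candidate face images for a chosen sub-region, then adjust the target face image from the finished template) can be sketched as follows. The region names, candidate mapping, and dictionary-based image stand-in are illustrative assumptions only, not part of the claimed method.

```python
# Hedged sketch of the claimed interaction, assuming a fixed region list
# and a precomputed region -> candidate-faces mapping.

REGIONS = ["eyebrows", "eyes", "nose", "mouth"]  # the M displayed marks

CANDIDATES = {                                    # region -> candidate face images
    "eyes": ["face_1", "face_3"],
    "mouth": ["face_2", "face_3"],
}

def candidates_for(region):
    """First sub-input: return the candidate face images associated
    with the chosen target sub-region."""
    if region not in REGIONS:
        raise ValueError(f"unknown region: {region}")
    return CANDIDATES.get(region, [])

def adjust(target_image, template):
    """Third input: overwrite each templated region of the target
    face image, leaving untemplated regions unchanged."""
    adjusted = dict(target_image)
    adjusted.update(template)
    return adjusted

# Target face image segmented into named regions; the finished template
# replaces only the eyes and mouth.
target = {r: f"original_{r}" for r in REGIONS}
template = {"eyes": "face_1_eyes", "mouth": "face_3_mouth"}
print(candidates_for("eyes"))            # ['face_1', 'face_3']
print(adjust(target, template)["eyes"])  # face_1_eyes
```

Keeping the adjustment as a pure function over a copy of the target image mirrors the claim structure, where the original target face image is obtained once and only adjusted in response to the third input.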
7. A terminal, wherein the terminal comprises:
a first receiving module, configured to receive a first input of a user on N candidate face images;
a first obtaining module, configured to obtain M reference sub-images of M target face regions selected by the first input;
a first generation module, configured to generate a target face template based on the M reference sub-images;
wherein the M target face regions are face sub-regions selected from at least two candidate face images, each reference sub-image corresponds to one face region, each candidate face image comprises at least one face sub-region, and N and M are integers greater than 1.
8. The terminal according to claim 7, wherein the terminal further comprises:
a second obtaining module, configured to obtain a target face image before the first receiving module receives the first input of the user on the N candidate face images;
a second generation module, configured to perform image segmentation on the target face image to generate M face regions;
a display module, configured to display M marks;
wherein each mark is used to indicate one face region.
9. The terminal according to claim 8, wherein the first input comprises at least one sub-input, and the terminal further comprises:
a second receiving module, configured to receive a first sub-input of the user on a target sub-region of the M face regions after the display module displays the M marks;
a first response module, configured to display, in response to the first sub-input, at least one candidate face image associated with the target sub-region;
wherein the first receiving module is specifically configured to receive at least one sub-input of the user on the at least one candidate face image.
10. The terminal according to claim 7, wherein the first generation module comprises:
a first receiving sub-module, configured to receive a second input of the user on a target reference sub-image;
a response sub-module, configured to display, in response to the second input, the target reference sub-image at a position, corresponding to the second input, in a preset face template;
a generation sub-module, configured to generate the target face template based on the preset face template and the M reference sub-images;
wherein the M reference sub-images include the target reference sub-image.
11. The terminal according to claim 7, wherein the terminal further comprises:
a naming module, configured to obtain a preset target name in a case where the user does not name the target face template after the first generation module generates the target face template based on the M reference sub-images;
a storage module, configured to store the target face template in association with the target name.
12. The terminal according to claim 8, wherein the terminal further comprises:
a third receiving module, configured to receive a third input of the user after the first generation module generates the target face template based on the M reference sub-images;
a second response module, configured to adjust, in response to the third input, the target face image according to the target face template.
13. A mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein, when executed by the processor, the computer program implements the steps of the template generation method according to any one of claims 1 to 6.
14. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the template generation method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811280785.7A CN109448069B (en) | 2018-10-30 | 2018-10-30 | Template generation method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109448069A true CN109448069A (en) | 2019-03-08 |
CN109448069B CN109448069B (en) | 2023-07-18 |
Family
ID=65549029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811280785.7A Active CN109448069B (en) | 2018-10-30 | 2018-10-30 | Template generation method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109448069B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110933354A (en) * | 2019-11-18 | 2020-03-27 | 深圳传音控股股份有限公司 | Customizable multi-style multimedia processing method and terminal thereof |
CN111080747A (en) * | 2019-12-26 | 2020-04-28 | 维沃移动通信有限公司 | Face image processing method and electronic equipment |
CN111488104A (en) * | 2020-04-16 | 2020-08-04 | 维沃移动通信有限公司 | Font editing method and electronic equipment |
CN112346614A (en) * | 2020-10-28 | 2021-02-09 | 京东方科技集团股份有限公司 | Image display method and device, electronic device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015058381A1 (en) * | 2013-10-23 | 2015-04-30 | Huawei Device Co., Ltd. | Method and terminal for selecting image from continuous images |
WO2016011834A1 (en) * | 2014-07-23 | 2016-01-28 | Xing Xiaoyue | Image processing method and system |
CN105654420A (en) * | 2015-12-21 | 2016-06-08 | Xiaomi Technology Co., Ltd. | Face image processing method and device |
CN107085823A (en) * | 2016-02-16 | 2017-08-22 | Beijing Xiaomi Mobile Software Co., Ltd. | Face image processing method and device |
CN108648142A (en) * | 2018-05-21 | 2018-10-12 | Beijing Microlive Vision Technology Co., Ltd. | Image processing method and device |
Non-Patent Citations (1)
Title |
---|
LIANG Lingyu et al., "Face image illumination transfer via adaptive edit propagation", Optics and Precision Engineering * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110933354A (en) * | 2019-11-18 | 2020-03-27 | 深圳传音控股股份有限公司 | Customizable multi-style multimedia processing method and terminal thereof |
CN110933354B (en) * | 2019-11-18 | 2023-09-01 | 深圳传音控股股份有限公司 | Customizable multi-style multimedia processing method and terminal thereof |
CN111080747A (en) * | 2019-12-26 | 2020-04-28 | 维沃移动通信有限公司 | Face image processing method and electronic equipment |
CN111080747B (en) * | 2019-12-26 | 2023-04-07 | 维沃移动通信有限公司 | Face image processing method and electronic equipment |
CN111488104A (en) * | 2020-04-16 | 2020-08-04 | 维沃移动通信有限公司 | Font editing method and electronic equipment |
CN112346614A (en) * | 2020-10-28 | 2021-02-09 | 京东方科技集团股份有限公司 | Image display method and device, electronic device and storage medium |
CN112346614B (en) * | 2020-10-28 | 2022-07-29 | 京东方科技集团股份有限公司 | Image display method and device, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109448069B (en) | 2023-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109461117A (en) | Image processing method and mobile terminal | |
CN109448069A (en) | Template generation method and mobile terminal | |
CN109814968A (en) | Data input method, terminal device, and computer-readable storage medium | |
CN107786811B (en) | Photographing method and mobile terminal | |
CN109461124A (en) | Image processing method and terminal device | |
CN108920119A (en) | Sharing method and mobile terminal | |
CN108427873A (en) | Biometric recognition method and mobile terminal | |
CN109151367A (en) | Video call method and terminal device | |
CN109167914A (en) | Image processing method and mobile terminal | |
CN109409244A (en) | Output method for object placement scheme and mobile terminal | |
CN109215683A (en) | Reminding method and terminal | |
CN110007758A (en) | Terminal control method and terminal | |
CN109062411A (en) | Screen brightness adjustment method and mobile terminal | |
CN109669611A (en) | Fitting method and terminal | |
CN109085963A (en) | Interface display method and terminal device | |
CN108881782A (en) | Video call method and terminal device | |
CN109671034A (en) | Image processing method and terminal device | |
CN109413264A (en) | Background picture adjustment method and terminal device | |
CN110096203A (en) | Screenshot method and mobile terminal | |
CN109949809A (en) | Voice control method and terminal device | |
CN109639981A (en) | Image shooting method and mobile terminal | |
CN109164908A (en) | Interface control method and mobile terminal | |
CN109166164A (en) | Expression picture generation method and terminal | |
CN107563353A (en) | Image processing method, apparatus, and mobile terminal | |
CN110443752A (en) | Image processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||