CN108021905A - image processing method, device, terminal device and storage medium - Google Patents

Image processing method, device, terminal device and storage medium

Info

Publication number
CN108021905A
CN108021905A (application CN201711394859.5A)
Authority
CN
China
Prior art keywords
face image
target face
labeling
sense organs
target
Prior art date
Legal status
Pending
Application number
CN201711394859.5A
Other languages
Chinese (zh)
Inventor
刘耀勇
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711394859.5A
Publication of CN108021905A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an image processing method, device, terminal device and storage medium. The method includes: acquiring the facial features of a target face image; labeling the target face image according to the facial features; and inputting the labeled target face image into a training model to adjust the facial features of the target face image. By inputting a face image that carries facial feature labels into the training model to adjust its facial features, the image processing method provided by the embodiments of the present application not only improves the efficiency of picture processing but also improves the accuracy of facial feature adjustment.

Description

Picture processing method and device, terminal equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a picture processing method and device, terminal equipment and a storage medium.
Background
With the development of mobile communication technology and the popularization of intelligent terminal devices, intelligent mobile terminals have become indispensable tools in people's lives. Terminal devices not only take photographs but also process pictures; for example, a face in a picture may be subjected to skin smoothing, whitening, blemish removal, face slimming, eye enlargement and eye brightening.
In the related art, when the face region of a picture is processed, the user must manually select each processing function. When many functions are available and the user wants to apply several of them to a loaded picture, each operation has to be performed by hand, so the procedure is cumbersome, and the result is sometimes unnatural and falls short of the effect the user wants. Existing picture processing methods therefore have certain defects.
Disclosure of Invention
The embodiment of the application provides a picture processing method and device, terminal equipment and a storage medium, so as to improve the picture processing efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring the facial features of a target face image;
labeling the target face image according to the facial features;
and inputting the labeled target face image into a training model to adjust the facial features of the target face image.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, where the apparatus includes:
the facial feature acquisition module is used for acquiring the facial features of a target face image;
the target face image labeling module is used for labeling the target face image according to the facial features;
and the facial features adjusting module is used for inputting the labeled target face image into the training model and adjusting the facial features of the target face image.
In a third aspect, an embodiment of the present application further provides a terminal device, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application further provides a storage medium on which a computer program is stored, where the computer program is executed by a processor to implement the method according to the first aspect.
According to the embodiments of the present application, the facial features of the target face image are first acquired, the target face image is then labeled according to those features, and the labeled image is finally input into a training model that adjusts the facial features of the target face image. Because the face image fed to the training model already carries facial feature labels, the method improves both the efficiency of picture processing and the accuracy of facial feature adjustment.
Drawings
Fig. 1 is a flowchart of a picture processing method in an embodiment of the present application;
Fig. 2 is a flowchart of another picture processing method in an embodiment of the present application;
Fig. 3 is a flowchart of another picture processing method in an embodiment of the present application;
Fig. 4 is a flowchart of another picture processing method in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a picture processing apparatus in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a terminal device in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another terminal device in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a picture processing method according to an embodiment of the present application, where the present embodiment is applicable to a case of processing a picture including a face image, and the method may be executed by a picture processing apparatus, and the apparatus may be integrated in a terminal device such as a mobile phone or a tablet. As shown in fig. 1, the method includes the following steps.
Step 110: acquire the facial features of the target face image.
The target face image may contain at least one face and may be an image captured by the local terminal device, downloaded from a network database, or received through social software from another terminal device. The facial features are the parts of the human face: ears, eyebrows, eyes, nose and lips. The features may include both the positions and the attributes of these parts; in this application scenario only the eyebrows, eyes, nose and lips need to be considered, and the ears can be ignored. The position of a feature is its location relative to the whole face. The attributes characterize the size and type of each feature, where the size is determined by the ratio of the feature to the face, and the type can be determined with existing classification schemes. For example, eyebrow shapes may include crescent eyebrows, straight eyebrows, triangular eyebrows, willow-leaf eyebrows, splayed eyebrows, sword eyebrows and the like, and eyebrows may be dense or sparse; eye types may include peach-blossom eyes, phoenix eyes, sleeping-phoenix eyes, willow-leaf eyes, apricot eyes, fox eyes, copper-bell eyes, longan eyes, fawn eyes and the like, and eyes may be large, medium or small; nose types may include ultra-narrow, medium, broad and ultra-broad noses, and the nose bridge may be high or flat; lip characteristics may include thin lips, wide lips, large lips, downturned mouth corners and other lip types.
In this embodiment, the positions of the facial features may be obtained with a localization algorithm from the related art, such as the Supervised Descent Method (SDM), the Active Appearance Model (AAM) algorithm or the Active Shape Model (ASM) algorithm. The attributes of the facial features may be obtained by comparing each feature of the face image with a standard template. Illustratively, recognition and analysis of a certain target face image may yield the following facial features: thick willow-leaf eyebrows, medium-sized phoenix eyes, a high nose bridge, a medium nose and thin lips.
In this embodiment, acquiring the facial features of the target face image may proceed as follows: perform face recognition on the target face image and extract the region containing the face; recognize the facial features in that region to obtain the position of each feature; and analyze each feature, or compare it with a preset template, to obtain its attributes. Together, the positions and attributes constitute the facial features of the target face image.
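The embodiment leaves the concrete localization step open (SDM, AAM, ASM and the like). As a minimal sketch of how the facial feature positions might be obtained in practice, the following uses the dlib library and its 68-point landmark model; both are assumptions of this example rather than part of the embodiment:

```python
import dlib

# a minimal sketch: locate facial feature positions with dlib's 68-point
# landmark predictor (the .dat model file must be downloaded separately)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# conventional index ranges of the 68-point model for each facial feature
REGIONS = {
    "eyebrows": range(17, 27),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "lips": range(48, 68),
}

def locate_features(image):
    """Return {feature name: [(x, y), ...]} for the first detected face."""
    faces = detector(image, 1)  # upsample once to help with small faces
    if not faces:
        return {}
    shape = predictor(image, faces[0])
    return {name: [(shape.part(i).x, shape.part(i).y) for i in idx]
            for name, idx in REGIONS.items()}

# usage: image = dlib.load_rgb_image("target_face.jpg")
```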
Step 120: label the target face image according to the facial features.
In this embodiment, labeling the target face image means marking its facial features in the image. This may be done by inputting the target face image into a facial feature labeling neural network, which marks the features on the image, or manually at the pixel level. The facial feature labeling neural network may be any existing neural network with a facial feature labeling function. In the manual approach, the features are first located with a localization algorithm from the related art to obtain and mark their positions, and the features of the target face image are then compared with templates to obtain their attributes, which are likewise marked in the image.
Specifically, after the facial features of the target face image are obtained, the image is labeled accordingly. Illustratively, if the facial features of the target face image are thick willow-leaf eyebrows, medium-sized phoenix eyes, a high nose bridge, a medium nose and thin lips, these features are marked on the target face image.
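The embodiment does not prescribe any storage format for these labels. A hypothetical sketch of an annotation record for the worked example above follows; every field name and file name in it is an illustrative assumption, not part of the embodiment:

```python
import json

# a hypothetical annotation record for the worked example in the text;
# the schema and file names are illustrative assumptions
annotation = {
    "image": "target_face.jpg",
    "facial_features": {
        "eyebrows": {"type": "willow-leaf", "density": "thick"},
        "eyes": {"type": "phoenix", "size": "medium"},
        "nose": {"type": "medium", "bridge": "high"},
        "lips": {"type": "thin"},
    },
    # positions as produced by the localization step (landmark points)
    "positions": {"eyes": [[120, 150], [180, 150]]},
}

with open("target_face_labels.json", "w") as f:
    json.dump(annotation, f, indent=2)
```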
Step 130: input the labeled target face image into the training model and adjust the facial features of the target face image.
The training model is a model for adjusting the facial features of a face image, obtained by continuous training on a sample set using a set machine learning algorithm. In this embodiment, the training model adjusts the facial features of a face image after those features have been labeled.
In this embodiment, the training model may adjust the target face image as follows: after the labeled target face image is input, the model analyzes the facial features and adjusts the eyebrows, eyes, nose and lips according to their attributes, positions and proportions relative to the whole face, and finally outputs the adjusted face image. For example, if the eyes appear small relative to the whole face, they are enlarged to suit the face; if the nose bridge is relatively low, it is raised appropriately; and so on.
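The embodiment only requires a trained model. As a sketch of the inference step, one might assume a convolutional network that takes the three RGB channels plus one channel rasterizing the labels and outputs the adjusted image; the architecture and tensor shapes below are assumptions of this example, not the patent's model:

```python
import torch
import torch.nn as nn

# a hypothetical stand-in for the trained adjustment model:
# labeled face image in, adjusted face image out
model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
model.eval()

# 3 RGB channels + 1 channel encoding the facial feature labels
labeled = torch.rand(1, 4, 256, 256)
with torch.no_grad():
    adjusted = model(labeled)  # adjusted face image, shape (1, 3, 256, 256)
```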
According to the technical solution of this embodiment, the facial features of the target face image are first acquired, the image is labeled according to those features, and the labeled image is finally input into the training model, which adjusts the facial features. Feeding a facial-feature-labeled face image to the training model improves both the efficiency of picture processing and the accuracy of facial feature adjustment.
Optionally, after the target face image is labeled according to the facial features, the method further includes the following steps: acquiring the face shape feature of the target face image; and labeling the target face image according to the face shape feature.
The face shape is the outline of the face, and face shape features may include round, rectangular, square, triangular and melon-seed (oval) faces. The face shape feature of the target face image may be obtained by analyzing the face contour in the image or by comparing the face with standard face templates. Once obtained, the face shape feature is marked in the target face image; for example, if the target face is round, 'round face' is labeled in the image. Finally, the target face image labeled with both the facial features and the face shape feature is input into the training model to adjust the facial features of the target face image.
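The embodiment does not fix how the contour is analyzed. A minimal sketch, assuming the face shape is classified from the width-to-height ratio of the detected face contour against illustrative template ranges; the thresholds are assumptions of this example, not values from the embodiment:

```python
import numpy as np

# assumed illustrative width/height ratio ranges for each template
FACE_SHAPE_TEMPLATES = {
    "round": (0.95, 1.10),
    "square": (0.85, 0.95),
    "melon-seed (oval)": (0.70, 0.85),
}

def classify_face_shape(contour: np.ndarray) -> str:
    """contour: (N, 2) array of (x, y) points on the face outline."""
    xs, ys = contour[:, 0], contour[:, 1]
    ratio = (xs.max() - xs.min()) / (ys.max() - ys.min())
    for shape, (lo, hi) in FACE_SHAPE_TEMPLATES.items():
        if lo <= ratio < hi:
            return shape
    return "rectangular"  # fallback for ratios outside the listed ranges
```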
According to this technical solution, the face shape feature of the target face image is acquired and the image is labeled accordingly. Labeling the target face image with both the facial features and the face shape feature before inputting it into the training model improves the accuracy with which the facial features of the face image are adjusted.
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present disclosure. As shown in fig. 2, the method includes the following steps.
Step 210: acquire a face image set.
The face image set may be obtained from a network database and may consist of a large number of beautified celebrity portrait pictures. In this embodiment, the set may be obtained by retrieving pictures under a face image category in a network database and downloading face images in batches, or by separately searching the photo collections of celebrities with different facial features and downloading a preset number of pictures from each. Illustratively, 10 photos of each of 100 female celebrities and 100 male celebrities may be downloaded to form the face image set.
Step 220: label the face image set according to the facial features to obtain a face image sample set.
The face image set may be labeled according to the facial features either by inputting the set into a facial feature labeling neural network, which marks the features on the images, or manually at the pixel level.
The facial feature labeling neural network may be any existing neural network with a facial feature labeling function. In the manual approach, the features in each picture of the set are located with a localization algorithm from the related art and labeled at the pixel level, and the features are compared with templates to obtain their attributes, which are then labeled in the images. In this embodiment, every image in the face image set needs to be labeled, and each labeled face image can serve as one sample.
Specifically, after the face image set is obtained, each face image is labeled according to its facial features, forming the face image sample set.
Step 230: train the training model on the face image sample set based on a set machine learning algorithm.
In this embodiment, after the face image sample set is obtained, the training model is trained based on a set machine learning algorithm so that it learns the positional relationships among the facial features and the combinations of their attributes in the sample set. The trained model can then adjust the facial features of a face image according to those features, thereby beautifying the image. Once training succeeds, the model can be used to process face images.
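The machine learning algorithm is left open by the embodiment. A minimal supervised sketch follows, assuming each sample pairs a facial-feature-labeled input with a beautified target image; the random tensors are placeholders for the sample set, and the tiny network stands in for the real model:

```python
import torch
import torch.nn as nn

# stand-in for the training model (labeled face in, adjusted face out)
model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# placeholder sample set: labeled inputs and their beautified targets
inputs = torch.rand(8, 4, 256, 256)
targets = torch.rand(8, 3, 256, 256)

model.train()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # pixel-wise reconstruction loss
    loss.backward()
    optimizer.step()
```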
Step 240: acquire the facial features of the target face image.
Step 250: label the target face image according to the facial features.
Step 260: input the labeled target face image into the training model and adjust the facial features of the target face image.
Optionally, after the face image set is acquired, the method further includes the following step: acquiring the face shape features of the face image set.
In this embodiment, the face shape features of the face image set may be obtained by analyzing the face contour of each face image, or by comparing the face in each image of the set with a standard template.
Optionally, labeling the face image set according to the facial features to obtain the face image sample set may be implemented as follows: labeling the face image set according to both the facial features and the face shape features to obtain the face image sample set.
In this embodiment, after the facial features and face shape features of each image in the face image set are obtained, each face image in the set is labeled with both, yielding the face image sample set. The training model is then trained on this sample set based on a set machine learning algorithm, so that it can adjust the facial features of a face image according to both its facial features and its face shape. Labeling the face image set with face shape features has the advantage that, when the training model adjusts the facial features, both the facial features and the face shape serve as reference, which improves the reliability of the model's image processing.
According to the technical solution of this embodiment, a face image set is acquired and labeled according to the facial features to obtain a face image sample set, and the training model is trained on that sample set based on a set machine learning algorithm. Labeling the acquired face image set to form the sample set and training the model on it improves the accuracy with which the training model processes pictures.
Fig. 3 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3, the method includes the following steps.
Step 310: acquire the facial features of the target face image.
Step 320: label the target face image according to the facial features.
Step 330: input the labeled target face image into the training model and adjust the facial features of the target face image.
Step 340: acquire the exposure parameter of the target face image.
The exposure parameter may be the exposure used when the camera captured the target face image; once the camera finishes taking the picture, the exposure of the image is fixed. In this embodiment, the exposure parameter of the target face image may be obtained by looking it up among the attribute parameters of the image.
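One plausible source of these attribute parameters is the image's EXIF data. A minimal sketch using Pillow follows; the file name is a placeholder, and the embodiment does not mandate EXIF specifically:

```python
from PIL import Image

# a minimal sketch: read the exposure time the camera recorded in EXIF
img = Image.open("target_face.jpg")
exif = img.getexif()
exif_ifd = exif.get_ifd(0x8769)       # Exif sub-IFD with capture settings
exposure_time = exif_ifd.get(0x829A)  # tag 0x829A = ExposureTime (seconds)
print("exposure parameter:", exposure_time)  # None if the tag is absent
```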
Step 350: adjust the facial skin color of the target face image according to the exposure parameter.
Optionally, the facial skin color of the target face image may be adjusted according to the exposure parameter by acquiring the skin color parameter of the target face image, determining a skin color correction parameter from the skin color parameter and the exposure parameter, and adjusting the facial skin color according to the correction parameter.
The skin color parameter may be the color values of the pixels in the skin region of the target face image, i.e. their RGB values. It may be obtained by identifying the skin region of the image, collecting the color data of the pixels in that region and computing their average, which yields the skin color parameter of the target face image.
The correction parameter is the skin color the target face image should have after adjustment. It may be determined from the skin color parameter and the exposure parameter by looking up, in a preset mapping table, the preset skin color parameter corresponding to the exposure parameter, and then deriving the correction parameter from the preset skin color parameter and the image's own skin color parameter. For example, the correction parameter may be the preset skin color parameter itself, or a value between the preset parameter and the image's parameter, such as their average or a weighted combination of the two. The preset mapping table may be a correspondence table between exposure parameters and skin color parameters obtained by analyzing a large number of beautified pictures.
Specifically, after the exposure parameter of the target face image is obtained, the skin color parameter is acquired, the correction parameter is determined from the two, and the skin color of the target face image is finally adjusted according to the correction parameter.
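A sketch of this step, with the correction parameter computed as a weighted value between the preset and measured skin color parameters as described above; the mapping table entries and the weight are illustrative assumptions:

```python
import numpy as np

# hypothetical preset mapping from exposure buckets to mean skin RGB,
# of the kind obtained by analyzing many beautified pictures
PRESET_SKIN = {
    "low": (190.0, 150.0, 130.0),
    "normal": (210.0, 170.0, 150.0),
    "high": (225.0, 185.0, 165.0),
}

def correct_skin(image: np.ndarray, skin_mask: np.ndarray,
                 exposure_bucket: str, weight: float = 0.5) -> np.ndarray:
    """Shift skin pixels toward the preset skin color for this exposure.

    image: (H, W, 3) uint8 RGB; skin_mask: (H, W) bool from skin detection.
    """
    measured = image[skin_mask].mean(axis=0)       # current mean skin RGB
    preset = np.array(PRESET_SKIN[exposure_bucket])
    correction = weight * preset + (1 - weight) * measured
    out = image.astype(np.float32)
    out[skin_mask] += correction - measured        # move toward correction
    return np.clip(out, 0, 255).astype(np.uint8)
```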
According to the technical solution of this embodiment, the skin color parameter of the target face image is acquired, the skin color correction parameter is determined from the skin color parameter and the exposure parameter, and the facial skin color of the target face image is adjusted according to the correction parameter. Adjusting the skin color after the facial features have been adjusted improves the beautifying effect of the picture.
Fig. 4 is a flowchart of another image processing method according to an embodiment of the present application. As a further explanation of the above embodiment, as shown in fig. 4, the method comprises the following steps.
Step 401: acquire a face image set, and acquire the facial features and face shape features of the face image set.
Step 402: label the face image set according to the facial features and face shape features to obtain a face image sample set.
Step 403: train the training model on the face image sample set based on a set machine learning algorithm.
Step 404: acquire the facial features and face shape features of the target face image.
Step 405: label the target face image according to the facial features and face shape features.
Step 406: input the labeled target face image into the training model and adjust the facial features of the target face image.
Step 407: acquire the exposure parameter and skin color parameter of the target face image.
Step 408: determine a skin color correction parameter from the skin color parameter and the exposure parameter.
Step 409: adjust the facial skin color of the target face image according to the skin color correction parameter.
According to this technical solution, the target face image labeled with facial features and face shape features is input into the trained model to adjust its facial features, and the facial skin color is then adjusted according to the exposure parameter and skin color parameter of the target face image. This beautifies the target face image and improves the efficiency of picture processing.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: a facial feature acquisition module 510, a target face image labeling module 520 and a facial feature adjustment module 530.
a facial feature acquisition module 510, configured to acquire the facial features of a target face image;
a target face image labeling module 520, configured to label the target face image according to the facial features;
and a facial feature adjustment module 530, configured to input the labeled target face image into the training model and adjust the facial features of the target face image.
Optionally, the apparatus further includes:
a face shape feature acquisition module, configured to acquire the face shape feature of the target face image;
and a face shape feature labeling module, configured to label the target face image according to the face shape feature.
Optionally, the apparatus further includes:
a face image set acquisition module, configured to acquire a face image set;
a face image sample set acquisition module, configured to label the face image set according to the facial features to obtain a face image sample set;
and a training model training module, configured to train the training model on the face image sample set based on a set machine learning algorithm.
Optionally, the face image sample set acquisition module is further configured to:
input the face image set into a facial feature labeling neural network to label the facial features of the face image set; or
perform facial feature labeling on the face image set manually at the pixel level.
Optionally, the apparatus further includes:
a module for acquiring the face shape features of the face image set;
correspondingly, the face image sample set acquisition module is further configured to:
label the face image set according to the facial features and the face shape features to obtain the face image sample set.
Optionally, the apparatus further includes:
an exposure parameter acquisition module, configured to acquire the exposure parameter of the target face image;
and a facial skin color adjustment module, configured to adjust the facial skin color of the target face image according to the exposure parameter.
Optionally, the facial skin color adjustment module is further configured to:
acquire the skin color parameter of the target face image;
determine a skin color correction parameter from the skin color parameter and the exposure parameter;
and adjust the facial skin color of the target face image according to the skin color correction parameter.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 600 comprises a memory 601 and a processor 602, wherein the processor 602 is configured to perform the following steps:
acquiring the facial features of a target face image;
labeling the target face image according to the facial features;
and inputting the labeled target face image into a training model to adjust the facial features of the target face image.
Fig. 7 is a schematic structural diagram of another terminal device provided in an embodiment of the present application. As shown in fig. 7, the terminal may include: a housing (not shown), a memory 701, a Central Processing Unit (CPU) 702 (also called a processor, hereinafter referred to as CPU), a computer program stored in the memory 701 and operable on the processor 702, a circuit board (not shown), and a power circuit (not shown). The circuit board is arranged in a space enclosed by the shell; the CPU702 and the memory 701 are provided on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the memory 701 is used for storing executable program codes; the CPU702 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 701.
The terminal further comprises: peripheral interfaces 703, RF (Radio Frequency) circuitry 705, audio circuitry 706, speakers 711, power management chip 708, input/output (I/O) subsystems 709, touch screen 712, other input/control devices 710, and external port 704, which communicate over one or more communication buses or signal lines 707.
It should be understood that the illustrated terminal device 700 is merely one example of a terminal, and that the terminal device 700 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The terminal device for picture processing provided by this embodiment is described in detail below, taking a smartphone as an example.
The memory 701 may be accessed by the CPU 702, the peripheral interface 703 and the like. The memory 701 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
A peripheral interface 703, said peripheral interface 703 may connect input and output peripherals of the device to the CPU702 and the memory 701.
An I/O subsystem 709, which I/O subsystem 709 may connect input and output peripherals on the device, such as a touch screen 712 and other input/control devices 710, to the peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling other input/control devices 710. Where one or more input controllers 7092 receive electrical signals from or transmit electrical signals to other input/control devices 710, the other input/control devices 710 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is worth noting that the input controller 7092 may be connected to any one of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
The touch screen 712 may be a resistive type, a capacitive type, an infrared type, or a surface acoustic wave type, according to the operation principle of the touch screen and the classification of a medium for transmitting information. Classified by the installation method, the touch screen 712 may be: external hanging, internal or integral. Classified according to technical principles, the touch screen 712 may be: a vector pressure sensing technology touch screen, a resistive technology touch screen, a capacitive technology touch screen, an infrared technology touch screen, or a surface acoustic wave technology touch screen.
A touch screen 712, the touch screen 712 being an input interface and an output interface between the user terminal and the user, displaying visual output to the user, which may include graphics, text, icons, video, and the like. Optionally, the touch screen 712 sends an electrical signal (e.g., an electrical signal of the touch surface) triggered by the user on the touch screen to the processor 702.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from the touch screen 712 or transmits electrical signals to the touch screen 712. The touch screen 712 detects a contact on the touch screen, and the display controller 7091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 712, i.e., implements a human-computer interaction, and the user interface object displayed on the touch screen 712 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 705 is mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side) and to receive and transmit data between the mobile phone and the wireless network, such as sending and receiving short messages and e-mails.
The audio circuit 706 is mainly used to receive audio data from the peripheral interface 703, convert the audio data into an electric signal, and transmit the electric signal to the speaker 711.
The speaker 711 is used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 705 into sound and play the sound to the user.
And a power management chip 708 for supplying power and managing power to the hardware connected to the CPU702, the I/O subsystem, and the peripheral interface.
In this embodiment, the central processor 702 is configured to:
acquiring the facial features of a target face image;
labeling the target face image according to the facial features;
and inputting the labeled target face image into a training model to adjust the facial features of the target face image.
Further, after the target face image is labeled according to the facial features, the method further includes:
acquiring the face shape characteristics of the target face image;
and labeling the target face image according to the face shape characteristics.
Further, before the facial features of the target face image are acquired, the method further includes:
acquiring a face image set;
labeling the face image set according to the facial features to obtain a face image sample set;
and training the training model based on a set machine learning algorithm according to the face image sample set.
Further, labeling the face image set according to the facial features includes:
inputting the face image set into a facial feature labeling neural network to label the facial features of the face image set; or
performing facial feature labeling on the face image set manually at the pixel level.
Further, after acquiring the face image set, the method further includes:
acquiring the face shape characteristics of the face image set;
correspondingly, labeling the face image set according to the facial features to obtain a face image sample set includes:
labeling the face image set according to the facial features and the face shape characteristics to obtain the face image sample set.
Further, after the facial features of the target face image are adjusted, the method further includes:
acquiring exposure parameters of the target face image;
and adjusting the face complexion of the target face image according to the exposure parameters.
Further, the adjusting the face skin color of the target face image according to the exposure parameter includes:
obtaining skin color parameters of the target face image;
determining a skin color correction parameter according to the skin color parameter and the exposure parameter;
and adjusting the face complexion of the target face image according to the complexion correction parameter.
An embodiment of the present application further provides a storage medium containing instructions executable by a terminal device; when executed by a processor of the terminal device, the instructions cause the picture processing method to be performed.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided in the embodiments of the present application and containing computer-executable instructions is not limited to the above-described image processing operations, and may also perform related operations in the image processing method provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring the facial features of a target face image;
labeling the target face image according to the facial features;
and inputting the labeled target face image into a training model to adjust the facial features of the target face image.
2. The method of claim 1, further comprising, after labeling the target face image according to the facial features:
acquiring the face shape characteristics of the target face image;
and labeling the target face image according to the face shape characteristics.
3. The method of claim 1, further comprising, before acquiring the facial features of the target face image:
acquiring a face image set;
labeling the face image set according to the facial features to obtain a face image sample set;
and training the training model based on a set machine learning algorithm according to the face image sample set.
4. The method of claim 3, wherein labeling the face image set according to the facial features comprises:
inputting the face image set into a facial feature labeling neural network to label the facial features of the face image set; or
performing facial feature labeling on the face image set manually at the pixel level.
5. The method of claim 3, further comprising, after acquiring the set of face images:
acquiring the face shape characteristics of the face image set;
correspondingly, labeling the face image set according to the facial features to obtain a face image sample set comprises:
labeling the face image set according to the facial features and the face shape characteristics to obtain the face image sample set.
6. The method of claim 1, further comprising, after the facial features of the target face image are adjusted:
acquiring exposure parameters of the target face image;
and adjusting the face complexion of the target face image according to the exposure parameters.
7. The method of claim 6, wherein the adjusting the face skin color of the target face image according to the exposure parameter comprises:
obtaining skin color parameters of the target face image;
determining a skin color correction parameter according to the skin color parameter and the exposure parameter;
and adjusting the face complexion of the target face image according to the complexion correction parameter.
8. A picture processing apparatus, comprising:
a facial feature acquisition module, configured to acquire the facial features of a target face image;
a target face image labeling module, configured to label the target face image according to the facial features;
and a facial feature adjustment module, configured to input the labeled target face image into a training model and adjust the facial features of the target face image.
9. A terminal device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-7 when executing the computer program.
10. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201711394859.5A 2017-12-21 2017-12-21 image processing method, device, terminal device and storage medium Pending CN108021905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711394859.5A CN108021905A (en) 2017-12-21 2017-12-21 image processing method, device, terminal device and storage medium


Publications (1)

Publication Number Publication Date
CN108021905A true CN108021905A (en) 2018-05-11

Family

ID=62074379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711394859.5A Pending CN108021905A (en) 2017-12-21 2017-12-21 image processing method, device, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN108021905A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN108985215A (en) * 2018-07-09 2018-12-11 Oppo(重庆)智能科技有限公司 A kind of image processing method, picture processing unit and terminal device
CN110136054A (en) * 2019-05-17 2019-08-16 北京字节跳动网络技术有限公司 Image processing method and device
CN110136054B (en) * 2019-05-17 2024-01-09 北京字节跳动网络技术有限公司 Image processing method and device
CN110927806A (en) * 2019-10-29 2020-03-27 清华大学 Magnetotelluric inversion method and magnetotelluric inversion system based on supervised descent method
CN110927806B (en) * 2019-10-29 2021-03-23 清华大学 Magnetotelluric inversion method and magnetotelluric inversion system based on supervised descent method
CN110866469A (en) * 2019-10-30 2020-03-06 腾讯科技(深圳)有限公司 Human face facial features recognition method, device, equipment and medium
CN111598818A (en) * 2020-04-17 2020-08-28 北京百度网讯科技有限公司 Face fusion model training method and device and electronic equipment
US11830288B2 (en) 2020-04-17 2023-11-28 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for training face fusion model and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180511