WO2021120626A1 - Image processing method, terminal and computer storage medium - Google Patents

Image processing method, terminal and computer storage medium

Info

Publication number
WO2021120626A1
Authority
WO
WIPO (PCT)
Prior art keywords
portrait
image
processing
feature
processed
Prior art date
Application number
PCT/CN2020/104638
Other languages
English (en)
Chinese (zh)
Inventor
冯少江
吕乐
Original Assignee
上海传英信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 上海传英信息技术有限公司
Publication of WO2021120626A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof

Definitions

  • This application relates to the field of terminals, in particular to an image processing method, terminal and computer storage medium.
  • Smart terminals such as smart phones are widely used to take pictures.
  • Smart terminals have replaced cameras as the mainstream tool for ordinary users to take pictures.
  • The beauty function has gradually become a standard feature of camera applications in smart terminals.
  • In beauty mode, however, the smart terminal usually applies a unified set of beauty parameters to different users, or applies unified beauty parameters to the entire portrait.
  • The beauty effect required for a particular feature may differ from that of other features; if these effects are not distinguished, the user experience may suffer.
  • The purpose of this application is to provide an image processing method, terminal, and computer storage medium that improve the user experience by performing personalized beauty processing according to the characteristics of the user.
  • This application first provides an image processing method, which is applied to a terminal, and the method includes:
  • The features include at least one of the following: gender, face shape, eyes, race, and age.
  • The performing of beauty processing on the region corresponding to the at least one portrait feature includes:
  • adopting a beauty solution corresponding to the at least one portrait feature to perform beauty processing on the region corresponding to the at least one portrait feature, which includes:
  • receiving a selection of at least one beauty solution, and performing beauty processing on the region corresponding to the at least one portrait feature according to the selected beauty solution.
  • The performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on that region includes at least one of the following processes:
  • if the face shape is a round face, performing face-slimming beauty processing on the face area of the portrait in the image to be processed;
  • if the eyes are single eyelids, performing beauty processing on the eye area of the portrait in the image to be processed;
  • if the age is elderly, performing wrinkle-preserving processing on the facial area of the portrait in the image to be processed;
  • if the age is young, performing lip-color rejuvenation processing on the lip region of the portrait in the image to be processed;
  • The performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on that region includes:
  • performing beauty-reduction processing on the region corresponding to the at least one portrait feature, which includes:
  • gradually reducing the corresponding beauty level when performing beauty processing from the edge to the center of the region corresponding to the at least one portrait feature;
  • restoring the state of the region corresponding to the at least one portrait feature in the image to be processed after beauty processing to its state before the beauty processing.
  • the performing feature recognition on the portrait in the to-be-processed image and acquiring at least two portrait features in the to-be-processed image includes:
  • the neural network model is obtained by training based on historical images and corresponding historical portrait features.
  • The terminal includes a processor and a memory for storing a program; when the program is executed by the processor, the processor implements the image processing method described above.
  • the present application also provides a computer storage medium that stores a computer program, and when the computer program is executed by a processor, the image processing method described above is implemented.
  • The image processing method, terminal, and computer storage medium of the present application acquire at least two portrait features obtained by feature recognition of a portrait in an image to be processed, and perform beauty processing on the region corresponding to at least one portrait feature and/or refrain from performing beauty processing on the region corresponding to at least one portrait feature, so as to implement personalized beauty processing according to the user's characteristics, thereby improving the user experience.
  • The image processing method of the present application detects whether the image to be processed contains a preset feature identifier, and determines at least two portrait features in the image according to the detection result, which enables rapid and accurate feature recognition of the portrait in the image to be processed.
  • The image processing method provided by the present application handles the region corresponding to the portrait feature with different protection strategies during beauty processing, so as to realize protection of that region.
  • The image processing method provided by the present application displays the region or regions corresponding to at least one portrait feature, so that the user can select one or several of them for beauty processing; this is flexible to operate and further improves the user experience.
  • The image processing method provided by this application offers users a variety of beauty solutions to choose from, which is flexible and convenient and further improves the user experience.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of this application.
  • FIG. 2 is a schematic structural diagram of a terminal provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the process of performing gender recognition based on face region data in an embodiment of the application.
  • FIG. 5 is a schematic diagram of a male beard area after beauty protection is applied in an embodiment of the application.
  • FIG. 6 is a schematic diagram of a woman's eyebrow-mole and nose-decoration areas after beauty protection is applied in an embodiment of the application.
  • an image processing method provided by an embodiment of this application is applicable to the situation of performing beautification processing on an image.
  • the image processing method can be performed by an image processing apparatus provided by an embodiment of the application.
  • the image processing device may be implemented in software and/or hardware.
  • the image processing device may be a terminal such as a smart phone, a personal digital assistant, or a tablet computer.
  • the image processing method includes the following steps:
  • Step S101: the terminal obtains an image to be processed.
  • The image to be processed is a single-frame image.
  • The image to be processed may be a preview image collected by the terminal through a camera device such as a camera.
  • Taking the terminal as a mobile phone as an example, after the camera application is opened and switched to beauty mode, the phone uses the preview image collected by the camera as the image to be processed.
  • The preview image is displayed on the shooting preview interface of the phone's camera application. That is, after receiving the camera start instruction, the terminal switches to beauty mode and collects the preview image captured by the camera as the image to be processed. If the phone is photographing a person, an image including that person is displayed on the shooting preview interface of the camera application.
  • Step S102 The terminal performs feature recognition on the portrait in the image to be processed, and acquires at least two portrait features in the image to be processed;
  • the terminal performs feature recognition on the portrait in the image to be processed obtained in step S101, so as to correspondingly acquire at least two portrait features in the image to be processed.
  • The features include at least one of the following: gender, face shape, eyes, race, and age.
  • Race can refer to classification by skin color (for example the yellow, white, black, and brown races) or to classification by region (for example Asians, Europeans, Africans, and Americans).
  • Age can refer to an age category, such as young or elderly.
  • The terminal may first obtain the face image in the image to be processed by performing face detection, and then perform feature recognition on the portrait based on that face image.
  • The recognition may be performed with a feature-recognition model established on the basis of an artificial-intelligence algorithm, or by detecting the identifying marks of the portrait.
  • the terminal may collect the voice of the photographer, and obtain the gender recognition result of the voice of the photographer, so as to correspondingly realize the gender recognition of the portrait in the image to be processed.
  • the terminal may also determine the portrait feature in the image to be processed according to the features input by the user.
  • Step S103: The terminal performs beauty processing on the region corresponding to at least one portrait feature and/or does not perform beauty processing on the region corresponding to at least one portrait feature.
  • The way beauty processing is applied to the image to be processed may differ, and correspondingly the portrait-feature regions that need to be beautified or left untouched may differ.
  • If the gender is male, the region corresponding to the portrait feature may include a beard area; and/or, if the gender is female, the region may include an eyebrow-mole area and/or a nose-decoration area.
  • For a round face, the face area of the portrait may be used as the region corresponding to the face-shape feature; for single eyelids, the eye area of the portrait may be used as the region corresponding to the eye feature.
  • Performing beauty processing on the region corresponding to the at least one portrait feature may mean performing beauty-reduction processing on that region while beautifying the image to be processed, or adopting a beauty solution corresponding to the at least one portrait feature to process that region. Understandably, if beauty-reduction processing is performed on the region, then after beauty processing of the image is completed, the appearance of the region will differ from its previous appearance.
  • However, the degree of change in the region corresponding to the at least one portrait feature will still be weaker than that of other regions; and if a beauty solution corresponding to the portrait feature is applied to the feature region, the result after processing may better match the user's intent.
  • The beauty-reduction processing on the region corresponding to the at least one portrait feature may include: gradually reducing the corresponding beauty level when processing from the edge to the center of the region; or not performing beauty processing on the region at all; or restoring the state of the region in the processed image to its state before the beauty processing.
  • Taking dermabrasion (skin smoothing) as an example of the beauty processing, when dermabrasion is applied to a male beard area, the level of the dermabrasion can be reduced.
  • Restoring the state of the region corresponding to the at least one portrait feature to its state before beauty processing can be understood as follows: after beauty processing is performed on the image to be processed, the region corresponding to the portrait feature is restored to its pre-processing state, so that effectively no beauty processing is applied to that region and local details are protected.
  • If beauty processing is not performed on the region corresponding to the portrait feature, then after the image is processed the appearance of that region remains consistent with its previous appearance. In this way, by handling the region corresponding to the portrait feature with different protection strategies, protection of that region is realized.
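The edge-to-center reduction and the restore-to-original strategy described above can both be expressed as a blend between the beautified result and the original pixels. A minimal numpy sketch, assuming the protected region is approximated by a circle; the patent fixes neither the mask shape nor the falloff curve, so both are illustrative:

```python
import numpy as np

def falloff_mask(h, w, center, radius):
    """Blend weight in [0, 1]: 1.0 at or beyond `radius` from the region
    center, falling linearly to 0.0 at the center itself."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    return np.clip(dist / radius, 0.0, 1.0)

def blend_with_protection(original, beautified, center, radius):
    """Apply the beautified result at full strength away from the protected
    region while keeping the original pixels at its center."""
    h, w = original.shape[:2]
    m = falloff_mask(h, w, center, radius)[..., None]  # broadcast over channels
    return (m * beautified + (1.0 - m) * original).astype(original.dtype)
```

With `radius` set to the region's extent, pixels at the region center keep their pre-processing values, pixels outside receive the full beauty effect, and the beauty level decreases smoothly in between.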
  • In this way, the terminal acquires at least two portrait features obtained by feature recognition of the portrait in the image to be processed, and performs beauty processing on the region corresponding to at least one portrait feature and/or refrains from beauty processing on the region corresponding to at least one portrait feature, thereby implementing personalized beauty processing according to the user's characteristics and improving the user experience.
  • Performing feature recognition on the portrait in the to-be-processed image and acquiring at least two portrait features may include: inputting the image into a trained feature-recognition neural network model, and obtaining the at least two portrait features from the model's output, the model having been trained on historical images and their corresponding historical portrait features. Understandably, the terminal may pre-store such a model, obtained by training on historical images and portrait features with a neural network algorithm; when the image to be processed is fed to the model, its output is the corresponding feature-recognition result.
  • Inputting the image to be processed into the trained model may mean inputting the face data extracted from the image into the model.
  • The establishment and training of the feature-recognition neural network model can follow the prior art and are not repeated here.
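The patent defers the model's architecture and training to the prior art. Purely as an illustration of the inference contract (at least two features recovered from one forward pass), the following sketch uses hypothetical label sets and a stand-in `model` callable; none of these names come from the patent:

```python
from dataclasses import dataclass

# Hypothetical label sets: the patent names the kinds of features
# (gender, face shape, age, ...) but not the concrete classes.
GENDERS = ("male", "female")
FACE_SHAPES = ("round", "oval", "square")
AGES = ("young", "elderly")

@dataclass
class PortraitFeatures:
    gender: str
    face_shape: str
    age: str

def recognize_features(face_image, model) -> PortraitFeatures:
    """Run one forward pass of a trained feature-recognition model and
    take the highest-scoring class for each feature head."""
    scores = model(face_image)  # assumed to return per-feature score dicts
    return PortraitFeatures(
        gender=max(GENDERS, key=lambda c: scores["gender"][c]),
        face_shape=max(FACE_SHAPES, key=lambda c: scores["face_shape"][c]),
        age=max(AGES, key=lambda c: scores["age"][c]),
    )
```

Any model with this output shape (one score dict per feature) fits the method; the training itself follows standard supervised practice on the historical image/feature pairs.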
  • the terminal performing feature recognition on the portrait in the image to be processed, and acquiring at least two portrait features in the image to be processed includes:
  • Detecting whether the portrait in the image to be processed includes a preset feature identifier, and obtaining a corresponding detection result
  • the terminal detects whether the portrait in the image to be processed includes a preset feature identifier, and obtains a corresponding detection result, so as to obtain at least two portrait features in the image to be processed according to the detection result.
  • A preset feature identifier is a mark that can calibrate a certain feature of the user. Taking gender as an example, for males the identifier can be a beard, an Adam's apple, and so on; for females, it can be an eyebrow mole, or facial accessories such as nose decorations and veils.
  • When it is detected that the image to be processed contains a male identifier such as a beard, the gender of the portrait can be determined to be male; and when a female identifier such as an eyebrow mole or nose decoration is detected, the gender can be determined to be female.
  • Taking age as an example, the elderly and the young can be distinguished by detecting whether there are wrinkles on the face, and so on. In this way, by detecting whether the image to be processed contains a preset feature identifier and determining at least two portrait features according to the detection result, the features of the person in the image can be identified quickly and accurately.
  • Before the terminal performs feature recognition on the portrait in the image to be processed and acquires at least two portrait features, the method may further include: detecting whether the number of portraits in the image satisfies a preset quantity condition; if it does, performing the step of feature recognition and acquiring the at least two portrait features.
  • the preset quantity condition may be one or more portraits.
  • each portrait may contain different features, so that different beautification processing solutions need to be adopted for different portraits.
  • The image to be processed may simultaneously contain male and female faces; different portrait features are acquired for faces of different genders, and a corresponding beauty-processing method is adopted for each portrait-feature region.
  • The terminal may pre-store the correspondence between different portrait features and their beauty solutions.
  • For example, if the eyes are single eyelids, the corresponding beauty solution may be a danfeng ("phoenix-eye") beauty treatment; if the race is Asian, the corresponding beauty plan can be a yellow-skin beauty treatment, and so on.
  • The performing of beauty processing on the region corresponding to the at least one portrait feature and/or not performing beauty processing on that region includes at least one of the following: if the face shape is a round face, performing face-slimming beauty processing on the face area of the portrait in the image to be processed; if the eyes are single eyelids, performing beauty processing on the eye area; if the age is elderly, performing wrinkle-preserving processing on the facial area; if the age is young, performing lip-color rejuvenation processing on the lip area; if the gender is male, performing no beauty processing, or a corresponding beauty processing, on the beard area of the portrait; if the gender is female, performing no beauty processing, or a corresponding beauty processing, on the eyebrow-mole area and/or nose-decoration area of the portrait.
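The stored correspondence between portrait features and beauty solutions can be sketched as a simple lookup table. The key and plan names below are illustrative only; the patent requires such a correspondence to exist on the terminal but does not fix its format:

```python
# Hypothetical encoding of the correspondences listed above.
# None means the region is protected (no beauty processing).
BEAUTY_PLAN = {
    ("face_shape", "round"): "face_slimming",
    ("eyes", "single_eyelid"): "eye_beauty",
    ("age", "elderly"): "wrinkle_preserving",
    ("age", "young"): "lip_color_rejuvenation",
    ("gender", "male"): None,    # protect the beard area
    ("gender", "female"): None,  # protect eyebrow-mole / nose-decoration areas
}

def plans_for(features):
    """Map each recognized (feature, value) pair to its beauty plan;
    None marks a region to protect rather than process."""
    return {f: BEAUTY_PLAN.get(f) for f in features}
```

A recognized feature set then resolves directly to the per-region processing decisions of step S103.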
  • The following takes gender as the example feature to describe in detail the process of performing beauty processing according to the feature-recognition result.
  • For different features, the way beauty processing is performed on the region corresponding to the at least one portrait feature, and/or the region itself, is different.
  • Performing beauty processing may involve: performing feature-region recognition on the face in the image to be processed to obtain at least one facial feature region, the facial feature regions including at least one region of the five sense organs; determining the gender-corresponding area in the face according to the gender and the at least one facial feature region; and protecting the gender-corresponding area when beauty processing is applied to the image.
  • The terminal may use an existing facial feature-region recognition method to perform this recognition and obtain the at least one facial feature region.
  • The facial feature regions may include sense-organ regions such as the eyes, mouth, nose, ears, and eyebrows, as well as non-sense-organ regions such as the chin and forehead. Since the position of the region corresponding to a portrait feature is fixed relative to the facial feature regions, that region can be determined from the portrait-feature recognition result and the at least one facial feature region.
  • After determining the feature regions of the nose and mouth, i.e. their positions in the face, the terminal can obtain the beard area from the area below the mouth and the area between the upper lip and the bottom of the nose.
  • A beard is dark (for example, black) and therefore lower in brightness than the surrounding skin.
  • Accordingly, pixels in the area below the mouth and between the upper lip and the bottom of the nose whose brightness is lower than the average brightness of the face region, with a difference greater than a preset first threshold, are taken as beard pixels, and together they form the beard area.
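The brightness rule above translates directly into code. In this sketch the candidate strip (below the mouth, and between the upper lip and the nose) is assumed to be given as a boolean mask derived from the mouth/nose feature regions, and the concrete threshold value is an assumption; the patent only requires a "preset first threshold":

```python
import numpy as np

def beard_mask(gray_face, strip, first_threshold=30.0):
    """gray_face: 2-D brightness array of the whole face region.
    strip: boolean mask of the candidate area below the mouth and
    between the upper lip and the bottom of the nose.
    A pixel counts as beard when its brightness is below the face-region
    average by more than the preset first threshold."""
    face_mean = gray_face.mean()
    darker_enough = (face_mean - gray_face) > first_threshold
    return darker_enough & strip
```

The resulting boolean mask is the beard area that the gender-based protection strategy then exempts from (or weakens during) beauty processing.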
  • the terminal performing beauty processing on the area corresponding to the at least one portrait feature and/or not performing beauty processing on the area corresponding to the at least one portrait feature includes:
  • The terminal may determine multiple candidate portrait-feature regions according to the feature-recognition result, but the user may not want every candidate region treated as a target region for beauty processing; often only one or a few candidate regions need to be processed. Therefore, after determining at least one portrait-feature region, the terminal can display those regions so that the user can select one or several of them as target regions for beauty processing.
  • When the terminal displays the at least one portrait-feature region, it can simultaneously put each region in a selectable state, for example by highlighting its border.
  • In this way, the terminal displays at least one portrait-feature region and the user selects one or several of them for beauty processing, which is flexible to operate and further improves the user experience.
  • said adopting the beauty solution corresponding to the at least one portrait feature to perform beauty processing on the region corresponding to the at least one portrait feature includes:
  • a selection of the at least one beauty solution is received, and beauty processing is performed on the region corresponding to the at least one portrait feature according to the selected beauty solution.
  • The terminal may store at least one beauty solution. Accordingly, when the terminal needs to perform beauty processing on the region corresponding to a certain portrait feature, it can present the at least one beauty solution corresponding to that feature, and the user selects one or more of them to apply to the region. For example, taking the feature as a round face shape, the terminal may recommend various beauty solutions such as face-slimming, freckle-removal, and whitening treatments, and the user can choose the desired plan. In this way, by providing users with a variety of beauty solutions to choose from, the method is flexible and convenient, and the user experience is further improved.
  • the method further includes:
  • the state of the target area in the image to be processed after the beautification processing is restored to the state before the beautification processing.
  • The terminal may receive the user's input on the beauty-processed image to determine the target area the user has selected in it, and then restore the state of that target area to its state before the beauty processing.
  • the terminal may display each area in the portrait after performing beautification processing on the image to be processed, for example, using a frame or highlighting to mark each area to facilitate the user's selection.
  • the terminal may also determine the area selected by the sliding operation trajectory as the target area according to the sliding operation trajectory input by the user. In this way, the terminal determines the target area that does not require beautification processing according to the user's selection, so as to protect the target area, improve the flexibility of use, and further improve the user experience.
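Restoring a user-selected target area to its pre-processing state amounts to copying the original pixels back under a mask. A minimal numpy sketch, assuming the selection (for example, from the sliding-operation trajectory) has already been rasterized into a boolean mask:

```python
import numpy as np

def restore_region(processed, original, region_mask):
    """Undo the beauty effect inside the user-selected target area by
    copying back the pre-processing pixels.
    region_mask: boolean H x W mask of the selected target area."""
    out = processed.copy()
    out[region_mask] = original[region_mask]
    return out
```

This presupposes that the terminal keeps the original image alongside the beauty-processed one, which the restore step of the method implies.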
  • the terminal includes: a processor 110 and a memory 111 for storing a computer program that can run on the processor 110;
  • The processor 110 illustrated in FIG. 2 is not meant to indicate that there is exactly one processor, but only to indicate the positional relationship of the processor 110 relative to other devices; in practice the number of processors can be one or more.
  • Similarly, the memory 111 illustrated in FIG. 2 only indicates its positional relationship relative to other devices; the number of memories 111 can likewise be one or more.
  • the processor 110 is configured to implement the image processing method applied to the foregoing terminal when running the computer program.
  • the terminal may further include: at least one network interface 112.
  • the various components in the terminal are coupled together through the bus system 113.
  • the bus system 113 is used to implement connection and communication between these components.
  • the bus system 113 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 113 in FIG. 2.
  • the memory 111 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • The non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory can be a disk memory or a tape memory.
  • the volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache.
  • many forms of RAM are available, such as static random access memory (SRAM, Static Random Access Memory), synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory), dynamic random access memory (DRAM, Dynamic Random Access Memory), synchronous dynamic random access memory (SDRAM, Synchronous Dynamic Random Access Memory), double data rate synchronous dynamic random access memory (DDRSDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), enhanced synchronous dynamic random access memory (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), synclink dynamic random access memory (SLDRAM, SyncLink Dynamic Random Access Memory), and direct Rambus random access memory (DRRAM, Direct Rambus Random Access Memory).
  • the memory 111 in the embodiment of the present application is used to store various types of data to support the operation of the terminal.
  • Examples of these data include: any computer programs used to operate on the terminal, such as operating systems and applications; contact data; phone book data; messages; pictures; videos, etc.
  • the operating system contains various system programs, such as a framework layer, a core library layer, and a driver layer, which are used to implement various basic services and process hardware-based tasks.
  • the application programs may include various applications, such as a media player (Media Player) and a browser (Browser), used to implement various application services.
  • the program that implements the method of the embodiment of the present application may be included in the application program.
  • this embodiment also provides a computer storage medium in which a computer program is stored.
  • the computer storage medium may be a magnetic random access memory (FRAM, ferromagnetic random access memory), a read-only memory (ROM, Read-Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), an electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), a flash memory (Flash Memory), a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory); it may also be any of various devices including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
  • FIG. 3 is a schematic diagram of a specific flow of an image processing method provided by an embodiment of the application, including the following steps:
  • Step S201 Switch to the beauty mode
  • the terminal correspondingly switches the photographing mode of the camera application to the beauty mode.
  • Step S202 Read the data of the preview frame image
  • the terminal reads the data of the preview frame image displayed on the preview interface of the camera application.
  • Step S203 face detection
  • the terminal performs face detection on the preview frame image to obtain the number of faces.
  • Step S204 Determine whether the number of faces is 1; if yes, go to step S205; otherwise, go to step S208
  • the terminal determines whether the number of faces in the preview frame image is 1; if so, it executes step S205; otherwise, it executes step S208.
  • Step S205 intercept the face area data
  • when the terminal detects that the number of faces in the preview frame image is 1, it intercepts the face area, converts it into face data in a specified format, and then extracts the feature points of the face, such as the main features of the eyes, nose, mouth, chin, and ears.
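The interception and format conversion described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the face box `(x, y, w, h)` is a hypothetical output of the face detector, the target size of 64x64 is illustrative, and the nearest-neighbour resize stands in for a library call such as `cv2.resize`.

```python
import numpy as np

def crop_face_region(frame, box, size=(64, 64)):
    """Crop the detected face box out of a preview frame and resize it
    to the fixed input format assumed for the recognition model.
    `frame` is an H x W x 3 uint8 array; `box` is (x, y, w, h)."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    # Nearest-neighbour resize via index sampling (stand-in for cv2.resize).
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    resized = face[rows][:, cols]
    # Normalise to [0, 1] float32, a common network input range.
    return resized.astype(np.float32) / 255.0
```

The returned array is what the gender recognition module would consume as "face data in a specified format".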
  • Step S206 the gender recognition module performs gender recognition according to the face area data
  • a gender recognition module may be provided in the terminal, and the terminal inputs the face area data obtained in step S205 to the gender recognition module, and the gender recognition module can be regarded as a neural network model.
  • the process of performing gender recognition based on face region data in this embodiment can be seen in FIG. 4, which includes the following steps:
  • Step S301 Input the formatted face image data
  • Step S302 convolution calculation
  • a convolution calculation is performed on a convolution layer of the neural network model.
  • taking a neural network model containing 4 convolutional layers as an example, only one convolutional layer performs a convolution calculation at a time; that is to say, the first convolutional layer of the neural network model first performs the convolution calculation on the formatted face image data.
  • Step S303 batch normalization
  • batch normalization processing is performed on the features obtained by convolution in step S302, so that the obtained feature data distribution conforms to the normal distribution, thereby accelerating the convergence speed of the model and increasing the generalization ability of the model.
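The batch normalization step can be illustrated with a minimal NumPy sketch of the standard transform: per-channel zero mean and unit variance, followed by a learned scale (gamma) and shift (beta). The fixed gamma and beta defaults below are illustrative placeholders, not the model's trained values.

```python
import numpy as np

def batch_normalize(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise a batch of feature maps to zero mean and unit variance
    per channel, then apply the learned scale (gamma) and shift (beta).
    `x` has shape (batch, height, width, channels); `eps` avoids
    division by zero for near-constant channels."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Bringing each layer's feature distribution back toward a normal distribution in this way is what speeds up convergence and improves generalization, as described above.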
  • Step S304 feature activation
  • Step S305 Maximum pooling
  • the obtained new feature map is used as the input data of the next convolutional layer.
  • Step S306 global average pooling
  • Step S307 Output the probability array of gender classification.
  • the probability array of gender classification is obtained after global average pooling; the index holding the larger array value gives the final gender result of the gender recognition module. For example, the array index of male is 0 and the array index of female is 1.
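Steps S306 and S307 can be sketched as follows: global average pooling over one feature map per class, then reading off the larger value. A softmax is added here so that the output is a well-formed probability array; the text above does not specify the final activation, so that detail is an assumption of this sketch.

```python
import numpy as np

def gender_from_features(feature_maps):
    """Global average pooling over the two class feature maps, followed
    by a softmax, yields the gender probability array.
    `feature_maps` has shape (height, width, 2): one map per class
    (index 0 = male, index 1 = female, as in the example above)."""
    logits = feature_maps.mean(axis=(0, 1))   # global average pooling
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    probs = exp / exp.sum()
    return probs, int(np.argmax(probs))       # final gender index
```

The returned index is what step S207 delivers to the beautification module together with the facial feature points.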
  • Step S207 The gender recognition module delivers the gender recognition result and facial feature points to the beauty module
  • the gender recognition module sends the gender type and facial feature point information of the face obtained through neural network algorithm calculation to the beauty module.
  • a beautification module may be provided in the terminal to perform beautification processing on the image.
  • Step S208 the beautification module adapts the corresponding beautification parameters
  • the beautification module selects a set of default beautification parameters to perform beautification processing on the human face.
  • the beautification module divides the protection area of the corresponding gender according to the gender information obtained in step S207, combined with the facial feature point information.
  • when the beautification module processes the corresponding protection area, it selects the corresponding beautification parameters so as to protect the protection area.
  • Step S209 the beautification module processes the image according to the adapted corresponding beautification parameters.
  • the beautification module performs beautification processing on the face according to the selected default beautification parameters.
  • when the beautification module processes the corresponding protection area, it adopts a gradual processing scheme: the edges of the protection area are smoothed and the processing applied to the rest of the protection area is reduced, achieving an effect of partial detail protection.
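The gradual processing scheme can be sketched as a feathered blend. In this minimal NumPy illustration, a binary protection mask is softened by repeated neighbour averaging (a simple stand-in for a Gaussian feather) and then used to mix the original image back into the beautified one, so the protected detail is kept and the transition at the edge is smooth. The feather width and blur method are illustrative assumptions, not the patent's exact scheme.

```python
import numpy as np

def blend_with_feathered_mask(original, beautified, mask, feather=5):
    """Blend the beautified image with the original so that protected
    regions (mask == 1) keep the original detail, with a gradual
    transition of roughly `feather` pixels instead of a hard edge."""
    soft = mask.astype(np.float32)
    for _ in range(feather):
        # Average each pixel with its 4-neighbours to soften the edge.
        padded = np.pad(soft, 1, mode='edge')
        soft = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    soft = soft[..., None]  # broadcast the weight over colour channels
    return soft * original + (1.0 - soft) * beautified
```

Inside the protection area the output equals the original (full protection); far outside it equals the beautified image; near the edge the two are mixed gradually.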
  • Figure 5 is a schematic diagram after beautification protection is performed on the beard area of a man (that is, skin beautification is reduced in that area).
  • Figure 6 is a schematic diagram of the eyebrow-mole and nose-decoration areas of a woman.
  • the screen of the terminal can display the image obtained by the beautification processing in real time, so that the user can see the effect of the beautification processing in real time.
  • based on the deep-learning TensorFlow framework, a gender recognition algorithm model is trained on a large number of face image samples; the server-side model is then converted to a TFLite model through the TensorFlow toco tool and transplanted to the Android terminal, realizing offline real-time gender recognition on the mobile terminal.
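The server-to-mobile conversion can be sketched with the current TensorFlow Python API, in which `tf.lite.TFLiteConverter` supersedes the standalone toco tool mentioned above. This is a hedged sketch of the conversion step, not the authors' exact pipeline; the optimization flag is an optional mobile-deployment choice.

```python
import tensorflow as tf

def convert_to_tflite(keras_model, tflite_path):
    """Convert a server-side Keras model to a .tflite flatbuffer for
    offline on-device inference, and write it to `tflite_path`."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    # Optional: shrink the model for mobile deployment.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open(tflite_path, 'wb') as f:
        f.write(tflite_model)
    return len(tflite_model)
```

On Android, the resulting file would be loaded with the TFLite interpreter to run the gender recognition offline.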
  • beard protection is added to the beautification of men, and eyebrow-mole and nose-decoration protection is added to the beautification of women, thereby achieving differentiated smart beautification effects for both men and women.
  • it can be summarized that real-time gender recognition can be realized offline on the mobile terminal, and the beautification parameters of the corresponding gender can be adapted to beautify the face.
  • with the image processing method, terminal, and computer storage medium of the present application, the terminal acquires at least two portrait features obtained by performing feature recognition on a portrait in an image to be processed, and performs beautification processing on the area corresponding to at least one portrait feature and/or performs no beautification processing on the area corresponding to at least one portrait feature, thereby implementing personalized beautification processing for the user according to the user's characteristics and improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image processing method, a terminal, and a computer storage medium. The image processing method is applied to the terminal and comprises the steps of: obtaining an image to be processed; performing feature recognition on a portrait of said image to obtain at least two portrait features of said image; and performing beautification processing on an area corresponding to the at least one portrait feature, and/or performing no beautification processing on the area corresponding to the at least one portrait feature. According to the image processing method, terminal, and computer storage medium provided by the present invention, the terminal obtains at least two portrait features obtained by performing feature recognition on a portrait of an image to be processed, and performs beautification processing on an area corresponding to the at least one portrait feature and/or performs no beautification processing on the area corresponding to the at least one portrait feature, so as to implement personalized beautification processing for a user according to user characteristics, thereby improving the user's experience.
PCT/CN2020/104638 2019-12-16 2020-07-24 Image processing method, terminal and computer storage medium WO2021120626A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911296529.1 2019-12-16
CN201911296529.1A CN111161131A (zh) 2019-12-16 Image processing method, terminal and computer storage medium

Publications (1)

Publication Number Publication Date
WO2021120626A1 true WO2021120626A1 (fr) 2021-06-24

Family

ID=70557199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104638 WO2021120626A1 (fr) 2019-12-16 2020-07-24 Image processing method, terminal and computer storage medium

Country Status (2)

Country Link
CN (1) CN111161131A (fr)
WO (1) WO2021120626A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572955A (zh) * 2021-06-25 2021-10-29 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method and apparatus, and electronic device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161131A (zh) * 2019-12-16 2020-05-15 Shanghai Transsion Information Technology Co., Ltd. Image processing method, terminal and computer storage medium
CN111784611B (zh) * 2020-07-03 2023-11-03 Xiamen Meitu Technology Co., Ltd. Portrait whitening method and apparatus, electronic device, and readable storage medium
CN112565601B (zh) * 2020-11-30 2022-11-04 OPPO (Chongqing) Intelligent Technology Co., Ltd. Image processing method and apparatus, mobile terminal, and storage medium
CN114973727B (zh) * 2022-08-02 2022-09-30 Chengdu Industrial Vocational and Technical College Intelligent driving method based on passenger characteristics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274354A (zh) * 2017-05-22 2017-10-20 Qiku Internet Network Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, and mobile terminal
CN107578380A (zh) * 2017-08-07 2018-01-12 Beijing Kingsoft Internet Security Software Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN108012081A (zh) * 2017-12-08 2018-05-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Intelligent beautification method and apparatus, terminal, and computer-readable storage medium
CN108229278A (zh) * 2017-04-14 2018-06-29 Shenzhen SenseTime Technology Co., Ltd. Face image processing method and apparatus, and electronic device
US10303933B2 (en) * 2016-07-29 2019-05-28 Samsung Electronics Co., Ltd. Apparatus and method for processing a beauty effect
CN111161131A (zh) * 2019-12-16 2020-05-15 Shanghai Transsion Information Technology Co., Ltd. Image processing method, terminal and computer storage medium

Also Published As

Publication number Publication date
CN111161131A (zh) 2020-05-15

Similar Documents

Publication Publication Date Title
WO2021120626A1 (fr) Image processing method, terminal and computer storage medium
CN110929651B (zh) 图像处理方法、装置、电子设备及存储介质
US10438329B2 (en) Image processing method and image processing apparatus
US10977873B2 (en) Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same
CN105825486B (zh) 美颜处理的方法及装置
WO2020019904A1 (fr) Procédé et appareil de traitement d'image, dispositif informatique et support de stockage
CN104754218B (zh) 一种智能拍照方法及终端
WO2022078041A1 (fr) Procédé d'entraînement de modèle de détection d'occlusion et procédé d'embellissement d'image faciale
CN107958439B (zh) 图像处理方法及装置
CN108921856B (zh) 图像裁剪方法、装置、电子设备及计算机可读存储介质
CN107730448B (zh) 基于图像处理的美颜方法及装置
CN108876732A (zh) 人脸美颜方法及装置
CN110909654A (zh) 训练图像的生成方法及装置、电子设备和存储介质
CN114175113A (zh) 提供头像的电子装置及其操作方法
EP3328062A1 (fr) Procédé et dispositif de photosynthèse
CN112712470A (zh) 一种图像增强方法及装置
KR20170097884A (ko) 이미지를 처리하기 위한 방법 및 그 전자 장치
KR20180109217A (ko) 얼굴 영상 보정 방법 및 이를 구현한 전자 장치
CN114007099A (zh) 一种视频处理方法、装置和用于视频处理的装置
CN109325908A (zh) 图像处理方法及装置、电子设备和存储介质
CN111723803A (zh) 图像处理方法、装置、设备及存储介质
CN113850726A (zh) 图像变换方法和装置
CN113741681A (zh) 一种图像校正方法与电子设备
CN116048244A (zh) 一种注视点估计方法及相关设备
CN114187166A (zh) 图像处理方法、智能终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20903979

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20903979

Country of ref document: EP

Kind code of ref document: A1