CN111798367A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN111798367A
Authority
CN
China
Prior art keywords
image
information
face
attribute information
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282440.3A
Other languages
Chinese (zh)
Inventor
陈仲铭
何明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282440.3A
Publication of CN111798367A

Classifications

    • G06T3/04
    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The embodiment of the application discloses an image processing method, an image processing device, a storage medium and electronic equipment. The method comprises the following steps: acquiring a first image; determining a face image from the first image, and determining attribute information corresponding to the face image; acquiring a second image according to the attribute information; and carrying out image beautification processing on the face image in the first image by using the second image. The embodiment of the application can improve the automation degree of the image beautifying processing of the electronic equipment.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
With the continuous development of image processing technology, users can use electronic equipment to beautify images. For example, the electronic device may attach or fuse some pre-set images to another image, thereby making the other image more interesting. However, in the related art, the electronic device is less automated when attaching or fusing a preset image to another image to beautify the other image.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a storage medium and an electronic device, which can improve the automation degree of the image beautification processing of the electronic device.
An embodiment of the present application provides an image processing method, including:
acquiring a first image;
determining a face image from the first image, and determining attribute information corresponding to the face image;
acquiring a second image according to the attribute information;
and carrying out image beautification processing on the face image in the first image by using the second image.
An embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring a first image;
the determining module is used for determining a face image from the first image and determining attribute information corresponding to the face image;
the second acquisition module is used for acquiring a second image according to the attribute information;
and the processing module is used for performing image beautification processing on the face image in the first image by using the second image.
The embodiment of the application provides a storage medium, wherein a computer program is stored on the storage medium, and when the computer program is executed on a computer, the computer is enabled to execute the flow in the image processing method provided by the embodiment of the application.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the image processing method provided by the embodiment of the present application by calling the computer program stored in the memory.
In the embodiment of the application, the electronic device can automatically acquire the second image according to the attribute information corresponding to the face image in the first image to be beautified, and beautify the face image in the first image by using the second image. Because the embodiment can automatically acquire the second image for beautifying without manually selecting the image for beautifying by the user, the embodiment can reduce manual operation in the face image beautifying processing and improve the automation degree of the image beautifying processing.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic diagram of a panoramic sensing architecture of an electronic device provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is another schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 to fig. 6 are scene schematic diagrams of an image processing method according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 9 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
Referring to fig. 1, fig. 1 is a schematic diagram of the panoramic sensing architecture of an electronic device according to an embodiment of the present application. The image processing method provided by the embodiment of the present application can be applied to an electronic device in which a panoramic perception framework is arranged. The panoramic sensing architecture is the integration of the hardware and software used to implement the method in the electronic device.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information about the electronic device itself or from the external environment. The information perception layer may include a plurality of sensors. For example, the information perception layer includes sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, the distance sensor may be used to detect the distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor may be used to detect light information of the environment in which the electronic device is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of the user. The Hall sensor is a magnetic field sensor based on the Hall effect and can be used to realize automatic control of the electronic device. The position sensor may be used to detect the geographic location where the electronic device is currently located. The gyroscope may be used to detect the angular velocity of the electronic device in various directions. The inertial sensor may be used to detect motion data of the electronic device. The attitude sensor may be used to sense attitude information of the electronic device. The barometer may be used to detect the air pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
Data cleaning refers to cleaning the large amount of data acquired by the information perception layer to remove invalid data and repeated data. Data integration refers to integrating multiple single-dimensional data acquired by the information perception layer into a higher or more abstract dimension, so that the data from multiple single dimensions can be processed comprehensively. Data transformation refers to converting the type or format of the data acquired by the information perception layer so that the transformed data meets the processing requirements. Data reduction refers to reducing the data volume as much as possible while preserving the original character of the data.
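To make these four operations concrete, the following is a minimal sketch of such a data processing layer, assuming sensor readings arrive as a pandas DataFrame; the column names (timestamp, ax, ay, az) and the specific cleaning choices are illustrative assumptions, not part of this disclosure.

```python
import pandas as pd

def process_sensor_data(df: pd.DataFrame) -> pd.DataFrame:
    # Data cleaning: remove invalid (missing) and repeated rows.
    df = df.dropna().drop_duplicates()
    # Data integration: combine three single-dimensional readings into a
    # higher, more abstract dimension (total acceleration magnitude).
    df["accel_magnitude"] = (df["ax"]**2 + df["ay"]**2 + df["az"]**2) ** 0.5
    # Data transformation: convert the data type so later layers can use it.
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
    # Data reduction: downsample to one row per second, shrinking the data
    # volume while keeping its overall shape.
    return df.set_index("timestamp").resample("1s").mean().reset_index()
```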
The feature extraction layer is used for extracting features from the data processed by the data processing layer. The extracted features may reflect the state of the electronic device itself, the state of the user, the environmental state of the environment in which the electronic device is located, and the like.
The feature extraction layer may extract features, or process the extracted features, by methods such as filtering, wrapping (packaging), or integration.
The filtering method filters the extracted features to remove redundant feature data. The wrapping method screens the extracted features. The integration method combines multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer; the resulting model can be used to represent the state of the electronic device, the state of the user, the environment state, and the like. For example, the scene modeling layer may construct a key-value model, a pattern recognition model, a graph model, an entity-relationship model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic sensing architecture may include a plurality of algorithms, each of which can be used to analyze and process data, and together they may form an algorithm library. For example, the algorithm library may include Markov algorithms, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application, where the flowchart may include:
in 101, a first image is acquired.
With the continuous development of image processing technology, users can use electronic equipment to beautify images. For example, the electronic device may attach or fuse some pre-set images to another image, thereby making the other image more interesting. However, in the related art, the electronic device is less automated when attaching or fusing a preset image to another image to beautify the other image. For example, when the beautification processing is performed on the image a, the user needs to select the image B for the beautification processing, and then the image B is attached to or fused with the image a by the electronic device, so that the beautification processing is performed on the image a, that is, the user needs to perform more manual operations, and the degree of automation is low.
In 101 of the embodiment of the present application, for example, the electronic device may first acquire a first image to be beautified, where the first image may be an image containing a human face.
At 102, a face image is determined from the first image, and attribute information corresponding to the face image is determined.
For example, after acquiring a first image to be beautified, the electronic device may determine a face image from the first image. Then, the electronic device may determine attribute information corresponding to the face image.
It should be noted that the attribute information may be attributes of the user corresponding to the face image, determined from the face image, such as gender, age, and emotion. It should be understood that these examples do not limit the present embodiment.
In 103, a second image is acquired based on the attribute information.
For example, after determining attribute information corresponding to a face image in a first image, the electronic device may obtain a second image according to the attribute information. The second image is the image used for the beautification processing of the first image.
At 104, the face image in the first image is subjected to image beautification processing by using the second image.
For example, after the second image is acquired, the electronic device may automatically perform image beautification processing on the face image in the first image according to the second image. For example, the electronic device may attach or fuse the second image to the face image in the first image, thereby beautifying the face image.
For example, the second image acquired by the electronic device according to the attribute information corresponding to the face image includes a pair of hand-drawn cat ears and a pair of hand-drawn cat whiskers. The electronic device can then attach the hand-drawn cat ears in the second image to the head of the face image in the first image, and attach the hand-drawn cat whiskers to the sides of the mouth of the face image in the first image, so that the face in the first image has a more pleasing visual effect; that is, the face image in the first image is beautified.
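As an illustration of this attach step, here is a minimal sketch assuming the second image is an RGBA sticker (e.g. the hand-drawn cat ears) and the face bounding box in the first image is already known; the placement rule is an illustrative assumption, not the patent's actual compositing logic.

```python
from PIL import Image

def attach_sticker(first: Image.Image, sticker: Image.Image,
                   face_box: tuple) -> Image.Image:
    x, y, w, h = face_box  # face position in the first image
    # Scale the sticker to the face width, keeping its aspect ratio.
    new_h = int(sticker.height * w / sticker.width)
    sticker = sticker.resize((w, new_h))
    out = first.convert("RGBA")
    # Paste above the head, using the sticker's alpha channel as the mask
    # so only the drawn strokes are composited onto the first image.
    out.paste(sticker, (x, max(0, y - new_h)), mask=sticker)
    return out.convert("RGB")
```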
It can be understood that, in the embodiment of the application, the electronic device may automatically acquire the second image according to the attribute information corresponding to the face image in the first image to be beautified, and perform beautification processing on the face image in the first image by using the second image. Because the embodiment can automatically acquire the second image for beautifying without manually selecting the image for beautifying by the user, the embodiment can reduce manual operation in the face image beautifying processing and improve the automation degree of the image beautifying processing.
It should be noted that the image processing method provided by this embodiment may be applied to an intelligent service layer in the panoramic sensing architecture shown in fig. 1. The electronic equipment can collect data through the information perception layer, the collected data can be input into the data processing layer to be processed, and the data processed by the data processing layer can be input into the feature extraction layer to be subjected to feature extraction, so that feature data are obtained. The scene modeling layer can model the feature data, so that the current scene is identified. The modeled data may be input to an intelligent services layer, which may provide intelligent services to a user of the electronic device based on the data. The image processing method provided by the embodiment can enable the electronic device to automatically acquire the image for face beautification (the image for face beautification can be related to the current scene identified by the scene modeling layer and the like), and the user does not need to manually select the image for beautification, so that the embodiment can better provide intelligent service for the user.
Referring to fig. 3, fig. 3 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
in 201, an electronic device acquires a first image.
For example, the electronic device may first acquire a first image to be beautified, where the first image may be an image containing a human face.
At 202, the electronic device determines a face image from the first image, and determines attribute information corresponding to the face image, where the attribute information at least includes gender information, age information, face information, hair style information, emotion information, and scene information corresponding to the face image.
For example, after acquiring a first image to be beautified, the electronic device may determine a face image from the first image. Then, the electronic device may determine attribute information corresponding to the face image. The attribute information corresponding to the face image may at least include gender information, age information, face information, hair style information, emotion information of a user corresponding to the face image, and scene information of a scene where the user is located.
In some embodiments, the first image obtained in the process 201 may be a picture (taken in advance) selected by the user from an album of the electronic device, a picture just taken by a camera application of the electronic device, or an image frame obtained by the real-time preview function of the camera application. In flow 202, the electronic device can determine the face image from the first image using a convolutional neural network that has undergone learning and training. For example, the electronic device may input the first image into the convolutional neural network, the convolutional neural network may output the position of the face in the first image, and the face image can then be determined from the first image by that position. In one embodiment, the convolutional neural network may be a lightweight neural network model (also referred to as a mini neural network), that is, a neural network model that requires fewer parameters and less computation. When the lightweight neural network model is trained, the training samples may be images containing faces, together with the four coordinate values (x, y, w, h) of the face bounding box and a face confidence c, and the position of the face in the image is output by a softmax regression method. Here, (x, y) is the coordinate of the upper-left corner of the bounding box, and (w, h) is the offset of the lower-right corner relative to the upper-left corner. The face confidence c represents the probability that the position corresponding to (x, y, w, h) is a face position, and its value range is [0, 1].
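A minimal sketch of how the network's (x, y, w, h, c) output could be turned into a face crop; the `detector` callable stands in for the trained lightweight model and is an assumption, not a real API.

```python
import numpy as np

def crop_face(image: np.ndarray, detector, threshold: float = 0.5):
    # detector returns the bounding box and the face confidence c in [0, 1].
    x, y, w, h, c = detector(image)
    if c < threshold:
        return None  # the box is unlikely to actually contain a face
    # (w, h) are offsets of the lower-right corner from the upper-left one.
    return image[int(y):int(y + h), int(x):int(x + w)]
```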
In some embodiments, the process of determining attribute information corresponding to the face image in 202 may include: using the face image as the input of a multi-task convolutional neural network model that has undergone learning and training, whose output may include the gender information, age information, face information, hair style information, and emotion information corresponding to the face image. The gender information is male or female; the age information is a numerical value; the face information may be a Chinese face, a melon seed face, a round face, and the like; the hair style information may be a flat head, curly hair, long hair, short hair, bangs, and the like; and the emotion information may be joy, anger, sadness, calmness, and the like. Different information may be represented by different numbers. For example, the male gender may be represented by the number 1 and the female gender by the number 2. In the face information, the Chinese face may be represented by the number 1, the melon seed face by the number 2, the round face by the number 3, and so on. In the hair style information, the flat head may be represented by the number 1, long hair by the number 2, short hair by the number 3, bangs by the number 4, and so on. In the emotion information, joy may be represented by the number 1, anger by the number 2, sadness by the number 3, calmness by the number 4, and so on. Then, for example, if the face image in the first image is input to the multi-task convolutional neural network model and the output is gender information 1, age information 30, face information 1, hair style information 1, and emotion information 1, the output indicates that the user corresponding to the face image is a male user, aged 30, with a Chinese face, a flat head, and a happy emotion.
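The numeric coding above can be decoded with simple lookup tables; the sketch below uses exactly the example mapping from this paragraph and is illustrative only.

```python
GENDER = {1: "male", 2: "female"}
FACE = {1: "Chinese face", 2: "melon seed face", 3: "round face"}
HAIR = {1: "flat head", 2: "long hair", 3: "short hair", 4: "bangs"}
EMOTION = {1: "joy", 2: "anger", 3: "sadness", 4: "calmness"}

def decode_attributes(out: dict) -> dict:
    # The model outputs numbers; map them back to human-readable labels.
    return {
        "gender": GENDER[out["gender"]],
        "age": out["age"],  # age is output directly as a numerical value
        "face": FACE[out["face"]],
        "hair": HAIR[out["hair"]],
        "emotion": EMOTION[out["emotion"]],
    }

# decode_attributes({"gender": 1, "age": 30, "face": 1, "hair": 1,
#                    "emotion": 1})
# -> a 30-year-old happy male user with a Chinese face and a flat head
```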
In one embodiment, the scene information may also be identified using a neural network model trained through learning. For example, the electronic device may input the first image into the learning-trained neural network model, and the output of the neural network model may be the shooting scene corresponding to the first image, so as to obtain the corresponding scene information.
In another embodiment, if the first image is an image currently captured by the user using a camera application in the electronic device, or an image frame acquired through the real-time preview function of the camera application, the electronic device may acquire a sound segment from the current environment through a microphone and analyze the sound segment to determine the scene where the user is currently located, so as to obtain scene information (e.g., quiet or loud). For example, the electronic device may convert the sound signal collected by the microphone into a spectral energy map through a Fast Fourier Transform (FFT) or Mel-frequency cepstral coefficients (MFCC), and perform a feed-forward calculation on the spectral data through a convolutional neural network or a recurrent neural network to obtain the scene information of the user.
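A minimal sketch of the audio path, assuming a mono PCM buffer from the microphone; `scene_model` stands in for the trained convolutional or recurrent network and is an assumption, not a real API.

```python
import numpy as np

def sound_to_spectrogram(samples: np.ndarray, frame: int = 1024) -> np.ndarray:
    # Slice the signal into fixed-size frames and take the FFT magnitude of
    # each frame, yielding the spectral energy map described above.
    n = len(samples) // frame
    frames = samples[:n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))

def classify_scene(samples: np.ndarray, scene_model) -> str:
    # Feed-forward calculation on the spectral data, e.g. "quiet" or "loud".
    return scene_model(sound_to_spectrogram(samples))
```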
At 203, the electronic device obtains style information that is pre-selected information regarding beautification style of the image.
For example, after determining the attribute information corresponding to the face image, the electronic device may obtain style information, which may be information about an image beautification style selected in advance by the user. For example, style information such as a sketch style, an oil painting style, and a wash style is preset in the electronic device. When the user uses the camera in the electronic device, the electronic device may ask the user what style to use to beautify the image. If the user selects the sketch style, then in 203 the style information obtained by the electronic device is the sketch style.
The sketch style means that the image used for beautifying the first image presents the visual effect of a hand-drawn picture. The oil painting style means that the image used for beautifying the first image presents the visual effect of an oil painting. The wash style means that the image used for beautifying the first image presents the visual effect of an ink-wash painting, and so on.
At 204, the electronic device obtains user category information.
For example, after acquiring attribute information and style information corresponding to a face image, the electronic device may acquire user category information.
In one embodiment, the user category information may include categories such as game hobby users, cartoon hobby users, and self-timer hobby users. A game hobby user is a user who likes to play games; a cartoon hobby user is a user who likes to watch cartoons; a self-timer hobby user is a user who likes to take selfies; and so on.
In some embodiments, the electronic device may obtain the user category information by:
the electronic equipment acquires application starting behavior historical data;
and according to the historical data of the application starting behavior, the electronic equipment determines the user category information.
For example, the electronic device may determine the user category information from the categories of the applications it has installed. Applications have corresponding categories in an application store (application download platform). The electronic device can obtain historical data of the user's application-opening behavior, which may include the category, on the application download platform, of each opened application. This application-opening behavior history is then used as input to a learning-trained recurrent neural network model or classification model (such as a Bayesian classification model), and the output of the learning-trained model is the user category information corresponding to the user of the electronic device.
For example, if user A frequently opens game-category applications, user A's application-opening behavior history is input into the learning-trained algorithm model, the classification result output by the model is "game hobby user", and the electronic device can determine "game hobby user" as user A's user category information. For another example, if user B frequently opens the camera and beauty-category applications, user B's application-opening behavior history is input into the learning-trained algorithm model, the classification result output by the model is "self-timer hobby user", and the electronic device can determine "self-timer hobby user" as user B's user category information, and so on.
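For illustration, the sketch below replaces the trained recurrent/Bayesian model with a simple frequency vote over app-store categories; the category names follow the document's examples, and the mapping itself is an assumption.

```python
from collections import Counter

CATEGORY_TO_USER = {
    "game": "game hobby user",
    "comic": "cartoon hobby user",
    "camera": "self-timer hobby user",
    "beauty": "self-timer hobby user",
}

def user_category(open_history: list) -> str:
    # open_history lists the store category of each application the user
    # opened; the most frequent mapped category wins the vote.
    votes = Counter(CATEGORY_TO_USER.get(c, "other") for c in open_history)
    return votes.most_common(1)[0][0]

# user_category(["game", "game", "camera"]) -> "game hobby user"
```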
In 205, the electronic device converts each attribute information, style information, and user category information into a corresponding feature tensor.
At 206, the electronic device merges the converted feature tensors to obtain a target feature tensor.
For example, 205 and 206 may include:
after the attribute information, the style information and the user category information corresponding to the face image are acquired, the electronic device can convert all the attribute information, the style information and the user category information into corresponding feature tensors.
For example, suppose the attribute information corresponding to the face image in the first image includes gender information, age information, face information, hair style information, emotion information, and scene information of the scene where the face image is located; the style information is the sketch style; and the user category information is cartoon hobby user. The electronic device can convert the gender information into a corresponding feature tensor t1, the age information into a feature tensor t2, the face information into t3, the hair style information into t4, the emotion information into t5, the scene information into t6, the sketch style information into t7, and the cartoon-hobby-user category information into t8. Then, the electronic device may merge the converted feature tensors to obtain a merged feature tensor T, where T is the target feature tensor. For example, the target feature tensor can be expressed as T = {t1, t2, t3, t4, t5, t6, t7, t8}.
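A minimal sketch of flows 205 and 206, assuming each piece of information is embedded as a small fixed-size vector; the hash-seeded embedding is a toy stand-in for whatever encoding the trained models actually produce.

```python
import hashlib
import numpy as np

def to_feature_tensor(value, dim: int = 8) -> np.ndarray:
    # Toy embedding: a deterministic random vector seeded by the value.
    seed = int(hashlib.md5(str(value).encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(dim)

def merge_tensors(infos: list) -> np.ndarray:
    # Concatenate t1..t8 into the single target feature tensor T.
    return np.concatenate([to_feature_tensor(v) for v in infos])

T = merge_tensors(["male", 30, "Chinese face", "flat head", "joy",
                   "quiet", "sketch style", "cartoon hobby user"])
```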
In 207, the electronic device performs clustering processing on the target feature tensor, and determines a target data cluster corresponding to the target feature tensor.
For example, after obtaining the target feature tensor T, the electronic device may perform clustering processing on T to determine the target data cluster corresponding to T. For example, the electronic device may apply a clustering algorithm such as K-means, K-medoids, CLARA, or CLARANS to the target feature tensor T, so as to determine the target data cluster corresponding to T.
It should be noted that clustering refers to the process of dividing a set of physical or abstract objects into a plurality of classes composed of similar objects. A cluster generated by clustering is a collection of data objects that are similar to one another within the same cluster and dissimilar to objects in other clusters. Therefore, after the target feature tensor is clustered, the target data cluster corresponding to the target feature tensor can be obtained.
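A minimal sketch of flow 207 using K-means, with scikit-learn standing in for whatever clustering implementation the device actually uses; the offline history of feature tensors here is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline: fit cluster centers on historical target feature tensors
# (synthetic 64-dimensional data, matching the 8 x 8-dim tensors above).
history = np.random.default_rng(0).standard_normal((500, 64))
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(history)

def target_data_cluster(T: np.ndarray) -> int:
    # Assign the new target feature tensor to its nearest cluster center.
    return int(kmeans.predict(T.reshape(1, -1))[0])
```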
At 208, the electronic device searches an image corresponding to the target data cluster from a preset image database, where different data clusters and images corresponding to the data clusters are stored in the preset image database.
For example, after obtaining the data cluster corresponding to the target feature tensor, the electronic device may search an image corresponding to the target data cluster from a preset image database. Different data clusters and images corresponding to the data clusters are stored in the preset image database.
For example, the preset image database stores data clusters H, J, K, L, M, and N, where the images corresponding to data cluster H are p1 and p2, those corresponding to J are p3 and p4, those corresponding to K are p5 and p6, those corresponding to L are p7 and p8, those corresponding to M are p9 and p10, and those corresponding to N are p11 and p12. For example, if the target data cluster determined by clustering the target feature tensor T is H, the electronic device may search for the images corresponding to data cluster H in the preset image database.
In 209, the electronic device determines the searched image corresponding to the target data cluster as the second image.
For example, if the electronic device finds that the images corresponding to the target data cluster H include p1 and p2, it may determine either of p1 and p2 as the second image. The second image is used for beautifying the face image in the first image.
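A minimal sketch of flows 208 and 209, modeling the preset image database as a mapping from data cluster to images; the file names follow the p1..p12 example above and are illustrative only.

```python
import random

PRESET_IMAGE_DB = {
    "H": ["p1.png", "p2.png"], "J": ["p3.png", "p4.png"],
    "K": ["p5.png", "p6.png"], "L": ["p7.png", "p8.png"],
    "M": ["p9.png", "p10.png"], "N": ["p11.png", "p12.png"],
}

def second_image_for(cluster: str) -> str:
    # When several images correspond to the target data cluster, any one of
    # them may be determined as the second image.
    return random.choice(PRESET_IMAGE_DB[cluster])
```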
In 210, the electronic device performs image beautification processing on the face image in the first image by using the second image.
For example, if the electronic device determines the image p1 as the second image, the electronic device may perform an image beautification process on the face image in the first image using the image p 1.
For example, if the image p1 is a sketch-style cartoon of cat ears and cat whiskers, the electronic device may attach the sketch-style cat ears to the head of the face image in the first image and attach the sketch-style cat whiskers to the sides of the mouth of the face image in the first image, thereby making the first image look more lovely. That is, the face image in the first image is beautified.
It can be understood that, in this embodiment, the electronic device may obtain gender information, age information, face information, hair style information, emotion information, and scene information corresponding to the face image in the first image to be beautified, and style information and user category information for beautification. Then, the electronic device can perform clustering processing on the information, automatically acquire a second image for beautifying processing according to the clustering processing, and perform beautifying processing on the face image in the first image by using the second image. Therefore, the embodiment can reduce manual operation in the face image beautification processing and improve the automation degree of the image beautification processing.
In some embodiments, in the process of 208, if the electronic device finds a plurality of images corresponding to the target data cluster from the preset image database, the electronic device may determine any one of the images as the second image corresponding to the target data cluster.
Referring to fig. 4 to 6, fig. 4 to 6 are schematic scene diagrams of an image processing method according to an embodiment of the present application.
For example, a user opens a camera application of the electronic device. The user takes a self-portrait image (first image) using the camera application of the electronic device. The user then clicks on the "Sticker" function. At this time, the electronic device may acquire the first image, and an interface of the electronic device may be as shown in fig. 4.
Then, the electronic device may determine a face image from the first image, and determine gender information, age information, face information, hair style information, and emotion information of the user corresponding to the face image. For example, using the deep-learned neural network model, the electronic device determines that the gender of the user corresponding to the face image is male, the age of the user is 30, the face shape is an elliptical face, the hairstyle is flat, and the emotion of the user is happy. And, the electronic device may collect a sound segment of the current surrounding environment through the microphone, as shown in fig. 5, and analyze the current scene through the sound segment. For example, the electronic device analyzes that the current scene is a quiet scene.
After the information is acquired, the electronic device can convert the information into corresponding feature tensor information. For example, the electronic device converts gender information into the feature tensor t1, converts age information into the feature tensor t2, converts face information into the feature tensor t3, converts hair style information into the feature tensor t4, converts emotion information into the feature tensor t5, and converts scene information into the feature tensor t 6.
Then, the electronic device may merge the converted feature tensors t1, t2, t3, t4, t5, and t6 to obtain the target feature tensor T. For example, the target feature tensor T = {t1, t2, t3, t4, t5, t6}. After obtaining the target feature tensor T, the electronic device may perform clustering processing on T, so as to determine the data cluster corresponding to the target feature tensor, that is, the target data cluster. Then, the electronic device may search a preset image database for the images corresponding to the target data cluster. Different data clusters and the images corresponding to each data cluster are stored in the preset image database.
For example, the electronic device finds that the images corresponding to the target data cluster in the preset image database include w1, w2, w3, w4, w5, and w6. The electronic device may then randomly select one of these 6 images as the second image for the beautification processing. For example, the electronic device can determine image w1 as the second image, where image w1 includes sketch-style cat ears and cat whiskers. Then, the electronic device can attach the sketch-style cat ears to the head of the face image in the first image and attach the sketch-style cat whiskers to the sides of the mouth of the face image in the first image, so that the first image looks more lovely; that is, the face image in the first image is beautified, as shown in fig. 6.
In another embodiment, when a plurality of images corresponding to the target data cluster are found in the preset image database, the electronic device may also beautify the face image with these images one by one, display each beautified image for the user to view, and let the user select which beautified image to adopt. For example, the electronic device may first beautify the face image using image w1 and display the beautified image r1 for the user to view; if the user likes the beautification effect, the user may choose to use r1. If not, the electronic device can beautify the face image using image w2 and display the beautified image r2 for the user to view. If the user still does not like the effect obtained with w2, the electronic device can beautify the face image with another image, until the user selects a satisfactory beautification effect or the beautification effects of all the candidate second images have been displayed.
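A minimal sketch of this candidate-selection loop; `beautify` and `user_accepts` are placeholders for the compositing step and the on-screen confirmation, both assumptions for illustration.

```python
def pick_beautified(first_image, candidates, beautify, user_accepts):
    # Try w1, w2, ... in turn, showing each beautified result r1, r2, ...
    for w in candidates:
        result = beautify(first_image, w)
        if user_accepts(result):
            return result  # the user picked a satisfactory effect
    return None  # every candidate's beautification effect was shown
```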
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 may include: a first obtaining module 301, a determining module 302, a second obtaining module 303, and a processing module 304.
A first obtaining module 301, configured to obtain a first image.
A determining module 302, configured to determine a face image from the first image, and determine attribute information corresponding to the face image.
A second obtaining module 303, configured to obtain a second image according to the attribute information.
A processing module 304, configured to perform an image beautification process on the face image in the first image by using the second image.
In one embodiment, the determining module 302 may be configured to:
and determining attribute information corresponding to the face image, wherein the attribute information at least comprises gender information, age information, face information, hair style information, emotion information and scene information corresponding to the face image.
In one embodiment, the second obtaining module 303 is further configured to: obtaining style information, wherein the style information is information about beautifying style of the image selected in advance.
Then, the second obtaining module 303 may be configured to: and acquiring a second image according to the attribute information and the style information.
In an embodiment, the second obtaining module 303 may be further configured to: and acquiring user category information.
Then, the second obtaining module 303 may be configured to: and acquiring a second image according to the attribute information, the style information and the user category information.
In one embodiment, the second obtaining module 303 may be configured to:
converting each attribute information, the style information and the user category information into corresponding feature tensors;
merging the converted feature tensors to obtain a target feature tensor;
clustering the target characteristic tensor to determine a target data cluster corresponding to the target characteristic tensor;
searching an image corresponding to the target data cluster from a preset image database, wherein different data clusters and images corresponding to the data clusters are stored in the preset image database;
and determining the searched image corresponding to the target data cluster as a second image.
In one embodiment, the second obtaining module 303 may be configured to:
acquiring historical data of application starting behaviors;
and determining user category information according to the application starting behavior historical data.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute the flow in the image processing method provided by this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include components such as a sensor 401, a memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The sensors 401 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a first image;
determining a face image from the first image, and determining attribute information corresponding to the face image;
acquiring a second image according to the attribute information;
and carrying out image beautification processing on the face image in the first image by using the second image.
Referring to fig. 9, an electronic device 500 may include a sensor 501, a memory 502, a processor 503, a display 504, a speaker 505, a microphone 506, and the like.
The sensor 501 may include a gyro sensor (e.g., a three-axis gyro sensor), an acceleration sensor, and the like.
The memory 502 may be used to store applications and data. Memory 502 stores applications containing executable code. The application programs may constitute various functional modules. The processor 503 executes various functional applications and data processing by running an application program stored in the memory 502.
The processor 503 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 502 and calling the data stored in the memory 502, thereby performing overall monitoring of the electronic device.
The display 504 may be used to display images or text, etc. The speaker 505 may be used to play sound signals, etc. The microphone 506 may be used to collect sound signals in the environment, etc.
In this embodiment, the processor 503 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 503 runs the application programs stored in the memory 502, so as to execute:
acquiring a first image; determining a face image from the first image, and determining attribute information corresponding to the face image; acquiring a second image according to the attribute information; and carrying out image beautification processing on the face image in the first image by using the second image.
In one embodiment, when the processor 503 performs the determining of the attribute information corresponding to the face image, it may perform: and determining attribute information corresponding to the face image, wherein the attribute information at least comprises gender information, age information, face information, hair style information, emotion information and scene information corresponding to the face image.
In one embodiment, before the acquiring the second image according to the attribute information, the processor 503 may further perform: obtaining style information, wherein the style information is information about beautifying style of the image selected in advance.
Then, when the processor 503 executes the acquiring of the second image according to the attribute information, it may execute: and acquiring a second image according to the attribute information and the style information.
In one embodiment, before the acquiring the second image according to the attribute information and the style information, the processor 503 may further perform: acquiring user category information;
then, when the processor 503 executes the acquiring of the second image according to the attribute information and the style information, it may execute: and acquiring a second image according to the attribute information, the style information and the user category information.
In one embodiment, when the processor 503 executes the acquiring of the second image according to the attribute information, the style information and the user category information, it may execute: converting each attribute information, the style information and the user category information into corresponding feature tensors; merging the converted feature tensors to obtain a target feature tensor; clustering the target characteristic tensor to determine a target data cluster corresponding to the target characteristic tensor; searching an image corresponding to the target data cluster from a preset image database, wherein different data clusters and images corresponding to the data clusters are stored in the preset image database; and determining the searched image corresponding to the target data cluster as a second image.
In one embodiment, when the processor 503 executes the obtaining of the user category information, it may execute: acquiring historical data of application starting behaviors; and determining user category information according to the application starting behavior historical data.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be run on the image processing apparatus, and a specific implementation process thereof is described in the embodiment of the image processing method in detail, and is not described herein again.
It should be noted that, for the image processing method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the image processing method described in the embodiment of the present application can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, comprising:
acquiring a first image;
determining a face image from the first image, and determining attribute information corresponding to the face image;
acquiring a second image according to the attribute information;
and carrying out image beautification processing on the face image in the first image by using the second image.
2. The image processing method according to claim 1, wherein the determining the attribute information corresponding to the face image comprises:
and determining attribute information corresponding to the face image, wherein the attribute information at least comprises gender information, age information, face information, hair style information, emotion information and scene information corresponding to the face image.
3. The image processing method according to claim 2, further comprising, before said acquiring a second image according to the attribute information: obtaining style information, wherein the style information is information about an image beautifying style selected in advance;
the obtaining of the second image according to the attribute information includes: and acquiring a second image according to the attribute information and the style information.
4. The image processing method according to claim 3, further comprising, before said obtaining a second image based on the attribute information and the style information: acquiring user category information;
the obtaining a second image according to the attribute information and the style information includes: and acquiring a second image according to the attribute information, the style information and the user category information.
5. The image processing method according to claim 4, wherein the acquiring a second image based on the attribute information, the style information, and the user category information comprises:
converting each attribute information, the style information and the user category information into corresponding feature tensors;
merging the converted feature tensors to obtain a target feature tensor;
clustering the target characteristic tensor to determine a target data cluster corresponding to the target characteristic tensor;
searching an image corresponding to the target data cluster from a preset image database, wherein different data clusters and images corresponding to the data clusters are stored in the preset image database;
and determining the searched image corresponding to the target data cluster as a second image.
6. The image processing method according to claim 4, wherein the acquiring user category information includes:
acquiring historical data of application starting behaviors;
and determining user category information according to the application starting behavior historical data.
7. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring a first image;
the determining module is used for determining a face image from the first image and determining attribute information corresponding to the face image;
the second acquisition module is used for acquiring a second image according to the attribute information;
and the processing module is used for performing image beautification processing on the face image in the first image by using the second image.
8. The image processing apparatus of claim 7, wherein the determination module is configured to:
and determining attribute information corresponding to the face image, wherein the attribute information at least comprises gender information, age information, face information, hair style information, emotion information and scene information corresponding to the face image.
9. A storage medium having stored thereon a computer program, characterized in that the computer program, when executed on a computer, causes the computer to execute the method according to any of claims 1 to 6.
10. An electronic device comprising a memory, a processor, wherein the processor is configured to perform the method of any of claims 1 to 6 by invoking a computer program stored in the memory.
CN201910282440.3A 2019-04-09 2019-04-09 Image processing method, image processing device, storage medium and electronic equipment Pending CN111798367A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282440.3A CN111798367A (en) 2019-04-09 2019-04-09 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282440.3A CN111798367A (en) 2019-04-09 2019-04-09 Image processing method, image processing device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111798367A 2020-10-20

Family

ID=72805750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282440.3A Pending CN111798367A (en) 2019-04-09 2019-04-09 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111798367A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017016160A1 (en) * 2015-07-30 2017-02-02 北京奇虎科技有限公司 Classification-based storage method for target picture, and corresponding terminal
CN108229674A (en) * 2017-02-21 2018-06-29 北京市商汤科技开发有限公司 The training method and device of cluster neural network, clustering method and device
CN107347138A (en) * 2017-06-30 2017-11-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and terminal
CN107545536A (en) * 2017-08-17 2018-01-05 上海展扬通信技术有限公司 The image processing method and image processing system of a kind of intelligent terminal
CN108921941A (en) * 2018-07-10 2018-11-30 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109086680A (en) * 2018-07-10 2018-12-25 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022151663A1 (en) * 2021-01-15 2022-07-21 北京市商汤科技开发有限公司 Access control machine interaction method and apparatus, access control machine assembly, electronic device, and medium
CN114143454A (en) * 2021-11-19 2022-03-04 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN114143454B (en) * 2021-11-19 2023-11-03 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN115936972A (en) * 2022-09-27 2023-04-07 阿里巴巴(中国)有限公司 Image generation method, remote sensing image style migration method and device
CN115936972B (en) * 2022-09-27 2024-03-22 阿里巴巴(中国)有限公司 Image generation method, remote sensing image style migration method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination