CN111860372A - Artificial intelligence-based expression package generation method, device, equipment and storage medium - Google Patents

Artificial intelligence-based expression package generation method, device, equipment and storage medium

Info

Publication number
CN111860372A
CN111860372A (application number CN202010724846.5A)
Authority
CN
China
Prior art keywords
target
key point
original
template
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010724846.5A
Other languages
Chinese (zh)
Inventor
熊军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202010724846.5A priority Critical patent/CN111860372A/en
Publication of CN111860372A publication Critical patent/CN111860372A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148 - Generating training patterns; Bootstrap methods characterised by the process organisation or structure, e.g. boosting cascade
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an artificial intelligence-based expression package generation method, device, equipment and storage medium. The method comprises the following steps: acquiring an original portrait image containing a head portrait; detecting the original portrait image with a face detection model to obtain an original key point diagram; acquiring a target expression package template and performing screenshot processing on the original key point diagram based on the target expression package template to obtain a target key point diagram; correcting the target key point diagram based on the target expression package template to obtain a target correction diagram; adapting the target correction diagram according to the target expression package template to obtain a standard portrait image; inputting the standard portrait image into a pre-trained image segmentation model to obtain a target portrait image; and combining the target expression package template with the target portrait image to generate a target expression package corresponding to the target expression package template. The method and device are used to generate the target expression package intelligently.

Description

Artificial intelligence-based expression package generation method, device, equipment and storage medium
Technical Field
The invention relates to the field of image processing, in particular to an expression package generation method, device, equipment and storage medium based on artificial intelligence.
Background
Nowadays, more and more people like to use expression packages (emoticons) when chatting. Although a great deal of expression package generation software is available on the market, existing software often requires lengthy manual focusing and matting when generating an expression package, which makes the operation complex; in addition, the existing generation process usually supports only a single background, so the resulting expression package is of poor quality.
Disclosure of Invention
The embodiment of the invention provides an artificial intelligence based expression package generation method, an artificial intelligence based expression package generation device, artificial intelligence based expression package generation equipment and a storage medium, and aims to solve the problems of complex operation and poor expression package generation effect.
An artificial intelligence-based emoticon generation method comprises the following steps:
acquiring an original portrait containing a head portrait;
detecting the original human figure by adopting a human face detection model to obtain an original key point diagram;
acquiring a target expression packet template, and performing screenshot processing on the original key point diagram based on the target expression packet template to acquire a target key point diagram;
correcting the target key point diagram based on the target expression packet template to obtain a target correction diagram;
performing adaptation processing on the target correction image according to the target expression packet template to obtain a standard portrait;
inputting the standard human figure into a pre-trained image segmentation model to obtain a target human figure;
and combining the target expression packet template with the target human image to generate a target expression packet corresponding to the target expression packet template.
An artificial intelligence-based emoticon generation apparatus, comprising:
the original portrait acquisition module is used for acquiring an original portrait containing a head portrait;
the original key point diagram acquisition module is used for detecting the original human image diagram by adopting a human face detection model to acquire an original key point diagram;
the target key point diagram acquisition module is used for acquiring a target expression package template, and performing screenshot processing on the original key point diagram based on the target expression package template to acquire a target key point diagram;
the target correction image acquisition module is used for correcting the target key point diagram based on the target expression package template to acquire a target correction image;
the standard portrait acquisition module is used for carrying out adaptation processing on the target correction image according to the target expression package template to acquire a standard portrait;
the target human figure acquisition module is used for inputting the standard human figure into a pre-trained image segmentation model to acquire a target human figure;
and the target expression package generating module is used for combining the target expression package template and the target human image to generate a target expression package corresponding to the target expression package template.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the artificial intelligence based emoticon generation method when executing the computer program.
A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the steps of the artificial intelligence based emoticon generation method described above.
According to the artificial intelligence-based expression package generation method, device, computer equipment and storage medium, the original portrait image is detected with the face detection model to obtain the original key point diagram, so that the face features and the face contour of the original portrait image can be determined quickly, providing a basis for subsequently generating the target expression package. A target expression package template is acquired and screenshot processing is performed on the original key point diagram based on the target expression package template to obtain the target key point diagram, which cuts away the background and reduces noise, lowering the difficulty of the subsequent segmentation, ensuring its precision, and avoiding mis-segmentation. The target key point diagram is corrected based on the target expression package template to obtain the target correction diagram, ensuring that the angle of the target correction diagram is the same as the angle of the portrait in the target expression package template, which helps to speed up the subsequent artificial-intelligence-based generation of the target expression package and ensures the accuracy of automatically generating it. The target correction image is adapted according to the target expression package template to obtain a standard portrait image whose portrait matches the size of the portrait to be replaced in the template, so that the subsequently generated target expression package is more vivid and is produced seamlessly. The standard portrait image is input into the pre-trained image segmentation model to obtain the target portrait image, improving segmentation efficiency and achieving accurate segmentation of the standard portrait image. Finally, the target expression package template is combined with the target portrait image to generate the target expression package corresponding to the template, so that the target expression package is generated intelligently.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of an emotion package generation method based on artificial intelligence in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for generating emoticon based on artificial intelligence according to an embodiment of the present invention;
FIG. 3 is another flow chart of a method for generating an emoticon based on artificial intelligence according to an embodiment of the invention;
FIG. 4 is another flow chart of a method for generating an emoticon based on artificial intelligence according to an embodiment of the invention;
FIG. 5 is another flow chart of a method for generating an emoticon based on artificial intelligence according to an embodiment of the invention;
FIG. 6 is another flow chart of a method for generating an emoticon based on artificial intelligence according to an embodiment of the invention;
FIG. 7 is another flow chart of a method for generating an emoticon based on artificial intelligence according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an artificial intelligence based emoticon generation apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for generating an expression package based on artificial intelligence provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the method is applied to an artificial intelligence-based expression package generation system, which comprises the client and the server shown in fig. 1; the client and the server communicate through a network and are used for intelligently generating a target expression package. The client, also called the user side, refers to the program that corresponds to the server and provides local services for the user. The client may be installed on, but is not limited to, various personal computers, laptops, smartphones, tablets, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 2, an artificial intelligence based method for generating an expression package is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
s201, acquiring an original portrait containing the head portrait.
The original portrait image is an image containing a head portrait that is acquired by an acquisition module on the computer equipment. The original portrait image includes the head portrait and a background; the background is the noise in the original portrait image other than the head portrait, that is, the factors that interfere with the head portrait, for example the natural environment captured in the original portrait image. In this embodiment, the client can acquire an original portrait image containing a head portrait through the acquisition module on the computer equipment and upload it to the server so that the server obtains the original portrait image; alternatively, the server may crawl an original portrait image containing a head portrait from a photo website. The acquisition modes include, but are not limited to, camera shooting and local upload.
S202, detecting the original human image by adopting a human face detection model to obtain an original key point diagram.
The face detection model refers to a pre-trained model for detecting face features and face contours in an original human image, wherein the face features include but are not limited to eyebrows, eyes, a nose, a mouth and the like.
In this embodiment, the face detection model is obtained by training with the dlib detection algorithm and is used to quickly obtain the original key point diagram. The dlib detection algorithm combines HOG features with a regression tree algorithm. HOG (Histogram of Oriented Gradients) is a descriptor based on shape and edge features that can be used to detect objects. The regression tree algorithm is the ERT (Ensemble of Regression Trees) cascaded regression algorithm, that is, a regression tree method based on gradient boosting learning. The cascaded regression tree algorithm uses cascaded regressors and requires a series of calibrated (annotated) face pictures as a training set from which the model is generated.
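The following is a minimal sketch of this landmark-detection step, assuming the publicly distributed dlib HOG face detector and the 68-point ERT shape-predictor model file; the helper name and file path are illustrative, not taken from the patent.

```python
import dlib
import cv2

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # ERT cascade

def detect_keypoints(image_path):
    """Return the image and, per detected face, the 68 (x, y) key points."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)  # upsample once to help with small faces
    keypoints = []
    for face in faces:
        shape = predictor(gray, face)
        keypoints.append([(p.x, p.y) for p in shape.parts()])
    return img, keypoints
```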
The original key point image is an image which detects the face characteristics and the face contour in the original human image by using the human face detection model and identifies the face characteristics and the face contour by using the original key points. It should be noted that each original keypoint includes point coordinates for subsequent correction processing. In the embodiment, the face detection model is adopted to detect the original human figure, so that the face characteristics and the face contour of the original human figure can be quickly detected, and a basis is provided for subsequently generating the target expression package.
S203, acquiring a target expression packet template, and performing screenshot processing on the original key point diagram based on the target expression packet template to acquire a target key point diagram.
The target expression package template refers to a template which is pre-stored in the data platform and contains the portrait expression and is selected by the user. The target key point diagram refers to a picture obtained by removing the background from the original key point diagram. The screenshot processing refers to a processing method of screenshot on an original key point diagram and removing part of background. In the embodiment, screenshot processing is performed on the original key point diagram based on the target expression package template so as to cut off the background and reduce noise, so that subsequent segmentation difficulty can be reduced, the precision of subsequent segmentation is ensured, and the condition of mistaken segmentation is avoided.
As an example, when the original portrait image is collected by the acquisition module, the user's expression is recognized to obtain an expression recognition result, and a corresponding target expression package template is recommended to the user according to that result, so that the target expression package is generated intelligently. For example, when the expression recognition result is "happy", a happier target expression package template is recommended to the user.
And S204, correcting the target key point diagram based on the target expression package template to obtain a target correction diagram.
The target correction graph is a picture obtained after processing the portrait in the target key point diagram according to the angle of the portrait to be replaced in the target expression package template. The correction processing refers to a processing process of adjusting the target key point diagram according to the angle of the portrait in the target expression package template. The to-be-replaced portrait refers to the portrait to be replaced in the target expression package template.
Specifically, the offset angle between the portrait in the target key point diagram and the portrait to be replaced is calculated according to the target expression package template, and the portrait in the target key point diagram is corrected according to the offset angle, so that the angle of the target correction diagram is ensured to be the same as the angle of the portrait in the target expression package template, the generation speed of the subsequent target expression package based on artificial intelligence is increased, and the accuracy of automatically generating the target expression package template is ensured. As an example, the to-be-replaced portrait refers to a face in the target expression package template. As another example, the to-be-replaced portrait refers to a face and a head in the target expression package template.
S205, carrying out adaptation processing on the target correction image according to the target expression package template to obtain a standard portrait.
The adaptation processing is a method for carrying out amplification processing or reduction processing on the target correction image according to the size of the portrait to be replaced in the target expression package template, so that the portrait in the target correction image is the same as the size of the portrait to be replaced. The standard portrait image is a picture obtained after the target correction image is subjected to adaptation processing. In this embodiment, the size of the portrait in the standard portrait is consistent with that of the portrait to be replaced in the target expression package template, so that the subsequently generated target expression package is better and more vivid, and the target expression package is generated seamlessly.
And S206, inputting the standard portrait image into a pre-trained image segmentation model to obtain a target portrait image.
The target portrait is an image which only contains the target portrait and is obtained by processing the standard portrait by adopting an image segmentation model, the interference of the background of the target portrait is eliminated, and the target portrait can be accurately synthesized with the target expression package template subsequently, so that the expression package is automatically generated.
The image segmentation model is a pixel-level image segmentation model. In this embodiment, the image segmentation model uses the DeepLabv3+ algorithm to improve image segmentation efficiency and accurately segment the standard portrait image. DeepLabv3+ can precisely segment the boundary between the portrait and the background of the standard portrait image by adding a simple and effective decoding module that refines the segmentation result. Because the original key point diagram has already been screenshot-processed in step S203, the proportion of the target portrait in the resulting target key point diagram is far greater than that of the background; this embodiment therefore simplifies the DeepLabv3+ algorithm: specifically, the Xception backbone network of DeepLabv3+ is replaced with a target shallow network consisting of convolution layers and downsampling layers, so as to speed up the computation.
And S207, combining the target expression packet template with the target human image to generate a target expression packet corresponding to the target expression packet template.
The target expression package is formed by replacing the target portrait with the portrait to be replaced in the target expression package template.
In this embodiment, the target expression package template and the target portrait image are superimposed by Alpha blending to form the target expression package. Specifically, the parameter formula x = alpha / 255 is used, that is, each pixel value of the Alpha mask corresponding to the target portrait image is divided by 255, and the image composition formula result = image * x + bg * (1 - x) is applied to the target portrait image and the target expression package template to automatically synthesize the target expression package, thereby generating the target expression package intelligently. Here bg is the color of the target expression package template and image is the color of the target portrait image. It can be understood that, in order to make the display effect of the synthesized target expression package better, the data precision needs to be improved in actual operation, for example x = double(alpha) / 255.0.
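A minimal sketch of this Alpha-blending step, assuming NumPy arrays in BGR layout and an 8-bit mask; the function name and argument layout are illustrative.

```python
import numpy as np

def alpha_blend(template_bgr, portrait_bgr, alpha_mask):
    """Alpha-blend the segmented portrait onto the expression package template.

    template_bgr, portrait_bgr: uint8 arrays of identical shape (H, W, 3).
    alpha_mask: uint8 array (H, W), 255 where the portrait should show.
    """
    x = alpha_mask.astype(np.float64) / 255.0      # x = double(alpha) / 255.0
    x = x[..., np.newaxis]                         # broadcast over the color channels
    blended = portrait_bgr.astype(np.float64) * x + template_bgr.astype(np.float64) * (1.0 - x)
    return blended.astype(np.uint8)
```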
According to the artificial intelligence-based expression package generation method, the original portrait image is detected with the face detection model to obtain the original key point diagram, so that the face features and the face contour of the original portrait image can be determined quickly, providing a basis for subsequently generating the target expression package. A target expression package template is acquired and screenshot processing is performed on the original key point diagram based on the target expression package template to obtain the target key point diagram, which cuts away the background and reduces noise, lowering the difficulty of the subsequent segmentation, ensuring its precision, and avoiding mis-segmentation. The target key point diagram is corrected based on the target expression package template to obtain the target correction diagram, ensuring that the angle of the target correction diagram is the same as the angle of the portrait in the target expression package template, which helps to speed up the subsequent artificial-intelligence-based generation of the target expression package and ensures the accuracy of automatically generating it. The target correction image is adapted according to the target expression package template to obtain a standard portrait image whose portrait matches the size of the portrait to be replaced in the template, so that the subsequently generated target expression package is more vivid and is produced seamlessly. The standard portrait image is input into the pre-trained image segmentation model to obtain the target portrait image, improving segmentation efficiency and achieving accurate segmentation of the standard portrait image. Finally, the target expression package template is combined with the target portrait image to generate the target expression package corresponding to the template, so that the target expression package is generated intelligently.
In an embodiment, as shown in fig. 3, step S203, that is, acquiring a target expression package template, and performing screenshot processing on an original key point diagram based on the target expression package template to acquire a target key point diagram, includes:
s301, if the target expression packet template is the facial expression packet template, generating a target key point frame according to original key points of the original key point diagram, and intercepting the original key point diagram by adopting a screenshot tool based on the target key point frame to obtain the target key point diagram.
The facial expression packet template is an expression packet template only containing faces, so that the faces of the original key point diagram are only intercepted when the target expression packet is generated. The original key points refer to points corresponding to the face features and the face contours in the original key point diagram.
The target key point frame refers to the positions of key areas of the face, including eyebrows, eyes, nose, mouth and the like, which are positioned in the original key point diagram. It should be noted that, in this example, the target keypoint box is matched with the facial expression package template.
Specifically, the target expression package template is selected on the client, so that the corresponding target expression package can be generated flexibly for different clients. When the target expression package template is a facial expression package template, only the face region of the original key point diagram needs to be intercepted; a target key point frame is therefore generated according to the original portrait key points on the original key point diagram, and the screenshot tool is called to intercept the region within the target key point frame to obtain the target key point diagram. The target key point diagram then contains only the face region, and intercepting the original key point diagram reduces the background and the noise, so that the subsequent segmentation processing is simple, convenient and accurate.
S302, if the target expression package template is a head expression package template, generating a key point frame to be processed according to the original key points of the original key point diagram, enlarging the key point frame to be processed according to a preset proportion to obtain a target key point frame that includes the head, and, based on the target key point frame, intercepting the original key point diagram with a screenshot tool to obtain the target key point diagram.
The head expression package template refers to that the target expression package template comprises a human face and a head. Therefore, only the face and the head of the original key point diagram need to be intercepted when the target expression packet is generated.
The to-be-processed keypoint box is a box generated from the original keypoints in the original keypoint diagram.
The target keypoint box refers to a box containing a face and a head. Since the target expression package in this example is a head expression package template and the original key points detected by the face detection model do not include a head, the key point frame to be processed needs to be enlarged according to a preset scale so that the target key point frame includes a face and a head, so that the target key point diagram fits the head expression package template.
The preset proportion is a preset value and is used for amplifying the key point frame to be processed.
Specifically, when the target expression package template is a head expression package template, since the original key points detected by the face detection model do not include the head, the key point frame to be processed needs to be enlarged according to the preset proportion to obtain a target key point frame that includes both the face and the head, and the screenshot tool is called to intercept the region within the target key point frame to obtain the target key point diagram, so that the target key point diagram includes the face and the head. In this example, intercepting the original key point diagram reduces the background, i.e. reduces the noise, so that the subsequent segmentation processing is simple and accurate.
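A sketch of how such a crop box can be built from the detected key points; the symmetric enlargement and the example ratio are assumptions, since the patent does not specify how the preset proportion is applied.

```python
import numpy as np

def keypoint_box(keypoints, img_shape, head_scale=None):
    """Build the crop box from detected key points.

    keypoints: list of (x, y) tuples from the face detection model.
    head_scale: None for a face-only template; otherwise a preset ratio
    (an assumed value such as 1.5) used to enlarge the box so it also
    covers the head, which the facial landmarks alone do not include.
    """
    pts = np.asarray(keypoints, dtype=np.float64)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    if head_scale is not None:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        w, h = (x1 - x0) * head_scale, (y1 - y0) * head_scale
        x0, x1 = cx - w / 2.0, cx + w / 2.0
        y0, y1 = cy - h / 2.0, cy + h / 2.0
    h_img, w_img = img_shape[:2]
    x0, y0 = max(int(x0), 0), max(int(y0), 0)
    x1, y1 = min(int(x1), w_img), min(int(y1), h_img)
    return x0, y0, x1, y1

# crop = img[y0:y1, x0:x1]   # the "screenshot" of the original key point diagram
```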
In the method for generating an expression package based on artificial intelligence provided by this embodiment, when the target expression package template is the facial expression package template, a target key point frame is generated according to the original key points of the original key point diagram, and based on the target key point frame, the original key point diagram is intercepted by using a screenshot tool to obtain the target key point diagram. When the target expression packet template is a head expression packet template, generating a key point frame to be processed according to original key points of an original key point diagram, amplifying the key point frame according to a preset proportion, acquiring the target key point frame comprising the head, intercepting the original key point diagram by adopting a screenshot tool based on the target key point frame, acquiring the target key point diagram, and intercepting the original key point diagram to reduce the background, namely reduce the noise, so that the subsequent segmentation processing is simple, convenient and accurate.
In an embodiment, as shown in fig. 4, in step S204, performing a rectification process on the target key point diagram based on the target expression package template to obtain a target rectification diagram, including:
s401, obtaining the point coordinates of the template key points of the portrait to be replaced in the target expression package template.
The template key points refer to the points of the face features or the face contour corresponding to the to-be-replaced portrait.
Specifically, the target expression package template is input into the face detection model, a template key point diagram corresponding to the target expression package template is output, and the coordinates of two template key points are obtained from the template key point diagram. In this example, the coordinates of the template key points are obtained in order to subsequently determine the face offset angle between the target key point diagram and the portrait to be replaced. The face offset angle refers to the offset angle of the portrait in the target key point diagram relative to the portrait to be replaced in the target expression package template.
S402, obtaining point coordinates of the target key points in the target key point diagram, and determining the face offset angle of the target key point diagram according to the coordinates of the template key points and the point coordinates of the target key points.
The target key points are points indicating face features and face contours in the target key point diagram, and it should be noted that the target key points are the same as the original key points. In order to ensure accuracy, the template key points and the target key points are points corresponding to the same human face features, for example, if the template key points are points corresponding to the eyes of the portrait to be replaced, the target key points are correspondingly points corresponding to the eyes of the portrait in the target portrait key points.
Specifically, the point coordinates of two template key points and the point coordinates of the corresponding two target key points are obtained; a first straight line is determined from the two template key points, a second straight line is determined from the two target key points, and the included angle between the first straight line and the second straight line is taken as the face offset angle. It should be noted that the target expression package template and the target key point diagram use the same face detection model, and the template key points and the target key points are determined in the same coordinate system, so as to ensure the accuracy of the correction processing.
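A minimal sketch of the angle computation and the rotation, assuming OpenCV for the warp; the sign convention of the correction depends on how the offset is measured, and the function names are illustrative.

```python
import math
import cv2

def face_offset_angle(template_pts, target_pts):
    """Angle (degrees) between the line through two template key points
    (e.g. the outer eye corners) and the line through the two
    corresponding target key points."""
    (tx0, ty0), (tx1, ty1) = template_pts
    (gx0, gy0), (gx1, gy1) = target_pts
    a_template = math.degrees(math.atan2(ty1 - ty0, tx1 - tx0))
    a_target = math.degrees(math.atan2(gy1 - gy0, gx1 - gx0))
    return a_target - a_template

def rectify(image, angle):
    """Rotate the target key point diagram by the given angle about its centre."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(image, m, (w, h))
```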
And S403, according to the face offset angle, correcting the target key point diagram to obtain a target correction diagram.
The rectification processing is processing for rotating the target key point diagram. The target correction map is a map obtained by processing a target key point map. In this example, the target key point diagram is corrected to ensure that the angle between the portrait in the target correction diagram and the portrait to be replaced in the target expression package template is consistent, and then the target expression package can be accurately generated.
In the method for generating an expression package based on artificial intelligence provided by this embodiment, coordinates of template key points of a portrait to be replaced in a target expression package template are obtained, so as to subsequently determine a face offset angle of a key band point diagram of the target expression package and the target portrait. And acquiring point coordinates corresponding to the target key points in the target key point diagram, and determining the face offset angle of the target key point diagram according to the coordinates of the template key points and the point coordinates of the target key points so as to ensure the accuracy of correction processing. And according to the face offset angle, correcting the target key point diagram to obtain a target correction diagram so as to ensure that the angle of the portrait in the target correction diagram is consistent with that of the portrait to be replaced in the target expression package template.
In an embodiment, as shown in fig. 5, step S205, namely performing adaptation processing on the target correction image according to the target expression package template to obtain a standard portrait image, includes:
s501, acquiring the length and the height of the portrait to be replaced in the target expression package template.
In this example, when the target emoticon template is a facial emoticon template, the length and height of the portrait to be replaced refer to the length and height of the face; and when the target expression packet template is the head expression packet template, the length and height of the portrait to be replaced refer to the length and height of the face and the head.
Specifically, a plurality of original expression package templates are stored in the system in advance, and each original expression package template carries template information. The user selects a target expression package template from the original expression package templates according to his or her preference, and the length and height of the portrait to be replaced are quickly determined from the template information corresponding to the target expression package template, so that the target expression package can be generated quickly later. The template information comprises a template label and the length and height of the portrait to be replaced in the original expression package template. The template label is the label corresponding to the original expression package template and is either a face label or a head label: a face label indicates that the original expression package template is a facial expression package template, and a head label indicates that it is a head expression package template.
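An illustrative shape for such template records; the keys and values are assumptions for the sketch, not the patent's actual storage schema.

```python
# Hypothetical template information records.
EMOTICON_TEMPLATES = [
    {"template_id": "t001", "label": "face", "replace_width": 120, "replace_height": 150},
    {"template_id": "t002", "label": "head", "replace_width": 160, "replace_height": 200},
]

def template_size(template):
    """Return the length (width) and height of the portrait to be replaced."""
    return template["replace_width"], template["replace_height"]
```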
And S502, based on the length and the height of the portrait to be replaced, carrying out adaptation processing on the target correction image by using the image processing model to obtain a standard portrait image.
The image processing model is a model for enlarging or reducing the target correction image, and the standard portrait image is the image obtained from that enlargement or reduction. In this example, after the length and height of the portrait to be replaced are obtained, the length and height of the portrait in the target correction image are determined according to the target key point frame determined in step S301 or S302; the relative proportion between the portrait in the target correction image and the portrait to be replaced is then calculated, and the target correction image is enlarged or reduced by the image processing model according to that relative proportion to obtain the standard portrait image, so that the portrait in the standard portrait image matches the length and height of the portrait to be replaced. This ensures that a target portrait image of the same size as the portrait to be replaced is segmented subsequently and provides technical support for subsequently generating the target expression package.
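A minimal sketch of this adaptation step, assuming OpenCV resizing with scale factors derived from the key point box and the template size; the function name and arguments are illustrative.

```python
import cv2

def adapt_to_template(rectified_img, crop_w, crop_h, target_w, target_h):
    """Scale the rectified portrait so it matches the size of the portrait
    to be replaced. crop_w/crop_h come from the target key point frame,
    target_w/target_h from the template information."""
    fx = target_w / float(crop_w)
    fy = target_h / float(crop_h)
    return cv2.resize(rectified_img, None, fx=fx, fy=fy, interpolation=cv2.INTER_LINEAR)
```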
The method for generating an expression package based on artificial intelligence provided by this embodiment obtains the length and height of the portrait to be replaced in the target expression package template so that the target expression package can be quickly generated later. Based on the length and height of the portrait to be replaced, the target correction image is adapted by the image processing model to obtain a standard portrait image, ensuring that a target portrait image of the same size as the portrait to be replaced is segmented subsequently and providing technical support for subsequently generating the target expression package.
In an embodiment, as shown in fig. 6, step S206, inputting the standard human figure image into a pre-trained image segmentation model to obtain a target human figure, includes:
s601, inputting the standard portrait into a pre-trained target shallow network, and processing the standard portrait based on a convolution layer and a down-sampling layer in the target shallow network to obtain a thumbnail.
The thumbnail is the image obtained after the standard portrait image has been processed by the target shallow network. The standard portrait image is processed with convolution layers and downsampling layers in order to extract its features and compress the image, which speeds up the processing.
Specifically, the standard portrait image is input into the target shallow network; the convolution layers in the target shallow network extract the convolution features of the standard portrait image to obtain a convolution feature map, and the downsampling layers process the convolution feature map to obtain the corresponding thumbnail. This dimension-reducing compression of the convolution feature map speeds up the subsequent computation.
Because the original key point diagram has been screenshot-processed, the proportion of the target portrait in the resulting target key point diagram is far greater than that of the background. This embodiment therefore simplifies the DeepLabv3+ algorithm: specifically, the Xception backbone network of DeepLabv3+ is replaced with the target shallow network composed of convolution layers and downsampling layers, so as to speed up the computation.
And S602, processing the thumbnail by adopting hole convolution and global pooling to obtain the face characteristic information, and obtaining the target figure from the standard figure according to the face characteristic information.
The face feature information comprises boundary information between a face and a background, and the face and the background can be segmented according to the face feature information.
Because a downsampling layer is used in S601, the resolution of the thumbnail decreases and local information is lost. In order to extract more context information and ensure that the subsequent image segmentation is accurate, the thumbnail is processed with atrous (dilated) convolution and global pooling to enlarge the receptive field and obtain richer face feature information, so that more image context is captured and the portrait and the background are segmented accurately. A decoding module then refines the division between the portrait and the background to obtain the target portrait image.
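A sketch of the idea in PyTorch: a shallow convolution-plus-downsampling backbone in place of Xception, followed by atrous convolutions and global pooling in the style of DeepLabv3+. Channel sizes, dilation rates and class count are illustrative assumptions, not the patent's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowSegNet(nn.Module):
    """Sketch of a simplified DeepLabv3+-style portrait/background segmenter."""

    def __init__(self, num_classes=2):
        super().__init__()
        # target shallow network: convolution layers + downsampling layers
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                  # downsample
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                  # downsample again -> "thumbnail"
        )
        # atrous convolutions enlarge the receptive field without further downsampling
        self.atrous = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=6, dilation=6), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=12, dilation=12), nn.ReLU(inplace=True),
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)            # image-level context
        self.project = nn.Conv2d(128, 64, 1)
        self.classifier = nn.Conv2d(64, num_classes, 1)       # portrait vs background

    def forward(self, x):
        size = x.shape[2:]
        feat = self.backbone(x)
        local = self.atrous(feat)
        ctx = self.global_pool(feat).expand_as(feat)          # broadcast pooled context
        fused = self.project(torch.cat([local, ctx], dim=1))
        logits = self.classifier(F.relu(fused))
        return F.interpolate(logits, size=size, mode="bilinear", align_corners=False)
```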
In the method for generating an emoticon based on artificial intelligence provided by this embodiment, a standard portrait image is input into a pre-trained target shallow network, and the standard portrait image is processed based on a convolution layer and a down-sampling layer in the target shallow network to obtain a thumbnail. And processing the thumbnail by adopting hole convolution and global pooling to obtain face characteristic information, and obtaining a target figure from the standard figure according to the face characteristic information.
In one embodiment, as shown in fig. 7, step S201, namely acquiring an original human image containing an avatar, includes:
s701, acquiring an initial image containing a head portrait, and judging whether the pixel value of the initial image is not less than a preset pixel threshold value.
Wherein the initial picture is a picture containing an avatar uploaded by a client. The preset pixel threshold is a preset pixel value and is used for judging whether the initial image can be used or not.
Specifically, an initial image is shot by using a camera module carried by a client, a pixel value corresponding to the shot initial image is compared with a preset pixel threshold value, and whether the pixel value corresponding to the shot initial image is not less than the preset pixel value is judged. And taking the initial image with the pixel value not less than the preset pixel threshold as an original human image, and excluding the initial image with the pixel value less than the preset pixel threshold to ensure that the subsequently generated target human image is clear. As another example, the initial graph may also be filtered by the user from a locally stored picture.
And S702, if the pixel value of the initial image is not less than the preset pixel threshold value, determining the initial image as an original human image.
In this example, when the pixel value of the initial image is not less than the preset pixel threshold, the initial image is determined as the original human image to ensure that the original human image is a clear image, so as to ensure that a clear target expression package can be obtained by subsequently processing the original human image.
And S703, if the pixel value of the initial image is smaller than the preset pixel threshold value, generating reminding information and sending the reminding information to the client, and repeatedly executing the step of obtaining the initial image containing the head portrait.
The reminder information is used to remind the client that the uploaded initial image does not have enough pixels and must be uploaded again. It can be understood that, when the pixel value of the initial image is less than the preset pixel threshold, the initial image is not clear enough, so reminder information is generated and sent to the client so that the user can upload a new initial image, thereby ensuring that a sufficiently clear target expression package can ultimately be generated.
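A minimal sketch of the acceptance check, interpreting the "pixel value" test as a check on the total pixel count; the threshold value and function name are assumptions.

```python
import cv2

PRESET_PIXEL_THRESHOLD = 640 * 480   # assumed value; the patent leaves it unspecified

def accept_initial_image(path):
    """Return the image as the original portrait image if it is large enough,
    otherwise None so the caller can send the reminder to the client."""
    img = cv2.imread(path)
    if img is None:
        return None
    h, w = img.shape[:2]
    if h * w >= PRESET_PIXEL_THRESHOLD:
        return img
    return None
```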
The method for generating an expression package based on artificial intelligence provided by this embodiment obtains an initial image including a head portrait, and determines whether a pixel value of the initial image is not less than a preset pixel threshold value, so as to ensure that a subsequently generated target human image is clear. And when the pixel value of the initial image is not less than the preset pixel threshold value, determining the initial image as an original human image to ensure that the original human image is a clear image, so as to ensure that the clear target expression package can be obtained by subsequently processing the original human image. And when the pixel value of the initial image is smaller than the preset pixel threshold value, generating reminding information and sending the reminding information to the client, and repeatedly executing the step of obtaining the initial image containing the head portrait so as to enable the user client to upload the initial image again, thereby ensuring that a sufficiently clear target human image can be generated at one time.
In one embodiment, in step S201, after obtaining the original human figure containing the head portrait, the method for generating an emoticon based on artificial intelligence further includes: and acquiring the number of the original human figures, if the number of the original human figures is larger than a preset number threshold, executing a multithreading processing mechanism, calling at least two threads to respectively execute the detection of the original human figures by adopting a face detection model, and acquiring an original key point diagram.
The preset number threshold is a preset threshold on the number of original portrait images, used to decide whether to start the multithreading mechanism. The multithreading mechanism is a mechanism that processes the original portrait images with at least two threads; in this example, a number of threads corresponding to the number of original portrait images is used, so as to speed up the processing of the original images. For example, 1 thread is used when the number of original portrait images does not exceed the preset number threshold, and 2 threads when the number is greater than the threshold but not more than twice the threshold.
Specifically, when the number of the original human images is greater than a preset number threshold, in order to accelerate the processing, the number of threads needed is determined according to the number of the original human images, and the corresponding number of threads are called to respectively perform the detection of the original human images by adopting a face detection model, so that an original key point diagram is obtained.
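One way to realise such a mechanism, sketched with Python's thread pool; the threshold value and the one-thread-per-threshold's-worth rule are an interpretation of the description, not the patent's exact scheme.

```python
from concurrent.futures import ThreadPoolExecutor

PRESET_COUNT_THRESHOLD = 8   # assumed value

def detect_all(portrait_images, detect_keypoints):
    """Run face detection over a batch of original portrait images, using
    roughly one thread per threshold's worth of images."""
    n = len(portrait_images)
    if n <= PRESET_COUNT_THRESHOLD:
        return [detect_keypoints(img) for img in portrait_images]
    workers = -(-n // PRESET_COUNT_THRESHOLD)        # ceiling division
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_keypoints, portrait_images))
```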
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, an artificial intelligence-based emoticon generating device is provided, and the artificial intelligence-based emoticon generating device corresponds to the artificial intelligence-based emoticon generating method in the above embodiment one to one. As shown in fig. 8, the artificial intelligence-based expression package generation apparatus includes an original portrait acquisition module 801, an original key point diagram acquisition module 802, a target key point diagram acquisition module 803, a target rectification diagram acquisition module 804, a standard portrait acquisition module 805, a target portrait acquisition module 806, and a target expression package generation module 807. The functional modules are explained in detail as follows:
an original portrait image acquisition module 801, configured to acquire an original portrait image including an avatar.
An original key point diagram obtaining module 802, configured to detect an original human figure by using a human face detection model, and obtain an original key point diagram.
And a target key point diagram obtaining module 803, configured to obtain a target expression package template, and perform screenshot processing on the original key point diagram based on the target expression package template to obtain a target key point diagram.
And a target correction map obtaining module 804, configured to perform correction processing on the target key point diagram based on the target expression package template, so as to obtain a target correction map.
And a standard portrait image acquisition module 805, configured to perform adaptation processing on the target correction image according to the target expression package template, to acquire a standard portrait image.
And a target portrait acquiring module 806, configured to input the standard portrait into a pre-trained image segmentation model to acquire a target portrait.
And a target expression package generating module 807, configured to combine the target expression package template and the target human figure, and generate a target expression package corresponding to the target expression package template.
Preferably, the target key point diagram obtaining module 803 includes: a first unit and a second unit.
A first unit: and if the target expression packet template is the facial expression packet template, generating a target key point frame according to the original key points of the original key point diagram, and intercepting the original key point diagram by adopting a screenshot tool based on the target key point frame to obtain the target key point diagram.
A second unit: if the target expression package template is a head expression package template, generating a key point frame to be processed according to the original key points of the original key point diagram, enlarging the key point frame to be processed according to a preset proportion to obtain a target key point frame that includes the head, and, based on the target key point frame, intercepting the original key point diagram with a screenshot tool to obtain the target key point diagram.
Preferably, the target correctional graph obtaining module 804 includes: the device comprises a template key point coordinate acquisition unit, a face offset angle acquisition unit and a correction processing unit.
And the point coordinate acquisition unit of the template key points is used for acquiring the point coordinates of the template key points of the portrait to be replaced in the target expression package template.
And the face offset angle acquisition unit is used for acquiring the point coordinates of the target key points in the target key point diagram and determining the face offset angle of the target key point diagram according to the coordinates of the template key points and the point coordinates of the target key points.
And the correction processing unit is used for correcting the target key point diagram according to the face offset angle to acquire a target correction diagram.
Preferably, the standard human figure obtaining module 805 includes: a portrait to be replaced length and height acquisition unit and an adaptation processing unit.
And the length and height acquisition unit of the portrait to be replaced is used for acquiring the length and height of the portrait to be replaced in the target expression package template.
And the adaptation processing unit is used for carrying out adaptation processing on the target correction image by using the image processing model based on the length and the height of the portrait to be replaced to obtain a standard portrait.
Preferably, the target portrait obtaining module 806 includes: a thumbnail acquisition unit and a face feature information acquisition unit.
The thumbnail acquisition unit is configured to input the standard portrait into a pre-trained target shallow network and to process the standard portrait with the convolution layer and the down-sampling layer of the target shallow network to obtain a thumbnail.
The face feature information acquisition unit is configured to process the thumbnail with dilated (atrous) convolution and global pooling to obtain face feature information, and to obtain the target portrait from the standard portrait according to the face feature information.
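A hedged PyTorch sketch of such a target shallow network, with a convolution layer and a down-sampling layer producing the thumbnail, and a dilated (atrous) convolution plus global pooling producing the face feature information; the channel counts, kernel sizes, and dilation rate are assumptions and are not taken from the disclosure.

import torch
import torch.nn as nn

class ShallowSegNet(nn.Module):
    """Toy shallow network: convolution + down-sampling gives a thumbnail,
    then dilated convolution + global pooling gives face feature information."""

    def __init__(self, in_channels=3):
        super().__init__()
        # Convolution layer followed by a down-sampling layer -> thumbnail.
        self.thumbnail = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),          # down-sampling by 2
        )
        # Dilated (atrous) convolution enlarges the receptive field.
        self.dilated = nn.Conv2d(16, 32, kernel_size=3, padding=2, dilation=2)
        # Global pooling squeezes the feature map into a face feature vector.
        self.global_pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        thumb = self.thumbnail(x)
        feat = torch.relu(self.dilated(thumb))
        face_features = self.global_pool(feat).flatten(1)   # shape (N, 32)
        return thumb, face_features

# Example: a 3x256x256 standard portrait yields a 16x128x128 thumbnail
# and a 32-dimensional face feature vector.
# thumb, feats = ShallowSegNet()(torch.randn(1, 3, 256, 256))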
Preferably, the original portrait obtaining module 801 includes: a judging unit, an original portrait determining unit, and a reminding information generating unit.
The judging unit is configured to acquire an initial image containing a head portrait and to judge whether the pixel value of the initial image is not less than a preset pixel threshold.
The original portrait determining unit is configured to determine the initial image as the original portrait if the pixel value of the initial image is not less than the preset pixel threshold.
The reminding information generating unit is configured to, if the pixel value of the initial image is less than the preset pixel threshold, generate reminding information, send it to the client, and repeat the step of acquiring an initial image containing a head portrait.
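A minimal sketch of this resolution check, assuming "pixel value" refers to the total pixel count of the initial image and that the reminding information reaches the client through a hypothetical notify_client callback.

def check_initial_image(image, pixel_threshold, notify_client):
    """Return the image as the original portrait if it is large enough;
    otherwise send a reminder so the user can upload a clearer picture."""
    height, width = image.shape[:2]
    if height * width >= pixel_threshold:
        return image                      # accepted as the original portrait
    notify_client("Image resolution too low, please upload a clearer photo.")
    return None                           # caller re-acquires the initial image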
Preferably, after the original portrait obtaining module 801, the artificial intelligence-based expression package generating apparatus further includes: an original portrait quantity obtaining unit.
The original portrait quantity obtaining unit is configured to obtain the quantity of original portraits and, if the quantity of original portraits is greater than a preset quantity threshold, to execute a multithreading mechanism that calls at least two threads to detect the original portraits with the face detection model in parallel and obtain the original key point diagrams.
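An illustrative sketch of the multithreading mechanism using Python's standard thread pool; detect_key_points stands in for the face detection model and, like the default quantity threshold, is an assumption for the example.

from concurrent.futures import ThreadPoolExecutor

def detect_all(original_portraits, detect_key_points, count_threshold=4):
    """Run face detection over every original portrait, switching to at
    least two worker threads once the batch exceeds the quantity threshold."""
    if len(original_portraits) <= count_threshold:
        return [detect_key_points(img) for img in original_portraits]

    workers = max(2, len(original_portraits) // 2)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_key_points, original_portraits))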
For specific limitations of the artificial intelligence-based expression package generating apparatus, reference may be made to the above limitations of the artificial intelligence-based expression package generation method, which are not repeated here. All or part of the modules in the artificial intelligence-based expression package generating apparatus may be implemented in software, in hardware, or in a combination of the two. The modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored, in software form, in the memory of the computer device, so that the processor can call them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the original expression package templates. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an artificial intelligence-based expression package generation method.
In an embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the steps of the artificial intelligence-based expression package generation method in the foregoing embodiments are implemented, for example, steps S201 to S207 shown in fig. 2 or the steps shown in fig. 3 to fig. 7, which are not repeated here. Alternatively, when executing the computer program, the processor implements the functions of each module/unit in the embodiment of the artificial intelligence-based expression package generating apparatus, for example, the functions of the original portrait obtaining module 801, the original key point diagram obtaining module 802, the target key point diagram obtaining module 803, the target correction image obtaining module 804, the standard portrait obtaining module 805, the target portrait obtaining module 806, and the target expression package generating module 807 shown in fig. 8, which are likewise not repeated here.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the steps of the artificial intelligence-based expression package generation method in the foregoing embodiments, for example, steps S201 to S207 shown in fig. 2 or the steps shown in fig. 3 to fig. 7, which are not repeated here. Alternatively, when executed by the processor, the computer program implements the functions of each module/unit in the embodiment of the artificial intelligence-based expression package generating apparatus, for example, the functions of the original portrait obtaining module 801, the original key point diagram obtaining module 802, the target key point diagram obtaining module 803, the target correction image obtaining module 804, the standard portrait obtaining module 805, the target portrait obtaining module 806, and the target expression package generating module 807 shown in fig. 8, which are likewise not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An expression package generation method based on artificial intelligence is characterized by comprising the following steps:
acquiring an original portrait containing a head portrait;
detecting the original portrait with a face detection model to obtain an original key point diagram;
acquiring a target expression package template, and performing screenshot processing on the original key point diagram based on the target expression package template to obtain a target key point diagram;
correcting the target key point diagram based on the target expression package template to obtain a target correction image;
performing adaptation processing on the target correction image according to the target expression package template to obtain a standard portrait;
inputting the standard portrait into a pre-trained image segmentation model to obtain a target portrait;
and combining the target expression package template with the target portrait to generate a target expression package corresponding to the target expression package template.
2. The artificial intelligence-based expression package generation method of claim 1, wherein the acquiring a target expression package template, and performing screenshot processing on the original key point diagram based on the target expression package template to obtain a target key point diagram comprises:
if the target expression package template is a facial expression package template, generating a target key point frame according to the original key points of the original key point diagram, and cropping the original key point diagram with a screenshot tool based on the target key point frame to obtain the target key point diagram;
if the target expression package template is a head expression package template, generating a key point frame to be processed according to the original key points of the original key point diagram, enlarging the key point frame to be processed according to a preset proportion to obtain a target key point frame that includes the whole head, and cropping the original key point diagram with a screenshot tool based on the target key point frame to obtain the target key point diagram.
3. The artificial intelligence-based expression package generation method of claim 1, wherein the correcting the target key point diagram based on the target expression package template to obtain a target correction image comprises:
acquiring point coordinates of template key points of the portrait to be replaced in the target expression package template;
acquiring point coordinates of target key points in the target key point diagram, and determining a face offset angle of the target key point diagram according to the point coordinates of the template key points and the point coordinates of the target key points;
and correcting the target key point diagram according to the face offset angle to obtain the target correction image.
4. The artificial intelligence-based expression package generation method of claim 1, wherein the performing adaptation processing on the target correction image according to the target expression package template to obtain a standard portrait comprises:
acquiring the length and height of the portrait to be replaced in the target expression package template;
and performing adaptation processing on the target correction image with an image processing model, based on the length and height of the portrait to be replaced, to obtain the standard portrait.
5. The artificial intelligence-based expression package generation method of claim 1, wherein the inputting the standard portrait into a pre-trained image segmentation model to obtain a target portrait comprises:
inputting the standard portrait into a pre-trained target shallow network, and processing the standard portrait based on the convolution layer and the down-sampling layer in the target shallow network to obtain a thumbnail;
and processing the thumbnail with dilated (atrous) convolution and global pooling to obtain face feature information, and obtaining the target portrait from the standard portrait according to the face feature information.
6. The artificial intelligence-based expression package generation method of claim 1, wherein the acquiring an original portrait containing a head portrait comprises:
acquiring an initial image containing a head portrait, and judging whether the pixel value of the initial image is not less than a preset pixel threshold;
if the pixel value of the initial image is not less than the preset pixel threshold, determining the initial image as the original portrait;
and if the pixel value of the initial image is less than the preset pixel threshold, generating reminding information, sending the reminding information to the client, and repeating the step of acquiring an initial image containing a head portrait.
7. The artificial intelligence-based expression package generation method of claim 1, wherein after the acquiring an original portrait containing a head portrait, the artificial intelligence-based expression package generation method further comprises:
acquiring the quantity of original portraits, and if the quantity of original portraits is greater than a preset quantity threshold, executing a multithreading mechanism that calls at least two threads to detect the original portraits with the face detection model in parallel and obtain the original key point diagrams.
8. An expression package generating device based on artificial intelligence is characterized by comprising:
the original portrait acquisition module is used for acquiring an original portrait containing a head portrait;
the original key point diagram acquisition module is used for detecting the original portrait by adopting a face detection model to acquire an original key point diagram;
the target key point diagram acquisition module is used for acquiring a target expression package template, and performing screenshot processing on the original key point diagram based on the target expression package template to acquire a target key point diagram;
the target correction image acquisition module is used for correcting the target key point diagram based on the target expression package template to acquire a target correction image;
the standard portrait acquisition module is used for carrying out adaptation processing on the target correction image according to the target expression package template to acquire a standard portrait;
the target portrait acquisition module is used for inputting the standard portrait into a pre-trained image segmentation model to acquire a target portrait;
and the target expression package generating module is used for combining the target expression package template with the target portrait to generate a target expression package corresponding to the target expression package template.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the artificial intelligence-based expression package generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the artificial intelligence-based expression package generation method of any one of claims 1 to 7.
CN202010724846.5A 2020-07-24 2020-07-24 Artificial intelligence-based expression package generation method, device, equipment and storage medium Pending CN111860372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010724846.5A CN111860372A (en) 2020-07-24 2020-07-24 Artificial intelligence-based expression package generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010724846.5A CN111860372A (en) 2020-07-24 2020-07-24 Artificial intelligence-based expression package generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111860372A true CN111860372A (en) 2020-10-30

Family

ID=72949586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010724846.5A Pending CN111860372A (en) 2020-07-24 2020-07-24 Artificial intelligence-based expression package generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111860372A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560730A (en) * 2020-12-22 2021-03-26 电子科技大学中山学院 Facial expression recognition method based on Dlib and artificial neural network

Similar Documents

Publication Publication Date Title
US11200404B2 (en) Feature point positioning method, storage medium, and computer device
US11182903B2 (en) Image mask generation using a deep neural network
US11455729B2 (en) Image processing method and apparatus, and storage medium
EP3882809A1 (en) Face key point detection method, apparatus, computer device and storage medium
CN110569721A (en) Recognition model training method, image recognition method, device, equipment and medium
WO2021169102A1 (en) Text image processing method and apparatus, and computer device and storage medium
WO2020252917A1 (en) Fuzzy face image recognition method and apparatus, terminal device, and medium
CN112102340B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
WO2021012382A1 (en) Method and apparatus for configuring chat robot, computer device and storage medium
CN111383232B (en) Matting method, matting device, terminal equipment and computer readable storage medium
CN111968134B (en) Target segmentation method, device, computer readable storage medium and computer equipment
US20200380690A1 (en) Image processing method, apparatus, and storage medium
CN110796663B (en) Picture clipping method, device, equipment and storage medium
WO2022002262A1 (en) Character sequence recognition method and apparatus based on computer vision, and device and medium
WO2022194079A1 (en) Sky region segmentation method and apparatus, computer device, and storage medium
CN111583280B (en) Image processing method, device, equipment and computer readable storage medium
CN111860372A (en) Artificial intelligence-based expression package generation method, device, equipment and storage medium
US11483463B2 (en) Adaptive glare removal and/or color correction
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN112101106B (en) Face key point determining method, device and storage medium
CN111754521B (en) Image processing method and device, electronic equipment and storage medium
CN111461971B (en) Image processing method, device, equipment and computer readable storage medium
CN110781056A (en) Screen detection method and device, computer equipment and storage medium
CN110599467B (en) Method and device for detecting non-beam limiter area, computer equipment and storage medium
TWI819438B (en) Image recognition device and image recognition method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination