CN112907702A - Image processing method, image processing device, computer equipment and storage medium - Google Patents

Image processing method, image processing device, computer equipment and storage medium

Info

Publication number
CN112907702A
CN112907702A
Authority
CN
China
Prior art keywords: target, face, map, image, appearance attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011438518.5A
Other languages
Chinese (zh)
Inventor
蔡昆京
刘欢
李子静
黄逸琛
薛恩鹏
陈政力
许可欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011438518.5A
Publication of CN112907702A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to an image processing method, an image processing device, computer equipment and a storage medium, and relates to the technical field of image processing. The method comprises the following steps: acquiring a target image, wherein the target image comprises a target face; carrying out face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination; acquiring a target map corresponding to the person-setting classification of the target face based on the face appearance attribute combination; and displaying the target map corresponding to the target face in the target image. With this method, when the target map is acquired, a target map representing the person-setting classification of the face can be acquired and displayed according to the face appearance attributes, so that the acquired target map is more accurate, the user operation steps for selecting a map are simplified, and the efficiency of face mapping is improved.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of multimedia, users often have a demand for mapping face images when performing image processing.
In the related art, when a user performs face mapping on an image, the user needs to select a map from a plurality of candidate maps displayed in a mapping interface and drag or paste it onto the face position in the face image to complete the face mapping.
In this method, the user needs to manually select one map from a large number of candidate maps and complete the mapping through a multi-step operation, so the operation steps are cumbersome and the efficiency of face mapping is reduced.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, computer equipment and a storage medium, which can improve the efficiency of face mapping, and the technical scheme is as follows:
in one aspect, an image processing method is provided, and the method includes:
acquiring a target image; the target image comprises a target face;
carrying out face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes;
acquiring a target map corresponding to the human setting classification of the target face based on the face appearance attribute combination;
and displaying the target map corresponding to the target face in the target image.
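For orientation, the four steps above can be sketched as a single processing function. The helpers below are hypothetical stubs with placeholder logic, not part of the disclosed implementation.

```python
# Illustrative sketch of the claimed flow; detect_appearance_attributes and
# lookup_target_map are hypothetical placeholders standing in for the attribute
# detection and map-matching steps described later in the description.

def detect_appearance_attributes(image) -> set:
    # placeholder: a real implementation would run face appearance attribute
    # detection on the target face in the target image
    return {"gender-male", "hair-short", "face-round"}

def lookup_target_map(attributes: set) -> str:
    # placeholder: a real implementation would match the attribute combination
    # against the appearance attribute labels of the candidate maps
    return "map_for_matching_person_setting_class.png"

def process_image(target_image):
    attributes = detect_appearance_attributes(target_image)  # attribute combination (>= 2 attributes)
    target_map = lookup_target_map(attributes)                # map for the person-setting classification
    return target_image, target_map                           # the map is then displayed over the target face
```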
In another aspect, an image processing method is provided, the method including:
displaying an image display interface, wherein the image display interface comprises an image editing triggering control and a target image; the target image comprises a target face;
responding to the received touch operation based on the image editing trigger control, and displaying an image editing interface, wherein the image editing interface comprises a mapping candidate area and an image preview area, the target image is displayed in the image preview area, and the mapping candidate area comprises a mapping automatic acquisition control;
in response to receiving a selection operation based on the automatic mapping acquisition control, displaying a target mapping corresponding to the target face in the target image, wherein the target mapping is a mapping of a personal classification corresponding to the target face acquired based on the face attribute of the target face.
In another aspect, there is provided an image processing apparatus, the apparatus including:
the target image acquisition module is used for acquiring a target image; the target image comprises a target face;
a face appearance attribute detection module, configured to perform face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes;
the target map obtaining module is used for obtaining a target map corresponding to the human setting classification of the target face based on the face appearance attribute combination;
and the target map display module is used for displaying the target map corresponding to the target face in the target image.
In a possible implementation manner, the target map obtaining module includes:
the matching degree obtaining sub-module is used for obtaining the matching degree of the target face and each map based on the face appearance attribute combination and the appearance attribute labels of each map; the appearance attribute labels of each of the maps are set based on the person-setting classification corresponding to that map;
and the target map determining submodule is used for determining the target map from each map based on the matching degree of the target face and each map.
In one possible implementation manner, the target map determining sub-module is configured to:
and obtaining the map with the highest matching degree corresponding to the target face from the maps as the target map.
In one possible implementation manner, the target map determining sub-module includes:
a map adding unit, configured to add, to the candidate map set, a map whose matching degree corresponding to the target face satisfies a specified condition; the specified condition includes at least one of: the matching degree corresponding to the target face is greater than a matching degree threshold, and the map is ranked in the top m positions when the maps are sorted in descending order of matching degree corresponding to the target face, where m is an integer greater than or equal to 2;
a target map determining unit, configured to randomly determine a map from the candidate map set as the target map; or, the time stamp corresponding to the current time is obtained; determining the target map from the set of candidate maps based on the timestamp; or, the method is used for displaying a map selection interface based on the candidate map set, the map selection interface includes options corresponding to the maps in the candidate map set, and in response to a selection operation of a target option in the map selection interface, the map corresponding to the target option is determined as the target map.
In a possible implementation manner, the face appearance attribute detection module is configured to input the target image into a face appearance attribute detection model, and obtain a face appearance attribute combination output by the face appearance attribute model;
the face appearance attribute detection model is obtained by training a sample image and a face appearance attribute combination label corresponding to the sample image; the sample image is an image containing a human face.
In one possible implementation, the apparatus further includes:
the face detection module is used for carrying out face detection on the target image before the target image is displayed corresponding to the target face in the target image by the target map display module, and determining the face position of the target face in the target image;
the face frame construction module is used for constructing a face frame based on the face position;
and the target map display module is used for displaying the target map corresponding to the target face in the target image based on the face frame.
In one possible implementation manner, the target map display module includes:
the position acquisition submodule is used for respectively acquiring the central position of the face frame and the central position of the target map;
and the target map display submodule is used for displaying the target map at a position where the center position of the face frame is aligned with the center position of the target map.
In a possible implementation manner, the target map display module further includes:
the area obtaining sub-module is used for respectively obtaining the area of the face frame and the area of the target image before the target mapping sub-module displays the target mapping at the position where the center position of the face frame is aligned with the center position of the target mapping;
the map adjusting submodule is used for adjusting the size of the target map based on the area of the face frame and the area of the target image;
and the target map display submodule is used for displaying the adjusted target map at a position where the center position of the face frame is aligned with the center position of the target map.
In one possible implementation manner, the map adjusting submodule includes:
a first height adjustment unit, configured to adjust the height of the target map to the height of the face frame in response to the ratio of the area of the face frame to the area of the target image being smaller than a specified threshold;
a first scaling obtaining unit, configured to obtain a first scaling of the height of the target map, where the first scaling is the ratio of the height of the target map before scaling to the height of the target map after scaling;
a first width adjustment unit, configured to adjust the width of the target map based on the first scaling.
In one possible implementation manner, the map adjusting submodule includes:
a second height adjusting unit, configured to adjust the height of the target map to n times the height of the face frame in response to the ratio of the area of the face frame to the area of the target image being greater than the specified threshold, where 0 < n < 1;
a second scaling obtaining unit, configured to obtain a second scaling of the height of the target map, where the second scaling is a ratio of the height of the target map before scaling to the height of the target map after scaling;
a second width adjustment unit for adjusting the width of the target map based on the second scaling.
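The two branches described by these units can be sketched as follows. The concrete threshold and the value of n are illustrative assumptions, since the application only requires an unspecified area threshold and 0 < n < 1.

```python
def resize_target_map(map_w, map_h, face_h, face_area, image_area,
                      area_threshold=0.25, n=0.8):
    # area_threshold and n are illustrative values; the application only states
    # that a specified threshold exists and that 0 < n < 1.
    if face_area / image_area < area_threshold:
        new_h = face_h            # small face: map height matches the face frame height
    else:
        new_h = n * face_h        # large face: map height is n times the face frame height
    scale = map_h / new_h         # ratio of height before scaling to height after scaling
    new_w = map_w / scale         # width is adjusted by the same proportion
    return new_w, new_h
```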
In another aspect, there is provided an image processing apparatus, the apparatus including:
the image display interface display module is used for displaying an image display interface, and the image display interface comprises an image editing triggering control and a target image; the target image comprises a target face;
the image editing interface display module is used for displaying an image editing interface, the image editing interface comprises a mapping candidate area and an image preview area, the target image is displayed in the image preview area, and the mapping candidate area comprises a mapping automatic acquisition control;
and the target map display module is used for displaying a target map corresponding to the target face in the target image, wherein the target map is a map of a personal setting classification corresponding to the target face, which is obtained based on the face attribute of the target face.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the above-mentioned image processing method.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the computer program being loaded and executed by a processor to implement the above-mentioned image processing method.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the image processing method provided in the above-described various alternative implementations.
The technical scheme provided by the application can comprise the following beneficial effects:
According to the image processing method provided by the embodiment of the application, face appearance attribute detection is performed on the target face in the target image to obtain the appearance attribute combination corresponding to the target face, a target map representing the person-setting classification of the target face is obtained based on the face appearance attribute combination, and the target map is displayed corresponding to the target face. Therefore, when the target map is obtained, a target map representing the person-setting classification of the face can be obtained and displayed according to the face appearance attributes, which ensures the accuracy of the obtained target map, simplifies the user operation steps for selecting a map, and improves the efficiency of face mapping.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 illustrates a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 illustrates a flow chart of an image processing method shown in an exemplary embodiment of the present application;
FIG. 3 illustrates a flow chart of an image processing method shown in an exemplary embodiment of the present application;
FIG. 4 illustrates a terminal interface diagram according to an exemplary embodiment of the present application;
FIG. 5 illustrates a terminal interface diagram according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating the construction of a face box according to an exemplary embodiment of the present application;
FIG. 7 is a diagram illustrating resizing of a target map, according to an exemplary embodiment of the present application;
FIG. 8 is a diagram illustrating resizing of a target map, according to an exemplary embodiment of the present application;
FIG. 9 illustrates a flow chart of an image processing method shown in an exemplary embodiment of the present application;
FIG. 10 is a diagram illustrating an image processing method according to an exemplary embodiment of the present application;
FIG. 11 illustrates a block diagram of an image processing apparatus according to an exemplary embodiment of the present application;
fig. 12 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present application;
FIG. 13 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The embodiment of the application provides an image processing method, which can improve the accuracy of the person-setting test and enhance the display effect of the test result. For ease of understanding, several terms referred to in this application are explained below.
1) Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
2) Computer Vision technology (CV) is a science that studies how to make a machine "see"; it uses a camera and a computer, instead of human eyes, to perform machine vision tasks such as identification, tracking and measurement on a target, and further performs image processing so that the computer produces an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (Three-Dimensional) technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
3) Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specially studies how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
4) Face detection is a technology whose goal is to find the positions of all faces in an image; the output of the corresponding algorithm is the coordinates of the circumscribed rectangle of each face in the image, and may also include pose information such as inclination angles.
The application provides a method for face mapping, which can obtain a target mapping corresponding to a personal setting classification of a target face in a target image based on the face appearance attribute of the target face, and display the target mapping reflecting the personal setting classification in a face area of the target image, so as to realize the purpose of shielding the face of a person by using the mapping representing the personal setting attribute in a specified occasion, and enable a user to directly obtain the personal setting attribute of a target object in the target image based on the personal setting mapping. In one possible implementation, the image processing method may be executed by a terminal, and in this embodiment, the terminal may be a computing device having a display screen. For example, the terminal may be a mobile terminal such as a smart phone, a tablet computer, an electronic book reader, or the terminal may also be an intelligent wearable device such as a smart watch, or the terminal may also be a fixed terminal such as an all-in-one computer.
Fig. 1 shows a schematic structural diagram of a terminal according to an exemplary embodiment of the present application, and as shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the touch system 150 is used to detect a touch operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal.
In the embodiment of the present application, the processor in the motherboard 110 may generate the interface content by executing or calling the program code and data stored in the memory, and display the generated interface content through the external input/output device 120. In the process of displaying the interface content, a touch operation performed when the user interacts with the interface may be detected by the capacitive touch system 150, and a key or other operations performed when the user interacts with the interface may also be detected by the external output/input device 120, such as a gesture operation, a voice operation, and so on.
Fig. 2 shows a flowchart of an image processing method according to an exemplary embodiment of the present application, where the image processing method may be executed by a computer device, and the computer device may be implemented as the terminal shown in fig. 1, or the computer device may be implemented as a server connected to the terminal shown in fig. 1, or the computer device may include the terminal and the server. As shown in fig. 2, the image processing method includes:
step 210, acquiring a target image; the target image includes a target face.
In a possible implementation manner, the target image is an image uploaded by a user and containing a human face, or the target image may also be an image containing a human face obtained by the user in real time through an image acquisition device of the terminal, such as a camera or a camera acquisition component, or the target image may be a frame of image containing a human face in a video.
In a possible implementation manner, the target image may include at least one target face, where the target face is a face whose face quality in the target image meets a specified condition, for example, the specified condition may be that the sharpness of the face reaches a sharpness threshold, and so on.
For convenience of description, in the embodiment of the present application, an example in which a target image includes a target face is used to describe the present application.
Step 220, performing face appearance attribute detection on a target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes.
In one possible implementation, the face appearance attribute detection may be face attribute detection on a target face in a target image, for example, the face attribute may include attributes of gender, hair, eyes, face shape, emotion, and the like of the face.
In a possible implementation manner, the computer device performs face appearance attribute detection on the target face in the target image by using a face appearance attribute detection technique; the technique may be a dedicated face attribute detection algorithm, or a combination of techniques such as calculating eye size from the positions of face key points, testing the face shape with a convolutional neural network such as FaceNet, and detecting expressions with a facial expression recognition algorithm combining LBP (Local Binary Pattern) and local sparse representation. The method used for face appearance attribute detection is not limited in the present application.
In the embodiment of the application, various types of face appearance attributes obtained based on the face appearance attribute recognition of the target face form a face appearance attribute combination, and the face appearance attribute combination comprises at least two face appearance attributes.
In a possible implementation manner, the face appearance attributes in the face appearance attribute combination may also be limited to designated aspects according to the actual situation; for example, only the combination of the face-shape attribute, the hair attribute, and the gender attribute is determined as the face appearance attribute combination. The above description of the types of face appearance attributes included in the face appearance attribute combination is only illustrative, and the present application does not limit the types and number of face appearance attributes included in the combination.
And step 230, acquiring a target map corresponding to the person-setting classification of the target face based on the face appearance attribute combination.
The target map is used for representing the human classification of the target object corresponding to the target face. The personal classification can include hiring, fleeing princess, staying up all night, cool cap, milk cool, nocturnal, wude, maiden, disuse, sweet wildboy, interhuman barbie, net lesson, breeder, sister, king of rice, dark black OG, high grade gentle, drunkard, security team leader, bear fondant, milky tea killer, carbohydrates, passing children, personal restrictions, active disuse, and the like.
And 240, displaying the target map corresponding to the target face in the target image.
In a possible implementation manner, in order to achieve the purpose of facial occlusion with a target map representing the person-setting classification, the target map is displayed over the face area of the target face, so that the user can focus on the clothing and displayed character of the target object in the target image without looking at the face; the person-setting classification of the target object is expressed visually through the target map, which improves the efficiency with which the user acquires the person-setting classification information of the target object.
In one possible implementation, a target map representing the personal device classification is displayed on the target image without face occlusion, wherein the target map can be displayed at any position of the target image based on the user's movement operation.
To sum up, in the image processing method provided in the embodiment of the present application, face appearance attribute detection is performed on the target face in the target image to obtain the appearance attribute combination corresponding to the target face, a target map representing the person-setting classification of the target face is obtained based on the face appearance attribute combination, and the target map is displayed corresponding to the target face. In this way, when the target map is obtained, a target map representing the person-setting classification of the face can be obtained and displayed according to the face appearance attributes, which ensures the accuracy of the obtained target map, simplifies the user operation steps for selecting a map, and improves the efficiency of face mapping.
In order to improve the accuracy of matching the face appearance attribute combination with the target map, in one possible implementation manner, each of the maps corresponds to an appearance attribute tag, and the computer device may obtain the target map based on the matching degree between the appearance attribute tag of each map and the appearance attribute combination obtained from the target face. The embodiment of the application is described by taking the case where the target map is displayed on the target face as an example. Fig. 3 shows a flowchart of an image processing method according to an exemplary embodiment of the present application. The image processing method may be executed by a computer device, and the computer device may be implemented as the terminal shown in fig. 1, or as a server connected to the terminal shown in fig. 1, or may include both the terminal and the server. As shown in fig. 3, the image processing method includes:
step 310, acquiring a target image; the target image includes a target face.
In one possible implementation, the target face may include a front face image, a side face image, a face image with partial occlusion; illustratively, the partially occluded face image may be a mask occluded face image, a glasses or sunglasses occluded face image, or the like.
Step 320, performing face appearance attribute detection on a target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes.
In a possible implementation manner, when performing face appearance attribute detection on a target face to obtain a face appearance attribute combination, effective face area detection needs to be performed on the target face first to determine whether a face appearance attribute can be detected from the target face, and the process includes:
obtaining an effective face area in a target face;
and in response to the fact that the proportion of the effective face area to the target face area is larger than the effective face proportion threshold value, carrying out face attribute detection on the target face in the target image to obtain a face appearance attribute combination.
That is, when the ratio of the effective face area in the target face to the target face area is greater than the effective face ratio threshold, determining that the face appearance attribute can be detected from the target face; and when the proportion of the effective face area in the target face to the target face area is smaller than the effective face proportion threshold, determining that the face appearance attribute cannot be detected from the target face.
In one possible implementation manner, in response to that the proportion of the effective face area in the target face to the target face area is smaller than an effective face proportion threshold, face detection failure prompt information is displayed; the face detection failure prompt information is used for indicating that the face attribute detection cannot be performed on the target face.
In one possible implementation, in response to the ratio of the effective face area to the target face area being greater than the effective face ratio threshold, face detection success prompt information is displayed, where the face detection success prompt information is used to indicate that face attribute detection is being performed on the target face.
In a possible implementation manner, a prompt box for the face area detection result is displayed in the terminal interface, and prompt information of the face area detection result is displayed in the prompt box; the prompt information includes face detection success prompt information and face detection failure prompt information. In response to determining that the face appearance attribute cannot be detected from the target face, the face detection failure prompt information is displayed in the prompt box; in response to determining that the face appearance attribute can be acquired from the target face, the face detection success prompt information is displayed in the prompt box. Fig. 4 shows a terminal interface schematic diagram according to an exemplary embodiment of the present application. As shown in fig. 4, taking the target image 410 as an image uploaded by a user and containing a target face 420, and taking an effective face threshold of 30% as an example: in the target image 410, the area of the target face 420 blocked by the non-face pattern 430 is greater than 70%, that is, the proportion of the effective face area to the area of the target face is less than 30%; it is therefore determined that the face appearance attribute cannot be detected from the target face 420 in the target image 410, and the face detection result prompt box 440 shown in the terminal interface displays the face detection failure prompt message "No face detected, this map cannot be used". Fig. 5 shows a terminal interface schematic diagram according to another exemplary embodiment of the present application. As shown in fig. 5, in the target image 510 the area of the target face 520 blocked by other patterns 530 is less than 70%, that is, the proportion of the effective face area to the area of the target face is greater than 30%; it is therefore determined that the face appearance attribute can be detected from the target face 520 in the target image 510, and the face detection success prompt message "Detected your face! The AI is analyzing your appearance score and determining your exclusive label!" is displayed in the face detection result prompt box 540 shown in the terminal interface.
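A minimal sketch of the effective-face check and the two prompt messages, using the 30% threshold from the example above; the threshold value is otherwise implementation-defined.

```python
def check_effective_face(effective_area, face_area, threshold=0.30):
    # threshold = 0.30 mirrors the 30% example in the description; the actual
    # effective-face ratio threshold is an illustrative assumption here.
    ratio = effective_area / face_area
    if ratio > threshold:
        # attribute detection can proceed on this face
        return True, "Detected your face! The AI is analyzing your appearance score..."
    # attribute detection cannot be performed on this face
    return False, "No face detected, this map cannot be used"
```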
In a possible implementation manner, in order to improve the accuracy of obtaining the face appearance attribute, a face appearance attribute detection model may be trained, and the trained face appearance attribute detection model is used to process a target face to obtain a corresponding face appearance attribute, which is implemented as follows:
inputting the target image into a human face appearance attribute detection model to obtain a human face appearance attribute combination output by a human face appearance attribute model;
the face appearance attribute detection model is obtained by training a sample image and a face appearance attribute combination label corresponding to the sample image; the sample image is an image containing a human face.
In a possible implementation manner, the training process of the face appearance attribute detection model may be implemented as:
inputting the sample image into a face appearance attribute model to obtain a predicted face appearance attribute combination output by the face appearance attribute model;
combining the face appearance attribute combination label corresponding to the sample image and the predicted face appearance attribute to input a loss function to obtain a loss function value;
updating parameters of the face appearance attribute model based on the loss function values;
and repeating the process until the loss function value corresponding to the face appearance attribute model is converged.
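The training procedure described in these steps can be sketched as a generic supervised loop. The sketch assumes a PyTorch-style model and a multi-label binary cross-entropy loss; the application does not name a framework or loss function, so both are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_attribute_model(model, dataloader, epochs=10, lr=1e-3):
    # Generic supervised sketch of the described procedure; BCEWithLogitsLoss and
    # Adam are illustrative choices, not specified by the application.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                       # in the application: repeat until the loss converges
        for sample_image, attribute_labels in dataloader:
            predicted = model(sample_image)       # predicted face appearance attribute combination
            loss = criterion(predicted, attribute_labels)
            optimizer.zero_grad()
            loss.backward()                       # compute gradients of the loss value
            optimizer.step()                      # update the model parameters
    return model
```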
The attribute models corresponding to different face appearance attributes can be obtained by training sample images and different face appearance attribute labels corresponding to the sample images.
In an exemplary approach, the face appearance attribute combination label of the sample image may be set by a developer.
In another exemplary scenario, the sample image may be an image in a history mapping operation of the user, and correspondingly, the face appearance attribute combination label may be an appearance attribute label of a mapping in a history mapping selection record of the user.
For example, in an application program of a terminal supporting a map, each map is provided with a corresponding appearance attribute tag; in the history map pasting operation, a user determines maps corresponding to history images through operations such as manual acquisition or map replacement automatically acquired by computer equipment, and the like, and after the terminal acquires the confirmation operation of the user, the image confirmed by the user is associated with the map; when the server acquires the sample image and the face appearance attribute label combination corresponding to the sample object, the terminal can send the historical operation image and the appearance attribute label of the chartlet corresponding to the historical operation image to the server, so that the server acquires the historical operation image as the sample image, acquires the appearance attribute label of the chartlet corresponding to the historical operation image as the face appearance attribute combination label corresponding to the sample image, and trains the face appearance attribute model.
In another exemplary scenario, the sample image is an image in a history mapping operation of a designated group, the designated group being a cluster of users having the same or similar mapping selection habits as the users; correspondingly, the face appearance attribute combination label is an appearance attribute label of a map in a historical map selection record of the user cluster; the images in the history map operation correspond to the appearance attribute labels of the maps in the history map selection record.
For example, the server records the selection times or the selection frequency of each user for each map, the users with the repetition rate of selecting the maps larger than a specified threshold value form a specified group, and the users in the specified group are determined to be the users with the same or similar map selection habits; when the server acquires the sample image and the face appearance attribute combination label corresponding to the sample image, the historical operation image of each user in the designated group is acquired as the sample image, and the appearance attribute label of the chartlet corresponding to each historical operation image determined by each user is acquired as the face appearance attribute label corresponding to each historical operation image.
In a possible implementation manner, the face appearance attribute detection model is a multi-classification model, that is, a face image is input into the face appearance attribute detection model, so that a plurality of face appearance attributes corresponding to the face image can be obtained.
Or, in another possible implementation manner, the face appearance attribute detection model includes attribute models respectively corresponding to different face appearance attributes, the same face image may be input into the attribute models corresponding to different face appearance attributes, and a plurality of face appearance attributes, that is, a face appearance attribute combination, corresponding to the face image are obtained based on an output result of the attribute models corresponding to the different face appearance attributes.
In one possible implementation, the human face appearance attributes include at least two of gender (gender), age (age), expression (expression), charm (beauty), glasses (glass), hair style (hair), mask (mask), five sense organs, and posture (pitch, roll, yaw).
Step 330, obtaining the matching degree of the target face and each map based on the face appearance attribute combination and the appearance attribute labels of each map; the appearance attribute label of each of the individual maps is set based on the personal classification corresponding to each map.
In one possible implementation manner, the matching degree between the target face and each map is calculated based on the appearance attribute labels of that map, i.e. P = a/b × 100%, where P represents the matching degree, a represents the number of appearance attribute labels of the map that match the face appearance attribute combination, and b represents the total number of appearance attribute labels of the map.
Assume that the face appearance attribute combination includes: hairstyle-midsplit/buzz cut, eyes-small eyes/red phoenix eyes, face type-melon seed face, mood-happy/normal. The appearance attribute labels corresponding to map A are: hairstyle-explosive head, eyes-small eyes, face-melon seed face, emotion-angry; the two attributes of eyes and face match, so the matching degree between the face appearance attribute combination and map A can be calculated to be 50%. If map B also has four appearance attribute labels and three of them match the face appearance attribute combination, the matching degree between the face appearance attribute combination and map B can be calculated to be 75%.
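The formula above can be sketched directly. The attribute strings below follow the worked example and are illustrative; the actual label vocabulary is defined by the map library.

```python
def matching_degree(face_attributes, map_labels):
    # P = a / b * 100%, where a is the number of map labels that also appear in
    # the face appearance attribute combination and b is the total number of
    # appearance attribute labels on the map.
    a = len(set(face_attributes) & set(map_labels))
    b = len(map_labels)
    return a / b * 100

# Worked example from the description (attribute names are illustrative strings):
face = {"hairstyle-midsplit", "eyes-small", "face-melon-seed", "mood-happy"}
map_a = {"hairstyle-explosive", "eyes-small", "face-melon-seed", "emotion-angry"}
print(matching_degree(face, map_a))   # 50.0: two of map A's four labels match
```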
And 340, determining a target map from each map based on the matching degree of the target face and each map.
The target map is used for representing the human classification of the target object corresponding to the target face.
In a possible implementation manner, a map with the highest matching degree corresponding to a target face in each map is obtained as a target map.
That is, after the matching degrees of the appearance attribute labels of the multiple maps and the appearance attribute combination of the target face are obtained, the computer device may sort the multiple maps in the order from high to low of the matching degrees, and select one with the highest matching degree as the target map.
In a possible implementation manner, a matching degree threshold is set in the computer device, and the process of determining the target map from each map is implemented as follows:
adding, to the candidate map set, the maps whose matching degrees corresponding to the target face satisfy a specified condition; the specified condition includes at least one of: the matching degree corresponding to the target face is greater than a matching degree threshold, and the map is ranked in the top m positions when the maps are sorted in descending order of matching degree corresponding to the target face, where m is an integer greater than or equal to 2;
randomly determining a map from the candidate map set as the target map; or acquiring a timestamp corresponding to the current time and determining the target map from the candidate map set based on the timestamp; or displaying a map selection interface based on the candidate map set, where the map selection interface includes options corresponding to the maps in the candidate map set, and in response to a selection operation on a target option in the map selection interface, determining the map corresponding to the target option as the target map.
That is, when determining whether to add the first map to the candidate set, there may be three cases: 1) Adding the first map into the candidate map set in response to the fact that the matching degree of the first map corresponding to the target face is larger than the matching degree threshold value; 2) responding to the first map which is arranged at the top m bits according to the sequence from large to small of the matching degree corresponding to the target face, and adding the first map into the candidate map set; 3) in response to that the matching degree of the first map corresponding to the target face is larger than a matching degree threshold value, and the first map is arranged at the top m positions in the descending order of the matching degree corresponding to the target face, adding the first map into the candidate map set; the first map is any one of the maps;
based on the above case 3), an example is given: and if the matching degree of the first map corresponding to the target face is smaller than the matching degree threshold, not adding the first map into the candidate map set, or if the matching degree of the first map corresponding to the sequence number of the sequence number arranged from large to small according to the matching degree corresponding to the target face is larger than the matching degree threshold, but the first map is arranged after the first m bits in the sequence of the matching degree corresponding to the target face from large to small, not adding the first map into the candidate map set. Such as: the matching degree of the map A corresponding to the target face is 50%, and if the threshold value of the matching value is 75%, the first map can be determined to be a non-target map; the matching degree of the tile B corresponding to the target face is 75%, and the matching degree of the tile B is m front in the arrangement sequence of the matching degrees from large to small, the tile B can be added to the candidate tile set, but if the matching degree of the tile B corresponding to the target face is 75%, but the matching degree of the tile B is m front in the arrangement sequence of the matching degrees from large to small, the tile B is not added to the candidate tile set.
In one possible implementation manner, when the target map is determined from the candidate map set by using a timestamp, the timestamp corresponding to the current time is acquired; the timestamp may be acquired in units of days, or in units of weeks, so as to achieve the purpose of testing daily or weekly fortune.
Taking the case where the timestamp is acquired in days as an example, the number of the day corresponding to the timestamp is used as the sequence number of the target map in the ordering of the candidate map set; for example, if today is the 6th, the 6th map sorted in the specified order in the candidate map set is selected as the target map.
In one possible implementation, in response to the timestamp being greater than the maximum number of maps in the candidate map set, a remainder operation is performed to obtain the sequence number of the target map; for example, if today is the 28th but the candidate map set only contains 10 maps, the remainder operation gives a sequence number of 8, and the 8th ranked map in the candidate map set is acquired as the target map.
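A minimal sketch of the day-of-month selection described above; the wrap-around behaviour when the remainder is zero is an assumption of this sketch, since the application only gives the remainder example.

```python
import datetime

def pick_by_timestamp(candidate_maps):
    # The day of the month indexes into the ordered candidate set, with a
    # remainder when the day exceeds the number of candidates (e.g. day 28 with
    # 10 candidates gives sequence number 8).
    day = datetime.date.today().day
    index = day % len(candidate_maps)   # remainder when the day exceeds the candidate count
    return candidate_maps[index - 1]    # 1-based sequence number; a zero remainder falls back to the last map
```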
In a possible implementation manner, the maps in the candidate map set can be sequentially displayed on a map selection interface, and a target map is determined from the candidate target maps based on the selection operation of the user on options corresponding to the maps contained in the map selection interface; the tile selection interface may be displayed on the terminal interface after the set of candidate tiles is determined.
Alternatively, in another possible implementation, the map selection interface is presented in response to a trigger operation on the target map. That is, after the computer device determines the target maps based on the matching degree, other candidate target maps in the candidate map set are displayed on the map display interface based on the trigger operation on the target maps, and then the computer device may select one map from the candidate target maps to replace the determined target map based on the selection operation of the user.
In a possible implementation manner, the target map is obtained from the face appearance attribute combination based on a correspondence between face appearance attribute combinations and target maps, where the correspondence may be a matching rule between the face appearance attribute combination and the target map; the matching rule is stored in the computer device, so that the terminal can match the face appearance attribute combination with the target map based on the matching rule. For example, a face appearance attribute combination including a round-eye attribute, a round-face attribute, a long-hair attribute, and a gender-male attribute may be associated with a target map representing the "cool cap" person-setting category.
In a possible implementation manner, the target map may be one of map images stored locally by the computer device and corresponding to the personal classification, or the target map may be one of map images acquired by the computer device from the server based on the obtained personal classification of the target face; the target map is used to represent the human classification of the target face.
In a possible implementation manner, each map may further have a person label, the person label of each map is unique, and the target map may be determined based on a correspondence between the face appearance attribute combination and the person label.
In one possible implementation, the correspondence between the face appearance attribute combination and the person-setting labels is related to the type and number of face appearance attributes contained in the combination: the more types and the larger the number of face appearance attributes the combination contains, the fewer and more precise the corresponding person-setting labels are; the fewer types and the smaller the number of face appearance attributes the combination contains, the more numerous and broader the corresponding person-setting labels are. For example, if the face appearance attributes contained in the combination are short hair and male, the corresponding person-setting labels may include labels such as "worker", "network lessee" and "security team leader"; but if the face appearance attributes contained in the combination are short hair, male, glasses, and 18 years old, the corresponding person-setting labels can be narrowed down to "network lessee".
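As a sketch of this correspondence, a hypothetical rule table can map each person-setting label to the appearance attributes it requires, so that richer attribute combinations match fewer labels; the rule table and attribute names below are illustrative assumptions.

```python
def candidate_person_labels(attributes, rules):
    # rules: mapping from person-setting label to the set of appearance attributes
    # it requires, e.g. {"network lessee": {"short hair", "male", "glasses"}}.
    # A label matches when all of its required attributes are present, so a
    # larger attribute combination narrows the matching label set.
    attrs = set(attributes)
    return [label for label, required in rules.items() if required <= attrs]
```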
In a possible implementation manner, when the number of the person-set labels determined based on the face appearance attribute combination is more than one, the maps corresponding to the at least one person-set label are respectively obtained, that is, at least one candidate target map is obtained.
In one possible implementation manner, at least one candidate target map is displayed in a terminal interface, and a target map is obtained from the at least one candidate target map based on a selection operation of a user.
Alternatively, in a possible implementation, one target map is randomly acquired from at least one candidate target map.
In a possible implementation manner, in order to improve the accuracy of matching the face appearance attribute combination with the target map, a person-setting label generation model may be trained; the trained person-setting label generation model is used to process the face appearance attribute combination to obtain the corresponding person-setting label, and the target map is then obtained according to the person-setting label. This is expressed as:
inputting the face appearance attribute combination into a human set label generation model to obtain a target human set label corresponding to the face appearance attribute combination output by the human set label generation model;
acquiring a target map based on the target person label;
the human set label generation model is obtained through human face appearance attribute combination samples and human set label training corresponding to the human face appearance attribute combination samples.
In one possible implementation, the training process for generating the model for the person tag may be:
inputting the face appearance attribute combination sample into a human set label generation model to obtain a predicted human set label output by the human set label generation model;
inputting the human face appearance attribute combination sample corresponding human label and the predicted human label into a loss function to obtain a loss function value;
updating parameters of the human tag generation model based on the loss function values;
and repeating the process until the loss function value corresponding to the human-set label generation model is converged.
In one possible implementation, the face appearance attribute combination sample and the corresponding human label may be set by a developer.
And 350, performing face detection on the target image, and determining the face position of the target face in the target image.
In a possible implementation manner, determining the face position of the target face in the target image may refer to acquiring position coordinates of the target face based on feature points of the target image, where the feature points may be edge points of the target face, or the vertices of a specified shape that can enclose the target face; for example, the specified shape may be a rectangle, a square, or the like. In the embodiment of the present application, the feature points are described as the vertices of a specified shape that can enclose the target face.
In a possible implementation manner, the step of determining the face position may be performed before step 320, or may be performed in synchronization with step 320; the embodiment of the present application describes one of these cases as an example.
The face detection may be performed by a specified face detector, such as a Dlib face detector, an OpenCV CNN face detector, or a face detection convolutional neural network such as MTCNN.
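For illustration only, a minimal detection sketch using OpenCV's bundled Haar cascade detector is shown below; the cascade file name and detection parameters are assumptions, and any of the detectors mentioned above could be substituted.

```python
import cv2

def detect_face_frame(image_bgr):
    """Return (x, y, w, h) of the first detected face, or None."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # The rectangle (x, y, w, h) can serve directly as the face frame
    # constructed in step 360.
    x, y, w, h = faces[0]
    return int(x), int(y), int(w), int(h)
```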
Step 360, constructing a face frame based on the face position.
In a possible implementation manner, the face frame is constructed from the obtained coordinates of the feature points. Fig. 6 shows a schematic diagram of constructing the face frame according to an exemplary embodiment of the present application. As shown in fig. 6, a target image 610 includes a target face 620; the computer device performs face detection on the target image to obtain the position of the target face in the target image, and constructs a face frame 630 based on that position. It should be noted that the face frame shown in the embodiment of the present application may or may not be displayed on the terminal interface.
Step 370, displaying the target map corresponding to the target face in the target image based on the face frame.
In one possible implementation manner, this step is implemented as follows:
acquiring the center position of the face frame and the center position of the target map respectively;
and displaying the target map at a position where the center position of the face frame is aligned with the center position of the target map.
In order to display the target map on the target face and keep the target map consistent with the position of the target face, in the embodiment of the present application the display position of the target map on the target face is determined based on a posting method that aligns the center position of the face frame with the center position of the target map.
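For illustration only, a minimal sketch of this center-aligned posting is shown below; the coordinate convention (top-left origin, face frame given as x, y, width, height) is an assumption.

```python
def paste_position(face_frame, map_size):
    """face_frame: (x, y, w, h) of the face frame; map_size: (map_w, map_h).
    Returns the (left, top) pixel at which the target map should be drawn so
    that its center coincides with the center of the face frame."""
    x, y, w, h = face_frame
    map_w, map_h = map_size
    center_x = x + w // 2
    center_y = y + h // 2
    return center_x - map_w // 2, center_y - map_h // 2
```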
In a possible implementation manner, the target map may include a text type map and an image type map, that is, the representation form of the content in the target map may be text or image.
In a possible implementation manner, the posting direction of the target map can be changed by the user through a rotation touch operation on the target map. Assuming that the default initial direction of the target map set by the computer device is a first direction, the user may rotate the target map by an arbitrary angle through the rotation operation, thereby changing the display direction of the target map to a second direction.
In one possible implementation, the target map has an initial size, that is, an initial height and width, and the initial sizes of different maps may differ. In order to make the target map fit the target face better, in one possible implementation, before displaying the target map at the position where the center position of the face frame is aligned with the center position of the target map, the method further includes:
respectively acquiring the area of a face frame and the area of a target image;
adjusting the size of the target map based on the area of the face frame and the area of the target image;
and the step of displaying the target map at the position where the center position of the face frame is aligned with the center position of the target map is implemented as follows:
and displaying the adjusted target map at a position where the center position of the face frame is aligned with the center position of the target map.
In one possible implementation, the size of the target map is adjusted by scaling the target map proportionally.
In one possible implementation manner, in response to the ratio of the area of the face frame to the area of the target image being smaller than a specified threshold, the height of the target map is adjusted to the height of the face frame;
obtaining a first scaling of the height of the target map, wherein the first scaling is the proportion of the height of the target map before scaling to the height of the target map after scaling;
the width of the target map is adjusted based on the first scaling.
In one possible implementation, the height and width of the target map are scaled with reference to the larger of the height and width of the face frame; that is, if the height of the face frame is the larger of the two, the height of the target map is scaled based on the height of the face frame, and the width of the target map is then scaled by the same scaling ratio; if the width of the face frame is the larger of the two, the width of the target map is scaled based on the width of the face frame, and the height of the target map is then scaled by the same scaling ratio. The embodiment of the present application is described by taking the case where the height of the face frame is the larger of the two as an example.
FIG. 7 is a schematic diagram illustrating resizing of a target map according to an exemplary embodiment of the present application. As shown in FIG. 7, assume that the height and width of the target map 710 are h_s and w_s, the position of the center pixel of the face frame 720 in the image is (x_r, y_r), and the height and width of the face frame are h_r and w_r. First, the ratio r of the area of the face frame (w_r * h_r) to the area of the target image is calculated. If the ratio is smaller than the specified threshold (for example, the threshold is taken to be 0.3), the target face occupies a small proportion of the target image; in that case, for aesthetic reasons, the target map can be adapted appropriately to the target face: the height h_s of the target map is adjusted to the height h_r of the face frame, the scaling (first scaling) based on the target map height is z_1 = h_s / h_r, and the width w_s of the target map is scaled according to this ratio, giving the adjusted target map 730.
In one possible implementation, in response to the ratio of the area of the face frame to the area of the target image being greater than the specified threshold, the height of the target map is adjusted to n times the height of the face frame, where 0 < n < 1;
obtaining a second scaling of the height of the target map, wherein the second scaling is the proportion of the height of the target map before scaling to the height of the target map after scaling;
the width of the target map is adjusted based on the second scaling.
Fig. 8 is a schematic diagram illustrating resizing of a target map according to an exemplary embodiment of the present application. As shown in fig. 8, and corresponding to fig. 7, the ratio r of the area of the face frame (w_r * h_r) to the area of the target image is calculated. If the ratio is greater than the specified threshold (for example, 0.3), the target face occupies a large proportion of the target image; in that case, for aesthetic reasons, the area of the target map can be reduced appropriately relative to the target face: the height h_s of the target map is adjusted to n times the height h_r of the face frame, for example 0.7 times, the scaling (second scaling) based on the target map height is z_2 = h_s / (n * h_r), and the width w_s of the target map is scaled according to this ratio, giving the adjusted target map 830.
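For illustration only, the sketch below combines the two branches above into one resizing routine, taking the face frame height as the reference dimension; the threshold 0.3 and the factor n = 0.7 follow the examples of figs. 7 and 8 and are otherwise assumptions.

```python
def resize_target_map(map_w, map_h, face_w, face_h, image_w, image_h,
                      threshold=0.3, n=0.7):
    """Return the adjusted (width, height) of the target map."""
    ratio = (face_w * face_h) / float(image_w * image_h)
    if ratio < threshold:
        # Small face: adjust the map height to the face frame height.
        target_h = face_h
    else:
        # Large face: adjust the map height to n times the face frame height.
        target_h = n * face_h
    scale = map_h / target_h      # first/second scaling z = h_s / adjusted height
    target_w = map_w / scale      # scale the width by the same ratio
    return int(round(target_w)), int(round(target_h))
```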
In a possible implementation manner, based on alpha channel blending, the target map is displayed on the target face in the target image, and the target image with the target map posted on it is obtained:
RGB_output = RGB_map × α_map + RGB_original × (1 − α_map)
where RGB_output represents the color values of the output target image, RGB_map represents the color values of the target map, α_map represents the alpha values of the target map, and RGB_original represents the color values of the original target image.
In a possible implementation manner, the target map may also be displayed on the target face in the target image based on other techniques, for example by direct pasting, Poisson fusion, and the like.
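For illustration only, a minimal NumPy sketch of the alpha channel blending formula above is shown below; it assumes the resized target map carries an 8-bit alpha channel and has already been positioned over the face frame region of the target image.

```python
import numpy as np

def alpha_blend(region_rgb, map_rgba):
    """region_rgb: (H, W, 3) slice of the original image under the map;
    map_rgba: (H, W, 4) target map with an alpha channel in [0, 255]."""
    map_rgb = map_rgba[..., :3].astype(np.float32)
    alpha = map_rgba[..., 3:4].astype(np.float32) / 255.0
    original = region_rgb.astype(np.float32)
    # RGB_output = RGB_map * alpha_map + RGB_original * (1 - alpha_map)
    blended = map_rgb * alpha + original * (1.0 - alpha)
    return blended.astype(np.uint8)
```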
To sum up, in the image processing method provided in the embodiment of the present application, face appearance attribute detection is performed on the target face in the target image to obtain the appearance attribute combination corresponding to the target face, a target map of the human setting classification capable of representing the target face is obtained based on the face appearance attribute combination, and the target map is displayed corresponding to the target face. In this way, when the target map is acquired, a target map representing the human setting classification of the face can be acquired and displayed according to the face appearance attributes, so that the acquired target map is more accurate, the user operation steps for selecting a map are simplified, and the efficiency of face mapping is improved.
Fig. 9 shows a flowchart of an image processing method according to an exemplary embodiment of the present application. The method may be executed by a terminal, which may be implemented as the terminal shown in fig. 1. As shown in fig. 9, the image processing method includes:
step 910, displaying an image display interface, wherein the image display interface comprises an image editing triggering control and a target image; the target image includes a target face.
Step 920, in response to receiving a touch operation based on an image editing trigger control, displaying an image editing interface, where the image editing interface includes a mapping candidate area and an image preview area, where a target image is displayed in the image preview area, and the mapping candidate area includes a mapping automatic acquisition control.
Step 930, in response to receiving a selection operation based on the map automatic acquisition control, displaying a target map corresponding to the target face in the target image, where the target map is a map of the human setting classification corresponding to the target face, acquired based on the face attributes of the target face.
In one possible implementation manner, in response to the proportion of the effective face area to the target face area being smaller than an effective face proportion threshold, face detection failure prompt information is displayed on the image editing interface, where the face detection failure prompt information is used to indicate that face attribute detection cannot be performed on the target face;
and in response to the proportion of the effective face area to the target face area being greater than the effective face proportion threshold, face detection success prompt information is displayed on the image editing interface, where the face detection success prompt information is used to indicate that face attribute detection is being performed on the target face.
Fig. 10 is a schematic diagram illustrating an image processing method according to an exemplary embodiment of the present application, which may be implemented in a terminal. As shown in fig. 10, an image display interface is displayed in the terminal interface. The image display interface includes an image editing trigger control 1001 and a target image 1002. In response to receiving a touch operation based on the image editing trigger control ("map") 1001, an image editing interface is displayed; the image editing interface includes a map candidate area 1010 and an image display area 1020, where the map candidate area 1010 includes a map automatic acquisition control 1011 and at least one candidate map 1012, and the target image 1002 is displayed in the image display area 1020. The target image may be a frame of face image in a video, a face image uploaded by the user, or a face image acquired in real time by the terminal's image acquisition device. In one possible implementation manner, when the terminal detects that the target image 1002 exists, face detection is automatically performed on the target image; alternatively, in another possible implementation manner, the terminal performs face detection on the target image 1002 after receiving a selection operation based on the map automatic acquisition control (the "color value evaluation" control) 1011, so as to determine the face position of the target face. Fig. 10 takes the latter case as an example: in response to receiving the selection operation based on the map automatic acquisition control 1011, the terminal performs face detection on the target image 1002. If the proportion of the effective target face area to the target face area in the target image is smaller than the effective face proportion threshold, face detection failure prompt information (not shown in the figure) is displayed in the image display area 1020; if the proportion is greater than the effective face proportion threshold, face detection success prompt information 1022 is displayed in the image display area 1020, face appearance attribute detection is performed on the target face to obtain the face appearance attribute combination of the target face, a target map 1023 is obtained based on the face appearance attribute combination, the target map 1023 is zoomed, and the processed target map 1024 is displayed, center-aligned, at the position of the target face in the target image.
Fig. 11 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present application. The apparatus may be applied to a computer device and, as shown in fig. 11, includes:
a target image obtaining module 1110, configured to obtain a target image; the target image comprises a target face;
a face appearance attribute detection module 1120, configured to perform face appearance attribute detection on a target face in a target image, so as to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes;
a target map obtaining module 1130, configured to obtain a target map corresponding to the personal setting classification of the target face based on the face appearance attribute combination;
and a target map display module 1140, configured to display a target map corresponding to a target face in the target image.
In one possible implementation, the target map obtaining module 1130 includes:
the matching degree obtaining sub-module is used for obtaining the matching degree of the target face and each map based on the face appearance attribute combination and the appearance attribute labels of each map; the appearance attribute label of each map in each map is set based on the personal classification corresponding to each map;
and the target map determining submodule is used for determining a target map from each map based on the matching degree of the target face and each map.
In one possible implementation, the target map determining sub-module is configured to:
and obtaining the map with the highest matching degree corresponding to the target face from all maps as the target map.
In one possible implementation, the target map determining sub-module includes:
the map adding unit is used for adding maps, of which the matching degrees corresponding to the target human faces meet the specified conditions, into the candidate map set; the specified conditions include: the matching degree corresponding to the target face is greater than a threshold value of the matching degree, at least one item of the previous m bits is arranged according to the sequence from the large matching degree corresponding to the target face to the small matching degree corresponding to the target face, and m is an integer greater than or equal to 2;
the target map determining unit is used for randomly determining a map from the candidate map set to be used as a target map; or, the time stamp corresponding to the current time is obtained; determining a target map from the set of candidate maps based on the timestamps; or, the method is used for displaying the map selection interface based on the candidate map set, the map selection interface comprises options corresponding to the maps in the candidate map set, and the map corresponding to the target option is determined as the target map in response to the selection operation of the target option in the map selection interface.
In a possible implementation manner, the face appearance attribute detection module 1120 is configured to input the target image into the face appearance attribute detection model to obtain the face appearance attribute combination output by the face appearance attribute detection model;
the face appearance attribute detection model is obtained by training a sample image and a face appearance attribute combination label corresponding to the sample image; the sample image is an image containing a human face.
In one possible implementation, the apparatus further includes:
a face detection module, configured to perform face detection on the target image before the target map display module 1140 displays the target map corresponding to the target face in the target image, and determine a face position of the target face in the target image;
the face frame construction module is used for constructing a face frame based on the face position;
the target map display module 1140 is configured to display a target map corresponding to a target face in a target image based on the face frame.
In one possible implementation, the target map display module 1140 includes:
the position acquisition submodule is used for respectively acquiring the central position of the face frame and the central position of the target map;
and the target map display submodule is used for displaying the target map at a position where the center position of the face frame is aligned with the center position of the target map.
In one possible implementation, the target map display module 1140 further includes:
the area obtaining sub-module is used for respectively obtaining the area of the face frame and the area of the target image before the target mapping sub-module displays the target mapping at the position where the center position of the face frame is aligned with the center position of the target mapping;
the map adjusting submodule is used for adjusting the size of the target map based on the area of the face frame and the area of the target image;
the target map display submodule is used for displaying the adjusted target map at a position where the center position of the face frame is aligned with the center position of the target map.
In one possible implementation, the map adjusting submodule includes:
the first height adjusting unit is used for adjusting the height of the target map to the height of the face frame in response to the fact that the area of the face frame accounts for the area of the target image and is smaller than a specified threshold value;
a first scaling obtaining unit, configured to obtain a first scaling of a height of the target map, where the first scaling is a ratio of the height of the target map before scaling to the height of the target map after scaling;
and the first width adjusting unit is used for adjusting the width of the target map based on the first scaling.
In one possible implementation, the map adjusting submodule includes:
a second height adjusting unit, which is used for adjusting the height of the target map to n times the height of the face frame in response to the ratio of the area of the face frame to the area of the target image being greater than the specified threshold, where 0 < n < 1;
a second scaling obtaining unit, configured to obtain a second scaling of the height of the target map, where the second scaling is a ratio of the height of the target map before scaling to the height of the target map after scaling;
and the second width adjusting unit is used for adjusting the width of the target map based on the second scaling.
To sum up, the image processing apparatus provided in the embodiment of the present application is applied to a computer device. Face appearance attribute detection is performed on the target face in the target image to obtain the appearance attribute combination corresponding to the target face, a target map of the human setting classification capable of representing the target face is obtained based on the face appearance attribute combination, and the target map is displayed corresponding to the target face. In this way, when the target map is acquired, a target map representing the human setting classification of the face can be acquired and displayed according to the face appearance attributes, so that the acquired target map is more accurate, the user operation steps for selecting a map are simplified, and the efficiency of face mapping is improved.
Fig. 12 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present application. The apparatus may be applied to a computer device and, as shown in fig. 12, includes:
the image display interface display module 1210 is configured to display an image display interface, where the image display interface includes an image editing trigger control and a target image; the target image comprises a target face;
the image editing interface display module 1220 is configured to display an image editing interface, where the image editing interface includes a mapping candidate region and an image preview region, where a target image is displayed in the image preview region, and the mapping candidate region includes a mapping automatic acquisition control;
and a target map display module 1230, configured to display a target map corresponding to the target face in the target image, where the target map is a map of the personal classification corresponding to the target face obtained based on the face attribute of the target face.
To sum up, the image processing apparatus provided in the embodiment of the present application is applied to a computer device. Face appearance attribute detection is performed on the target face in the target image to obtain the appearance attribute combination corresponding to the target face, a target map of the human setting classification capable of representing the target face is obtained based on the face appearance attribute combination, and the target map is displayed corresponding to the target face. In this way, when the target map is acquired, a target map representing the human setting classification of the face can be acquired and displayed according to the face appearance attributes, so that the acquired target map is more accurate, the user operation steps for selecting a map are simplified, and the efficiency of face mapping is improved.
Fig. 13 is a block diagram illustrating the structure of a computer device 1300 according to an example embodiment. The computer device 1300 may be the terminal shown in fig. 1, such as a smartphone, tablet, or desktop computer. Computer device 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, computer device 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented based on at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the methods provided by the method embodiments herein.
In some embodiments, computer device 1300 may also optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1305 may be one, providing the front panel of the computer device 1300; in other embodiments, the display 1305 may be at least two, respectively disposed on different surfaces of the computer device 1300 or in a folded design; in still other embodiments, the display 1305 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 1300 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic location of the computer device 1300 for navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 1309 is used to supply power to the various components in the computer device 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the computer apparatus 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the computer device 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to collect a 3D motion of the user with respect to the computer device 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1313 may be disposed on the side bezel of the computer device 1300 and/or underneath the display screen 1305. When the pressure sensor 1313 is disposed on the side frame of the computer device 1300, a user's holding signal to the computer device 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the computer device 1300. When a physical key or vendor Logo is provided on the computer device 1300, the fingerprint sensor 1314 may be integrated with the physical key or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is reduced. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
The proximity sensor 1316, also known as a distance sensor, is typically disposed on a front panel of the computer device 1300. The proximity sensor 1316 is used to capture the distance between the user and the front face of the computer device 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the computer device 1300 gradually decreases, the processor 1301 controls the display 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance between the user and the front face of the computer device 1300 gradually increases, the processor 1301 controls the display 1305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture shown in fig. 13 does not constitute a limitation of the computer device 1300, which may include more or fewer components than shown, may combine certain components, or may adopt a different arrangement of components.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
In an exemplary embodiment, a computer readable storage medium is also provided, for storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement all or part of the steps of the above-mentioned image processing method. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any one of the embodiments of fig. 2, fig. 3 or fig. 9.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring a target image; the target image comprises a target face;
carrying out face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes;
acquiring a target map corresponding to the human setting classification of the target face based on the face appearance attribute combination;
and displaying the target map corresponding to the target face in the target image.
2. The method according to claim 1, wherein the obtaining of the target map corresponding to the human-defined classification of the target face based on the combination of the face appearance attributes comprises:
acquiring the matching degree of the target face and each chartlet based on the face appearance attribute combination and the appearance attribute labels of each chartlet; the appearance attribute label of each map in each map is set based on the personal classification corresponding to each map;
and determining the target map from each map based on the matching degree of the target face and each map.
3. The method of claim 2, wherein determining the target map from the respective maps based on the degree of matching between the target face and the respective maps comprises:
and obtaining the map with the highest matching degree corresponding to the target face from the maps as the target map.
4. The method of claim 3, wherein determining the target map from the respective maps based on the degree of matching between the target face and the respective maps comprises:
adding the maps whose matching degrees corresponding to the target face meet a specified condition to a candidate map set; the specified condition includes at least one of the following: the matching degree corresponding to the target face is greater than a matching degree threshold, or the map is ranked in the top m positions when the maps are sorted in descending order of the matching degree corresponding to the target face, m being an integer greater than or equal to 2;
randomly determining a map from the candidate map set as the target map; or acquiring a timestamp corresponding to the current time; determining the target map from the set of candidate maps based on the timestamp; or displaying a map selection interface based on the candidate map set, wherein the map selection interface comprises options corresponding to the maps in the candidate map set, and determining the map corresponding to the target option as the target map in response to the selection operation of the target option in the map selection interface.
5. The method according to claim 1, wherein the performing the face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination comprises:
inputting the target image into a face appearance attribute detection model to obtain the face appearance attribute combination output by the face appearance attribute detection model;
the face appearance attribute detection model is obtained by training a sample image and a face appearance attribute combination label corresponding to the sample image; the sample image is an image containing a human face.
6. The method of claim 1, wherein before presenting the target map corresponding to the target face in the target image, the method further comprises:
carrying out face detection on the target image, and determining the face position of the target face in the target image;
constructing a face frame based on the face position;
the displaying the target map corresponding to the target face in the target image comprises:
and displaying the target map corresponding to the target face in the target image based on the face frame.
7. The method of claim 6, wherein the presenting the target map corresponding to the target face in the target image based on the face frame comprises:
respectively acquiring the central position of the face frame and the central position of the target map;
and displaying the target map at a position where the center position of the face frame is aligned with the center position of the target map.
8. The method of claim 7, wherein before the presenting the target map at a location where a center position of the face frame aligns with a center position of the target map, the method further comprises:
respectively acquiring the area of the face frame and the area of the target image;
adjusting the size of the target map based on the area of the face frame and the area of the target image;
the displaying the target map at a position where a center position of the face frame is aligned with a center position of the target map includes:
and displaying the adjusted target map at a position where the center position of the face frame is aligned with the center position of the target map.
9. The method of claim 8, wherein the resizing the target map based on the area of the face frame and the area of the target image comprises:
in response to the ratio of the area of the face frame to the area of the target image being smaller than a specified threshold, adjusting the height of the target map to the height of the face frame;
obtaining a first scaling of the height of the target map, wherein the first scaling is a ratio of the height of the target map before scaling to the height of the target map after scaling;
adjusting a width of the target map based on the first scaling.
10. The method of claim 8, wherein the resizing the target map based on the area of the face frame and the area of the target image comprises:
in response to the ratio of the area of the face frame to the area of the target image being greater than the specified threshold, adjusting the height of the target map to n times the height of the face frame, where 0 < n < 1;
obtaining a second scaling of the height of the target map, wherein the second scaling is a ratio of the height of the target map before scaling to the height of the target map after scaling;
adjusting a width of the target map based on the second scaling.
11. An image processing method, characterized in that the method comprises:
displaying an image display interface, wherein the image display interface comprises an image editing triggering control and a target image; the target image comprises a target face;
responding to the received touch operation based on the image editing trigger control, and displaying an image editing interface, wherein the image editing interface comprises a mapping candidate area and an image preview area, the target image is displayed in the image preview area, and the mapping candidate area comprises a mapping automatic acquisition control;
in response to receiving a selection operation based on the automatic mapping acquisition control, displaying a target mapping corresponding to the target face in the target image, wherein the target mapping is a mapping of a personal classification corresponding to the target face acquired based on the face attribute of the target face.
12. An image processing apparatus, characterized in that the apparatus comprises:
the target image acquisition module is used for acquiring a target image; the target image comprises a target face;
a face appearance attribute detection module, configured to perform face appearance attribute detection on the target face in the target image to obtain a face appearance attribute combination; the face appearance attribute combination comprises at least two face appearance attributes;
the target map obtaining module is used for obtaining a target map corresponding to the human setting classification of the target face based on the face appearance attribute combination;
and the target map display module is used for displaying the target map corresponding to the target face in the target image.
13. An image processing apparatus, characterized in that the apparatus comprises:
the image display interface display module is used for displaying an image display interface, and the image display interface comprises an image editing triggering control and a target image; the target image comprises a target face;
the image editing interface display module is used for displaying an image editing interface, the image editing interface comprises a mapping candidate area and an image preview area, the target image is displayed in the image preview area, and the mapping candidate area comprises a mapping automatic acquisition control;
and the target map display module is used for displaying a target map corresponding to the target face in the target image, wherein the target map is a map of a personal setting classification corresponding to the target face, which is obtained based on the face attribute of the target face.
14. A computer device comprising a processor and a memory, said memory storing at least one instruction, at least one program, a set of codes, or a set of instructions, said at least one instruction, said at least one program, said set of codes, or set of instructions being loaded and executed by said processor to implement the image processing method according to any one of claims 1 to 11.
15. A computer-readable storage medium, in which at least one computer program is stored, which is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 11.
CN202011438518.5A 2020-12-07 2020-12-07 Image processing method, image processing device, computer equipment and storage medium Pending CN112907702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011438518.5A CN112907702A (en) 2020-12-07 2020-12-07 Image processing method, image processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011438518.5A CN112907702A (en) 2020-12-07 2020-12-07 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112907702A true CN112907702A (en) 2021-06-04

Family

ID=76111422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011438518.5A Pending CN112907702A (en) 2020-12-07 2020-12-07 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112907702A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114661214A (en) * 2022-02-18 2022-06-24 北京达佳互联信息技术有限公司 Image display method, device and storage medium

Similar Documents

Publication Publication Date Title
CN110189340B (en) Image segmentation method and device, electronic equipment and storage medium
CN109308727B (en) Virtual image model generation method and device and storage medium
CN111541907B (en) Article display method, apparatus, device and storage medium
CN110807361A (en) Human body recognition method and device, computer equipment and storage medium
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN110263617B (en) Three-dimensional face model obtaining method and device
CN113569614A (en) Virtual image generation method, device, equipment and storage medium
CN111242090A (en) Human face recognition method, device, equipment and medium based on artificial intelligence
CN113395542A (en) Video generation method and device based on artificial intelligence, computer equipment and medium
CN112036331A (en) Training method, device and equipment of living body detection model and storage medium
CN112578971B (en) Page content display method and device, computer equipment and storage medium
CN110796005A (en) Method, device, electronic equipment and medium for online teaching monitoring
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN113706678A (en) Method, device and equipment for acquiring virtual image and computer readable storage medium
CN111836073B (en) Method, device and equipment for determining video definition and storage medium
CN113705302A (en) Training method and device for image generation model, computer equipment and storage medium
CN112135191A (en) Video editing method, device, terminal and storage medium
CN113570614A (en) Image processing method, device, equipment and storage medium
CN114741559A (en) Method, apparatus and storage medium for determining video cover
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN113821658A (en) Method, device and equipment for training encoder and storage medium
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN112528760A (en) Image processing method, image processing apparatus, computer device, and medium
CN113516665A (en) Training method of image segmentation model, image segmentation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40047315

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination