CN111696176B - Image processing method, image processing device, electronic equipment and computer readable medium - Google Patents

Image processing method, image processing device, electronic equipment and computer readable medium

Info

Publication number
CN111696176B
CN111696176B (application number CN202010514535.6A)
Authority
CN
China
Prior art keywords
image
target
processed
face
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010514535.6A
Other languages
Chinese (zh)
Other versions
CN111696176A (en)
Inventor
李华夏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202010514535.6A
Publication of CN111696176A
Application granted
Publication of CN111696176B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Abstract

The disclosure provides an image processing method, an image processing device, an electronic device and a computer readable medium. The method comprises the following steps: extracting each face region image from an image to be processed; performing expression recognition on each face region image to obtain a corresponding target emotion degree; performing target region segmentation on the image to be processed to respectively obtain segmentation results of the target regions corresponding to the face region images; and adding a target special effect at the position in the image to be processed corresponding to the corresponding segmentation result according to the target emotion degree corresponding to at least one face region image. By identifying the target emotion degree corresponding to the facial expression in the image, the method flexibly triggers the target special effect in the target region, so that the special effect is consistent with the user's state, the visual content of the special effect is more vivid, and the user's special effect experience is improved.

Description

Image processing method, image processing device, electronic equipment and computer readable medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable medium.
Background
With the rapid development of computer and communication technology, intelligent terminals have become widely used, and more and more applications are being developed to facilitate and enrich people's work and life. Currently, many applications are dedicated to providing intelligent terminal users with more personalized visual special effects that offer a better visual experience, such as filter effects, sticker effects and deformation effects.
However, existing visual special effects generally present a relatively fixed form: for example, users in different personal states see the same special effect. This processing mode often leaves the user's state mismatched with the special effect, which reduces the interest of the visual special effect.
Disclosure of Invention
In order to overcome the above technical problems or at least partially solve the above technical problems, the following technical solutions are proposed:
in a first aspect, the present disclosure provides an image processing method, including:
extracting each face region image from the image to be processed;
respectively carrying out expression recognition on each face area image to obtain a corresponding target emotion degree;
performing target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
and adding a target special effect in a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face region image.
In a second aspect, the present disclosure provides an image processing apparatus comprising:
the extraction module is used for extracting each face region image from the image to be processed;
the expression recognition module is used for performing expression recognition on each face area image to obtain a corresponding target emotion degree;
the segmentation module is used for carrying out target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
and the special effect adding module is used for adding a target special effect in a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face area image.
In a third aspect, the present disclosure provides an electronic device, including:
a processor and a memory storing at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a method as set forth in the first aspect of the disclosure.
In a fourth aspect, the present disclosure provides a computer readable medium for storing a computer instruction, a program, a set of codes or a set of instructions which, when run on a computer, causes the computer to perform the method as set forth in the first aspect of the present disclosure.
The image processing method, the image processing device, the electronic equipment and the computer readable medium provided by the disclosure extract each face region image from an image to be processed; perform expression recognition on each face region image to obtain a corresponding target emotion degree; perform target region segmentation on the image to be processed to respectively obtain segmentation results of the target regions corresponding to the face region images; and add a target special effect at the position in the image to be processed corresponding to the corresponding segmentation result according to the target emotion degree corresponding to at least one face region image. By identifying the target emotion degree corresponding to the facial expression in the image, this implementation flexibly triggers the target special effect in the target region, so that the special effect is consistent with the user's state, the visual content of the special effect is more vivid, and the user's special effect experience is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more complete and thorough understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It is noted that the modifiers "a", "an" and "the" used in this disclosure are illustrative rather than limiting, and those skilled in the art will understand that, unless the context clearly indicates otherwise, they should be read as "one or more".
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
An embodiment of the present disclosure provides an image processing method, as shown in fig. 1, the method including:
step S110: extracting each face region image from the image to be processed;
in the embodiment of the present disclosure, the image to be processed may be a face image, a human body image, a people stream image, and the like, but is not limited thereto, and according to different application scenarios, in other embodiments, the image to be processed may also be a landscape image, an object image, and the like.
Step S120: performing expression recognition on each face region image to obtain a corresponding target emotion degree;
in the embodiment of the disclosure, the expression recognition of the face area image can recognize the degree of the face expression in the face area image belonging to the target emotion, and output the recognized target emotion degree as the expression recognition result of the corresponding face area image.
The target emotion may be, but is not limited to, happiness, anger, fear, sadness and the like. Those skilled in the art may set the target emotion according to the actual situation and the special effect to be associated with it, and train the capability of recognizing the degree to which a facial expression belongs to that target emotion. As an example, if a flame special effect needs to be triggered, the target emotion may be anger, and the capability of recognizing the degree to which a facial expression is angry needs to be learned.
In the embodiment of the present disclosure, the emotion degree may be represented by a numerical value, such as 0, 10, ..., 100; by a percentage, such as 0%, 50%, 100%; or by a grade, such as zero, first and second, or heavy, light and none. The emotion degree may also be divided into a plurality of degree types, for example 11 types such as 0, 10, ..., 100, or 3 types such as zero order, first order and second order. Those skilled in the art may set the representation manner and the number of types of emotion degrees according to the actual situation, and the embodiment of the present disclosure is not limited herein.
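As a minimal illustration of the representations listed above, the following sketch maps a hypothetical 0-100 degree value onto a percentage and onto a three-grade scale; the function names and cut-off points are assumptions made for this example only, not values prescribed by the disclosure.

```python
def degree_to_percentage(value: int) -> str:
    """Represent a 0-100 target emotion degree as a percentage string."""
    return f"{value}%"


def degree_to_grade(value: int) -> str:
    """Represent a 0-100 target emotion degree as one of three grades."""
    if value == 0:
        return "zero order"
    return "first order" if value <= 50 else "second order"


if __name__ == "__main__":
    for v in (0, 30, 80):
        print(v, degree_to_percentage(v), degree_to_grade(v))
```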
Step S130: performing target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
the target area is a relevant area of a special effect to be activated in an image to be processed, and may include, but is not limited to, a hair area, an eye area, an eyebrow area, a nose area, a mouth area, a skin area, a hand area, and the like according to different application scenes. Wherein the segmentation of the target region may be achieved by learning the ability to segment the target region. As an example, when the facial expression is sufficiently angry, a flame effect needs to be excited in the hair region, and the target region of this example is the hair region, and the ability to segment the hair region needs to be learned. Further, the target region may also include a plurality of regions at the same time, and for example, when the facial expression is sufficiently angry, a flame effect needs to be excited in the hair region and the hand region at the same time, and then the target region of this example may include the hair region and the hand region at the same time, and the ability to segment the hair region and the hand region at the same time needs to be learned. A person skilled in the art can set a target region and learn the segmentation capability of the corresponding region according to the actual requirement of the special effect, and the embodiment of the present application is not limited herein. In other embodiments, the target area may be determined according to a request of a user after learning corresponding segmentation capabilities for a plurality of areas, and the segmentation capabilities of the corresponding areas may be invoked to segment the target area specified by the user.
It can be understood that each face in the image to be processed has a corresponding target region, that is, each face region image has a corresponding target region.
Step S140: and adding a target special effect in a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face region image.
The target special effect is the effect to be triggered in association with the target emotion and the target region. Illustratively, when a flame special effect needs to be triggered in the hair region once the facial expression is sufficiently angry, the target special effect of this example is the flame special effect. Depending on the application scenario, in other embodiments the target special effect may also be an eyebrow-jumping special effect in the eyebrow region, a flame special effect in the hand region, and the like; those skilled in the art may extend the target special effect according to the actual situation, and the embodiment of the present disclosure is not limited herein.
In the embodiment of the disclosure, whether the target special effect is triggered can be determined for each face in the image to be processed according to its corresponding target emotion degree. For example, a person whose expression is angry in the image to be processed may trigger a flame special effect in his hair region, whereas a person whose expression is calm does not. For each of the at least one face that does trigger the target special effect, target special effects of different forms can be triggered according to the corresponding target emotion degree; for example, a person with a high degree of anger in the image to be processed may trigger a brighter flame special effect in the hair region. The embodiment of the present disclosure is not limited herein and can be extended by those skilled in the art according to the actual situation.
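The trigger-and-form decision just described could, for instance, be reduced to a mapping from the target emotion degree to an effect intensity, as in the sketch below; the threshold of 60 and the linear brightness mapping are purely illustrative assumptions, not values given in the disclosure.

```python
def flame_intensity(degree: int, trigger_threshold: int = 60) -> float:
    """Return 0.0 when the effect is not triggered, otherwise a brightness
    factor in (0, 1] that grows with the target emotion degree (0-100)."""
    if degree < trigger_threshold:
        return 0.0                      # calm enough: no flame effect
    span = 100 - trigger_threshold
    return 0.2 + 0.8 * (degree - trigger_threshold) / span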
According to the image processing method provided by the embodiment of the disclosure, the target special effect is flexibly triggered in the target area by identifying the target emotion degree corresponding to the facial expression in the image, so that the special effect is consistent with the user state, the visual content of the special effect is more vivid, and the special effect experience of the user is improved.
It can be understood by those skilled in the art that in the embodiment of the present disclosure, the number of faces in the image to be processed may be one or more.
Specifically, when the number of faces in the image to be processed is one, steps S110 to S140 may be performed as follows: extracting a face region image from the image to be processed; performing expression recognition on the face region image to obtain a target emotion degree; directly carrying out target area segmentation on the image to be processed to obtain a segmentation result of the target area; or detecting the head or the human body in the image to be processed, for example, by means of head detection or human body detection, and then segmenting the target region based on the head or the human body (the image to be processed including the head frame or the human body frame); and then adding a corresponding target special effect in a position corresponding to the segmentation result in the image to be processed according to the recognized target emotion degree.
Specifically, when the number of faces in the image to be processed is multiple, the face region image of each face needs to be extracted from the image to be processed in step S110, and for each face region image, steps S120 to S140 may be performed as follows: performing expression recognition on each face region image to respectively obtain the target emotion degree of each face; carrying out target area segmentation on the image to be processed to obtain a segmentation result of a target area corresponding to each face area image; the target region may be directly segmented for the image to be processed, or the head or the human body in the image to be processed may be separately segmented, for example, each head or each human body in the image to be processed is separated by head detection or human body detection, and then the target region is segmented based on each head or each human body (each image to be processed including different head frames or different human body frames); and then, according to the target emotion degree of each face, determining whether a target special effect is added at a position corresponding to the segmentation result in the image to be processed, and what kind of target special effect is added.
That is, step S130 may specifically include the steps of: performing head detection (or face detection or human body detection) on the image to be processed to obtain each head frame (or face frame or human body frame); and performing target area segmentation on the image to be processed based on each head frame (or face frame or body frame) to respectively obtain segmentation results of target areas corresponding to each face area image.
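As a minimal end-to-end sketch of steps S110 to S140 for a multi-face image, the following code strings the stages together; the detector, recognition and segmentation callables, the effect renderer and the degree threshold are assumptions standing in for the pre-trained networks and effect logic described here, and their interfaces are illustrative only.

```python
import numpy as np

def apply_effect_pipeline(image: np.ndarray, face_detector, expression_net,
                          segmentation_net, effect_renderer,
                          degree_threshold: int = 60) -> np.ndarray:
    # S110: locate face boxes and crop one face region image per face.
    face_boxes = face_detector(image)                 # [(x1, y1, x2, y2), ...]
    face_regions = [image[y1:y2, x1:x2] for (x1, y1, x2, y2) in face_boxes]

    # S120: recognize the target emotion degree (0-100) of each face region.
    degrees = [expression_net(region) for region in face_regions]

    # S130: segment the target region (e.g. hair) per face; each mask has the
    # same size as the image to be processed.
    masks = [segmentation_net(image, box) for box in face_boxes]

    # S140: add the target special effect only for faces whose degree is high
    # enough, letting the degree control the form of the effect.
    out = image.copy()
    for degree, mask in zip(degrees, masks):
        if degree >= degree_threshold:
            out = effect_renderer(out, mask, degree)
    return out
```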
In the embodiment of the present disclosure, a feasible implementation manner is provided for step S110, and specifically, step S110 may include the steps of:
step S1101: positioning each face frame in the image to be processed through a pre-trained face detection network;
in the embodiment, the face detection network is trained in advance, the image to be processed is input into the pre-trained face detection network, and the face detection network can accurately position the face frame of each face in the image to be processed.
Further, in the case where the face detection is required in step S130, the face frames obtained in step S1101 may be used as they are.
In practical application, the trained face detection network can output the position information of each located face frame. The position information of a face frame can be expressed as its coordinate information. For example, the face frame may be a rectangular frame and the coordinate information may be the coordinates of its four vertices in a coordinate system; other positioning manners may also be adopted, which is not limited herein.
Step S1102: and extracting each face region image from the image to be processed according to each face frame.
Because the face frames located by the face detection network frame each face in the image to be processed, extracting the region framed by each face frame from the image to be processed yields each face region image.
In an optional implementation manner, each face frame may be expanded to a certain extent, for example, by 15%, and then the region framed by each expanded face frame is extracted from the image to be processed to obtain each face region image, so as to increase the edge space around each face and ensure the integrity of the face in each face region image.
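A minimal sketch of this optional expansion, assuming an (x1, y1, x2, y2) box format and a numpy image; the 15% ratio is the example value mentioned above, and the helper name is hypothetical.

```python
import numpy as np

def crop_face_region(image: np.ndarray, box, expand_ratio: float = 0.15) -> np.ndarray:
    """Expand a face frame on every side, clip it to the image, and crop."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    dx = int((x2 - x1) * expand_ratio)
    dy = int((y2 - y1) * expand_ratio)
    x1, y1 = max(0, x1 - dx), max(0, y1 - dy)
    x2, y2 = min(w, x2 + dx), min(h, y2 + dy)
    return image[y1:y2, x1:x2]
```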
In the embodiment of the present disclosure, a feasible implementation manner is provided for step S120. Specifically, the target emotion degree includes an attribute value of the target emotion degree; for example, the target emotion degree is divided into attribute values such as 0, 10, ..., 100. Step S120 may include the steps of: classifying the target emotion degree of each face region image through a pre-trained expression recognition network to obtain the corresponding attribute value of the target emotion degree.
In this embodiment, the expression recognition network is trained in advance. Each face region image is input into the pre-trained expression recognition network, and the expression recognition network can accurately classify the target emotion degree to which each face region image belongs and output the corresponding attribute value.
In the embodiment of the disclosure, the expression recognition network may be trained based on a classification network. In practical application, when only two target emotion degrees are set, the expression recognition network can be trained with a classification algorithm to output a category, namely whether the input face region image belongs to the target emotion or not. When more than two target emotion degrees are set, the expression recognition network can be trained with a regression algorithm to output an attribute value. In this case, the loss function used for training the expression recognition network may be the Smooth L1 Loss: at each training step the difference between the predicted value and the target value is determined; when the difference is smaller than or equal to a threshold, the loss is calculated with the squared loss function (L2 Loss) to optimize the expression recognition network, and when the difference is larger than the threshold, the absolute value loss function (L1 Loss) is used.
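A minimal numpy sketch of the Smooth L1 loss as just described, quadratic at or below the threshold and linear above it; the default threshold of 1.0 is the common choice and is only an assumption here.

```python
import numpy as np

def smooth_l1_loss(pred: np.ndarray, target: np.ndarray, threshold: float = 1.0) -> float:
    """Mean Smooth L1 loss: squared-error branch for small differences,
    absolute-error branch for large ones."""
    diff = np.abs(pred - target)
    per_element = np.where(diff <= threshold,
                           0.5 * diff ** 2 / threshold,   # L2-style branch
                           diff - 0.5 * threshold)        # L1-style branch
    return float(per_element.mean())
```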
Those skilled in the art can understand that the target emotions the expression recognition network can classify (for example, anger, happiness and the like), the representation manner of the emotion degree (for example, attribute values, grades and the like) and the number of degree types (for example, 3 or 11 degrees) are all determined by the way the expression recognition network is trained; a suitable training manner can be adopted according to the actual situation, and details are not repeated herein.
In the embodiment of the present disclosure, a feasible implementation manner is provided for step S130, and specifically, the segmentation result includes a mask image. Step S130 may include the steps of: and performing target area segmentation on the image to be processed through a pre-trained target area segmentation network to respectively obtain mask images corresponding to the face area images.
In this embodiment, the target region segmentation network is trained in advance to learn the ability to segment the target region. The image to be processed is input into a pre-trained target area segmentation network, and the target area segmentation network can accurately output a mask, namely a mask image, of a target area corresponding to each face area image.
Or, the target area segmentation network is trained in advance, each head or each human body (each image to be processed including different head frames or different human body frames) detected in the image to be processed is respectively input into the pre-trained target area segmentation network, and the target area segmentation network can accurately output the mask, namely the mask image, of the target area corresponding to each face area image.
For each face region image, the mask image may indicate whether each pixel in the image to be processed corresponds to a target region. For example, the mask image may be a binary image consisting of 0 and 1, with 1-value regions being the corresponding target regions and 0-value regions being the other regions.
It can be understood that the number of mask images output equals the number of face region images associated with the image to be processed; each mask image has the same size as the image to be processed, but the 1-valued region of each mask image is different.
Then, for each face region image, the position where the target special effect needs to be added, namely the position in the image to be processed corresponding to the 1-valued region of the corresponding mask image, can easily be determined based on that mask image.
In practical applications, the corresponding position may be in a corresponding target area, or may include the corresponding target area, or may be around, for example, above, the corresponding target area, and the embodiment of the disclosure is not limited herein.
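As an illustration of reading such a position from a binary mask, the sketch below computes the bounding box of the 1-valued region and a point just above it; treating "above the target region" as a 10-pixel offset is an assumption made for this example only.

```python
import numpy as np

def effect_anchor_from_mask(mask: np.ndarray):
    """Return the bounding box of the 1-valued target region and a point just
    above it, or None if the mask contains no target region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    x1, x2 = int(xs.min()), int(xs.max())
    y1, y2 = int(ys.min()), int(ys.max())
    anchor_above = ((x1 + x2) // 2, max(0, y1 - 10))  # 10 px above, illustrative
    return (x1, y1, x2, y2), anchor_above
```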
It will be understood by those skilled in the art that which target region the target region segmentation network can segment is determined by the way it is trained; a suitable training manner can be adopted according to the actual situation to obtain the desired target region segmentation network. For example, when the target region is a hair region, a hair segmentation network can be trained.
In the embodiment of the present disclosure, a feasible implementation manner is provided for step S140, and specifically, step S140 may specifically include the steps of:
step S1401: according to the target emotion degree corresponding to at least one face region image, adjusting the target region in the corresponding segmentation result;
step S1402: and adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result.
In the embodiment of the disclosure, in order to make the special effect more vivid, the size, shape or position of the target region may be adjusted in combination with the target emotion degree of the face so as to match target special effects of different forms; for example, a person with a high degree of anger in the image to be processed may trigger a taller flame special effect in the hair region. A person skilled in the art may set the adjustment manner of the target region according to the actual situation, and the embodiment of the present disclosure is not limited herein.
Specifically, in combination with the above-described segmentation result including the mask image, step S1401 may specifically be to perform contraction or expansion on a target area mask (i.e. a 1-value area) in the corresponding mask image according to a target emotion degree corresponding to at least one face area image, that is, to perform contraction or expansion on a target area corresponding to the face area image.
In practical application, an association relationship between the target emotion degree and the scaling manner of the target region can be established in advance; the manner in which the target region needs to be scaled is then determined from this association relationship and the target emotion degree corresponding to the face region image, so as to adjust the target region. Those skilled in the art can set the association relationship according to the actual situation, and the embodiment of the present disclosure is not limited herein.
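One hypothetical association relationship is sketched below with OpenCV morphology: the 1-valued mask is dilated for high degrees and eroded for low ones, with the kernel size growing with the degree. The 50-point split, the kernel-size mapping and the uint8 binary mask format are all assumptions made for illustration.

```python
import cv2
import numpy as np

def adjust_mask_by_degree(mask: np.ndarray, degree: int) -> np.ndarray:
    """Expand or contract the 1-valued target-region mask (uint8, 0/1)
    according to the target emotion degree (0-100)."""
    kernel_size = max(1, degree // 20)                 # e.g. degree 100 -> 5 px
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    if degree >= 50:
        return cv2.dilate(mask, kernel, iterations=1)  # stronger emotion: expand
    return cv2.erode(mask, kernel, iterations=1)       # weaker emotion: contract
```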
In the embodiment of the present disclosure, a feasible implementation manner is provided for the step of adding the target special effect to the position corresponding to the corresponding segmentation result in the image to be processed in step S140, and specifically includes: performing contour tracing processing on a corresponding target area in an image to be processed according to a segmentation result corresponding to at least one face area image; and adding the target special effect at the corresponding position of each contour tracing result.
Then, in step S1402, contour tracing processing needs to be performed on the corresponding target region in the image to be processed according to each adjusted segmentation result; and adding a target special effect at a corresponding position of each contour tracing result.
By performing contour tracing processing on the target region, erroneous regions in the segmentation result can be suppressed, which improves the accuracy of the target region segmentation and, in turn, the accuracy of the position at which the target special effect is added.
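A minimal OpenCV sketch of such contour tracing with small-area suppression is given below, assuming an OpenCV 4.x findContours signature, a binary uint8 mask and an illustrative minimum-area threshold.

```python
import cv2
import numpy as np

def trace_target_contours(mask: np.ndarray, min_area: float = 100.0):
    """Trace contours of the 1-valued target region and drop tiny contours,
    suppressing erroneous regions in the segmentation result."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```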
For the embodiment of the present disclosure, the special effect processing instruction may be issued by the user's operation on a terminal device. The terminal devices include, but are not limited to, mobile terminals, smart terminals and the like, such as mobile phones, smart phones, tablet computers, notebook computers, personal digital assistants, portable multimedia players and navigation devices. It will be understood by those skilled in the art that, apart from elements particularly used for mobile purposes, the configuration according to the embodiments of the present disclosure can also be applied to fixed terminals such as digital televisions and desktop computers.
In the embodiment of the present disclosure, the execution subject of the image processing method may be the terminal device or an application installed on the terminal device. Specifically, after receiving a special effect processing instruction, the above-described embodiment is used for processing, and a special effect result is displayed on a display screen.
Or, the execution subject of the image processing method may be a server, and after receiving a processing instruction for a special effect sent by a terminal device, the execution subject performs processing by using any of the above embodiments, and sends a special effect result to the terminal device for display.
In practical applications, the number of images to be processed may be one or more. When there are multiple images to be processed, they may also form a video to be processed, and each frame of the video to be processed can be processed with any of the above embodiments.
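A minimal sketch of applying the per-image processing to every frame of a video, using OpenCV's VideoCapture; the process_frame callable stands in for any of the embodiments above and is an assumption of this example.

```python
import cv2

def process_video(path: str, process_frame):
    """Read a video to be processed frame by frame and apply the image
    processing of the embodiments above to each frame."""
    cap = cv2.VideoCapture(path)
    processed = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        processed.append(process_frame(frame))
    cap.release()
    return processed
```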
According to the image processing method provided by the embodiment of the disclosure, the target special effect is flexibly triggered in the target region by identifying the target emotion degree corresponding to the facial expression in the image, so that the special effect is consistent with the user's state, the visual content of the special effect is more vivid, and the user's special effect experience is improved. When the image contains a plurality of faces, an interesting interaction effect can also be achieved, which improves the playability of the special effect. By way of example, if two persons A and B are standing together and A looks very angry at B, a flame special effect can be triggered in the hair region of A; that is, an interesting interaction between A and B can be achieved through this special effect implementation.
An embodiment of the present disclosure further provides an image processing apparatus, as shown in fig. 2, the apparatus 20 may include: an extraction module 201, an expression recognition module 202, a segmentation module 203, and a special effects addition module 204, wherein,
the extraction module 201 is configured to extract each face region image from the image to be processed;
the expression recognition module 202 is configured to perform expression recognition on each face area image to obtain a corresponding target emotion degree;
the segmentation module 203 is configured to perform target region segmentation on the image to be processed to obtain a segmentation result of a target region corresponding to each face region image;
the special effect adding module 204 is configured to add a target special effect at a position in the image to be processed corresponding to the corresponding segmentation result according to the target emotion degree corresponding to the at least one face region image.
In an optional implementation manner, when the extracting module 201 is configured to extract each face region image from the image to be processed, specifically configured to:
positioning each face frame in the image to be processed through a pre-trained face detection network;
and extracting each face region image from the image to be processed according to each face frame.
In an alternative implementation, the target emotional degree includes an attribute value of the target emotional degree;
the expression recognition module 202 is specifically configured to, when being configured to perform expression recognition on each face area image to obtain a corresponding target emotion degree:
and respectively classifying the target emotion degrees of the images of the face areas through a pre-trained expression recognition network to obtain the attribute values of the corresponding target emotion degrees.
In an alternative implementation, the segmentation result includes a mask image;
the segmentation module 203 is specifically configured to, when configured to perform target region segmentation on an image to be processed to obtain segmentation results of target regions corresponding to respective face region images, specifically:
and performing target area segmentation on the image to be processed through a pre-trained target area segmentation network to respectively obtain mask images corresponding to the face area images.
In an optional implementation manner, when the special effect adding module 204 is configured to add the target special effect at a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face region image, it is specifically configured to:
adjusting the target region in the corresponding segmentation result according to the target emotion degree corresponding to at least one face region image;
and adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result.
In an optional implementation manner, if the segmentation result includes a mask image, the special effect adding module 204 is specifically configured to, when being configured to adjust the target region in the corresponding segmentation result:
and contracting or expanding the target area mask in the corresponding mask image.
In an optional implementation manner, when the special effect adding module 204 is configured to add the target special effect at a position in the to-be-processed image corresponding to the corresponding segmentation result, specifically, it is configured to:
performing contour tracing processing on a corresponding target area in an image to be processed according to a segmentation result corresponding to at least one face area image;
and adding the target special effect at the corresponding position of each contour tracing result.
The image processing apparatus provided in the embodiment of the present disclosure may be specific hardware on a device, or software or firmware installed on a device; its implementation principle and the technical effects it produces are the same as those of the foregoing method embodiments. For the sake of brevity, where this apparatus embodiment does not mention a detail, reference may be made to the corresponding content in the foregoing method embodiments, which is not repeated here.
The image processing device provided by the embodiment of the disclosure flexibly triggers the target special effect in the target area by identifying the target emotion degree corresponding to the facial expression in the image, so that the special effect is consistent with the user state, the visual content of the special effect is more vivid, and the special effect experience of the user is improved.
Based on the same principle as the image processing method in the embodiment of the present disclosure, an embodiment of the present disclosure further provides an electronic device, which includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, at least one program, a code set, or a set of instructions is loaded by the processor and executes the method shown in any one of the above embodiments of the present disclosure.
Based on the same principle as the image processing method in the embodiments of the present disclosure, a computer-readable medium for storing a computer instruction, a program, a code set, or a set of instructions, which when run on a computer, causes the computer to perform the method shown in any one of the above-described embodiments of the present disclosure is also provided in the embodiments of the present disclosure.
Referring now to FIG. 3, a schematic diagram of an electronic device 30 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes a memory and a processor, where the processor may be referred to as the processing device 301 hereinafter, and the memory may include at least one of a read-only memory (ROM) 302, a random access memory (RAM) 303 and a storage device 308 hereinafter, as specifically shown below:
As shown in fig. 3, the electronic device 30 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the electronic device 30. The processing device 301, the ROM 302 and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 30 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 illustrates an electronic device 30 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the image processing method shown in any of the above embodiments of the present disclosure.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the designation of a module or unit does not in some cases constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides, according to one or more embodiments of the present disclosure, an image processing method including:
extracting each face region image from the image to be processed;
performing expression recognition on each face region image to obtain a corresponding target emotion degree;
performing target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
and adding a target special effect in a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face region image.
In an optional implementation manner, extracting each face region image from the image to be processed includes:
positioning each face frame in the image to be processed through a pre-trained face detection network;
and extracting each face region image from the image to be processed according to each face frame.
In an alternative implementation, the target emotional degree includes an attribute value of the target emotional degree;
respectively carrying out expression recognition on each face area image to obtain a corresponding target emotion degree, wherein the method comprises the following steps:
and respectively classifying the target emotion degrees of the images of the face areas through a pre-trained expression recognition network to obtain the attribute values of the corresponding target emotion degrees.
In an alternative implementation, the segmentation result includes a mask image;
the method comprises the following steps of performing target area segmentation on an image to be processed to respectively obtain segmentation results of target areas corresponding to face area images, wherein the segmentation results comprise:
and performing target area segmentation on the image to be processed through a pre-trained target area segmentation network to respectively obtain mask images corresponding to the face area images.
In an optional implementation manner, adding a target special effect to a position corresponding to a corresponding segmentation result in an image to be processed according to a target emotion degree corresponding to at least one face region image, includes:
according to the target emotion degree corresponding to at least one face region image, adjusting the target region in the corresponding segmentation result;
and adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result.
In an optional implementation manner, when the segmentation result includes a mask image, adjusting a target region in the corresponding segmentation result includes:
and contracting or expanding the target area mask in the corresponding mask image.
In an optional implementation manner, adding a target special effect to a position in the image to be processed corresponding to the corresponding segmentation result includes:
performing contour tracing processing on a corresponding target area in an image to be processed according to a segmentation result corresponding to at least one face area image;
and adding the target special effect at the corresponding position of each contour tracing result.
Example 2 provides, according to one or more embodiments of the present disclosure, an image processing apparatus, the apparatus including:
the extraction module is used for extracting each face region image from the image to be processed;
the expression recognition module is used for performing expression recognition on each face area image to obtain a corresponding target emotion degree;
the segmentation module is used for carrying out target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
and the special effect adding module is used for adding a target special effect at a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to the at least one face area image.
In an optional implementation manner, when the extraction module is configured to extract each face region image from the image to be processed, the extraction module is specifically configured to:
positioning each face frame in the image to be processed through a pre-trained face detection network;
and extracting each face region image from the image to be processed according to each face frame.
In an alternative implementation, the target emotional degree includes an attribute value of the target emotional degree;
the expression recognition module is used for performing expression recognition on each face area image respectively to obtain a corresponding target emotion degree, and is specifically used for:
and respectively classifying the target emotion degrees of the face region images through a pre-trained expression recognition network to obtain corresponding attribute values of the target emotion degrees.
In an alternative implementation, the segmentation result includes a mask image;
the segmentation module is specifically configured to, when configured to perform target region segmentation on an image to be processed to obtain segmentation results of target regions corresponding to respective face region images, specifically:
and performing target area segmentation on the image to be processed through a pre-trained target area segmentation network to respectively obtain mask images corresponding to the face area images.
In an optional implementation manner, when the special effect adding module is configured to add a target special effect at a position corresponding to the corresponding segmentation result in the image to be processed according to the target emotion degree corresponding to at least one face region image, it is specifically configured to:
according to the target emotion degree corresponding to at least one face region image, adjusting the target region in the corresponding segmentation result;
and adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result.
In an optional implementation manner, if the segmentation result includes a mask image, the special effect adding module is specifically configured to, when being configured to adjust the target region in the corresponding segmentation result:
and contracting or expanding the target area mask in the corresponding mask image.
In an optional implementation manner, when the special effect adding module is configured to add the target special effect at a position in the to-be-processed image corresponding to the corresponding segmentation result, the special effect adding module is specifically configured to:
performing contour tracing processing on a corresponding target area in an image to be processed according to a segmentation result corresponding to at least one face area image;
and adding a target special effect at a corresponding position of each contour tracing result.
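A minimal sketch of the contour tracing and special effect step is shown below, drawing a simple colored stroke along the adjusted mask as a stand-in for an arbitrary target special effect (OpenCV 4.x return convention assumed):

import cv2

def add_contour_effect(image_bgr, adjusted_mask, color=(0, 215, 255), thickness=3):
    # Trace the outer contours of the adjusted target area mask.
    contours, _ = cv2.findContours(
        adjusted_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    result = image_bgr.copy()
    # Draw the stroke along each contour tracing result.
    cv2.drawContours(result, contours, -1, color, thickness)
    return result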
Example 3 provides, in accordance with one or more embodiments of the present disclosure, an electronic device comprising:
a processor and a memory storing at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by the processor to implement a method as shown in example 1 of the present disclosure.
Example 4 provides, in accordance with one or more embodiments of the present disclosure, a computer readable medium for storing a computer instruction, program, set of codes or set of instructions which, when run on a computer, causes the computer to perform the method as shown in example 1 of the present disclosure.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. An image processing method, comprising:
extracting each face region image from the image to be processed;
performing expression recognition on each face area image to obtain a corresponding target emotion degree; wherein the representation of the target emotion degree comprises any one of a numerical value, a percentage, and a grade;
performing target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images;
adjusting a target area in a corresponding segmentation result according to the target emotion degree corresponding to at least one face area image; adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result;
wherein, the target area segmentation of the image to be processed includes:
performing at least one of head detection, face detection and human body detection on the image to be processed to obtain at least one of each head frame, each face frame and each human body frame;
and performing target region segmentation on the image to be processed based on at least one of the head frame, the face frame and the body frame.
2. The image processing method according to claim 1, wherein the extracting of each face region image from the image to be processed comprises:
positioning each face frame in the image to be processed through a pre-trained face detection network;
and extracting each face region image from the image to be processed according to each face frame.
3. The image processing method according to claim 1, wherein the target emotion degree includes an attribute value of the target emotion degree;
the expression recognition is respectively carried out on the face area images to obtain corresponding target emotion degrees, and the method comprises the following steps:
and respectively carrying out target emotion degree classification on each face region image through a pre-trained expression recognition network to obtain the attribute value of the corresponding target emotion degree.
4. The image processing method according to claim 1, wherein the segmentation result includes a mask image;
the target area segmentation of the image to be processed is performed to obtain the segmentation result of the target area corresponding to each face area image, respectively, and the method includes:
and performing target area segmentation on the image to be processed through a pre-trained target area segmentation network to respectively obtain mask images corresponding to the face area images.
5. The image processing method according to claim 1, wherein when the segmentation result includes a mask image, the adjusting the target region in the corresponding segmentation result includes:
and contracting or expanding the corresponding target area mask in the mask image.
6. The image processing method according to claim 1, wherein adding a target special effect at a position in the image to be processed corresponding to the corresponding segmentation result comprises:
performing contour delineation processing on a corresponding target area in the image to be processed according to a segmentation result corresponding to the at least one face area image;
and adding a target special effect at a corresponding position of each contour tracing result.
7. An image processing apparatus characterized by comprising:
the extraction module is used for extracting each face region image from the image to be processed;
the expression recognition module is used for respectively carrying out expression recognition on the face area images to obtain corresponding target emotion degrees; wherein the representation of the target emotion degree comprises any one of a numerical value, a percentage, and a grade;
the segmentation module is used for carrying out target area segmentation on the image to be processed to respectively obtain segmentation results of target areas corresponding to the face area images; wherein, the target area segmentation of the image to be processed comprises: performing at least one of head detection, face detection and human body detection on the image to be processed to obtain at least one of each head frame, each face frame and each human body frame; performing target area segmentation on the image to be processed based on at least one of the head frames, the face frames and the body frames;
the special effect adding module is used for adjusting a target area in a corresponding segmentation result according to the target emotion degree corresponding to at least one face area image; and adding a target special effect at a position in the image to be processed corresponding to the corresponding adjusted segmentation result.
8. An electronic device, comprising:
a processor and a memory storing at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by the processor to implement the method of any of claims 1-6.
9. A computer readable medium for storing a computer instruction, a program, a set of codes or a set of instructions which, when run on a computer, causes the computer to perform the method according to any one of claims 1-6.
CN202010514535.6A 2020-06-08 2020-06-08 Image processing method, image processing device, electronic equipment and computer readable medium Active CN111696176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010514535.6A CN111696176B (en) 2020-06-08 2020-06-08 Image processing method, image processing device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010514535.6A CN111696176B (en) 2020-06-08 2020-06-08 Image processing method, image processing device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111696176A CN111696176A (en) 2020-09-22
CN111696176B true CN111696176B (en) 2022-08-19

Family

ID=72479814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010514535.6A Active CN111696176B (en) 2020-06-08 2020-06-08 Image processing method, image processing device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111696176B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422844A (en) * 2020-09-23 2021-02-26 上海哔哩哔哩科技有限公司 Method, device and equipment for adding special effect in video and readable storage medium
CN112465843A (en) * 2020-12-22 2021-03-09 深圳市慧鲤科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112822418B (en) * 2020-12-31 2022-12-06 北京字节跳动网络技术有限公司 Video processing method and device, storage medium and electronic equipment
CN112766189B (en) * 2021-01-25 2023-08-08 北京有竹居网络技术有限公司 Deep forgery detection method and device, storage medium and electronic equipment
CN113422910A (en) * 2021-05-17 2021-09-21 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN115358958A (en) * 2022-08-26 2022-11-18 北京字跳网络技术有限公司 Special effect graph generation method, device and equipment and storage medium
CN115426505B (en) * 2022-11-03 2023-03-24 北京蔚领时代科技有限公司 Preset expression special effect triggering method based on face capture and related equipment
CN117079324B (en) * 2023-08-17 2024-03-12 厚德明心(北京)科技有限公司 Face emotion recognition method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854306A (en) * 2012-12-07 2014-06-11 山东财经大学 High-reality dynamic expression modeling method
CN109117760A (en) * 2018-07-27 2019-01-01 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer-readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104703043A (en) * 2015-03-26 2015-06-10 努比亚技术有限公司 Video special effect adding method and device
CN108229269A (en) * 2016-12-31 2018-06-29 深圳市商汤科技有限公司 Method for detecting human face, device and electronic equipment
CN110162670B (en) * 2019-05-27 2020-05-08 北京字节跳动网络技术有限公司 Method and device for generating expression package
CN110992247A (en) * 2019-11-25 2020-04-10 杭州趣维科技有限公司 Method and system for realizing special effect of straightening hair of portrait photo


Also Published As

Publication number Publication date
CN111696176A (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN111696176B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
US11436863B2 (en) Method and apparatus for outputting data
KR102463101B1 (en) Image processing method and apparatus, electronic device and storage medium
CN111369427A (en) Image processing method, image processing device, readable medium and electronic equipment
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
US20230036338A1 (en) Method and apparatus for generating image restoration model, medium and program product
CN115311178A (en) Image splicing method, device, equipment and medium
CN113923378B (en) Video processing method, device, equipment and storage medium
CN112990176B (en) Writing quality evaluation method and device and electronic equipment
CN110619602B (en) Image generation method and device, electronic equipment and storage medium
CN111783677A (en) Face recognition method, face recognition device, server and computer readable medium
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
CN111797873A (en) Scene recognition method and device, storage medium and electronic equipment
CN114422698B (en) Video generation method, device, equipment and storage medium
CN113905177B (en) Video generation method, device, equipment and storage medium
CN111291640B (en) Method and apparatus for recognizing gait
CN110263743B (en) Method and device for recognizing images
CN112085035A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN115147434A (en) Image processing method, device, terminal equipment and computer readable storage medium
CN111079472A (en) Image comparison method and device
CN111797869A (en) Model training method and device, storage medium and electronic equipment
CN113610034B (en) Method and device for identifying character entities in video, storage medium and electronic equipment
CN111784710B (en) Image processing method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant