CN111507149A - Interaction method, device and equipment based on expression recognition - Google Patents

Interaction method, device and equipment based on expression recognition Download PDF

Info

Publication number
CN111507149A
CN111507149A (application CN202010005487.8A; granted as CN111507149B)
Authority
CN
China
Prior art keywords
expression
facial
interactive content
image
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010005487.8A
Other languages
Chinese (zh)
Other versions
CN111507149B (en)
Inventor
陈冠男 (Chen Guannan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boe Yiyun Hangzhou Technology Co ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010005487.8A priority Critical patent/CN111507149B/en
Publication of CN111507149A publication Critical patent/CN111507149A/en
Application granted granted Critical
Publication of CN111507149B publication Critical patent/CN111507149B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interaction method, device and equipment based on expression recognition are disclosed. The interaction method based on expression recognition comprises the following steps: acquiring a facial image of a user; identifying a facial expression of the user based on the facial image; and adjusting the interactive content to be output according to the facial expression. The interactive content comprises an object having a plurality of different states, and the state of the object in the output interactive content differs as the facial expression differs. The interaction method, device and equipment based on expression recognition have a simple structure, fast algorithm execution, and good real-time performance and interactivity.

Description

Interaction method, device and equipment based on expression recognition
Technical Field
The invention relates to the field of expression recognition, in particular to an interaction method, device and equipment based on expression recognition.
Background
Facial feature recognition has been a hot topic in biometric pattern recognition in recent years. The technique detects and localizes the facial feature points of a face and then performs applications such as face matching and expression analysis based on those feature points. In recent years, many research institutions and enterprises have invested substantial resources in this field of recognition and obtained a series of achievements, which have found wide application in industries such as security, finance, and lifestyle and entertainment. Expression recognition is an extension of facial feature recognition technology and is also a hotspot in this field. Practical systems based on expression recognition have already appeared in many product areas, such as interactive systems based on expression recognition. However, current interactive systems based on expression recognition have complex structures, slow algorithm execution, and poor real-time performance and interactivity, so an interaction method and system with a simple structure, fast algorithm execution, and better real-time performance and interactivity are needed.
Disclosure of Invention
The embodiment of the invention provides an interaction method based on expression recognition, which comprises the following steps: acquiring a face image of a user; identifying a facial expression of a user based on the facial image; and adjusting the interactive contents to be output according to the facial expression.
According to the embodiment of the invention, adjusting the interactive content to be output according to the facial expression comprises: outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being recognized as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being recognized as the neutral expression.
According to an embodiment of the present invention, wherein the recognizing the facial expression of the user based on the facial image comprises: when the time that the user keeps facial expressions of the same category exceeds a preset time threshold, identifying the facial image as a specific facial expression, wherein the specific facial expression is the facial expression of the user, and the specific facial expression is one of a positive expression, a neutral expression and a negative expression.
According to the embodiment of the invention, the interactive content comprises an object, the object comprises a plurality of different states, and the states of the object in the output interactive content are different when the facial expressions are different.
According to an embodiment of the invention, the object is a flower, and the plurality of different states are a state in which the flower is closed, a state in which the flower is half open, and a state in which the flower is fully open; when the facial expression is happy, the output interactive content is an image of the flower fully open; when the facial expression is angry, the output interactive content is an image of the flower closed; and when the facial expression is a neutral expression, the output interactive content is an image of the flower half open.
According to an embodiment of the present invention, wherein the interactive content comprises at least one of an image, a video and an audio.
According to an embodiment of the present invention, wherein identifying the facial expression of the user based on the facial image comprises: extracting expression features of the facial image; acquiring an expression score value of the facial image based on the extracted expression features of the facial image; and performing expression recognition on the facial image based on the expression score value.
According to the embodiment of the invention, the output interactive content is selected from at least one sub-interactive content in an interactive content set comprising a plurality of sub-interactive contents; wherein each sub-interactive content in the set of interactive content is indicated with a pointer; wherein a pointer offset corresponding to the specific facial expression is generated based on the recognition result of the specific facial expression; and outputting current interactive content from the set of interactive content based on a current value of a pointer and the pointer offset.
According to the embodiment of the present invention, generating the pointer offset corresponding to the specific facial expression based on the recognition result of the specific facial expression includes: generating a positive pointer offset based on the specific facial expression being identified as a positive expression; generating a negative pointer offset based on the specific facial expression being identified as a negative expression; and generating a zero pointer offset based on the specific facial expression being identified as a neutral expression.
According to an embodiment of the present invention, wherein the outputting of the current interactive content from the set of interactive content based on the current value of the pointer and the pointer offset comprises: generating an updated value of the pointer according to the sum of the current value of the pointer and the offset of the pointer; and outputting the sub-interactive contents corresponding to the updated value of the pointer from the interactive content set as the current interactive contents.
The embodiment of the invention also provides an interactive device based on expression recognition, which comprises: a display;
an image collector configured to: acquiring a face image of a user; an expression recognition module configured to: identifying a facial expression of a user based on the facial image; and a controller configured to: and adjusting the interactive content to be output according to the facial expression and controlling the display to display the interactive content.
According to the embodiment of the invention, adjusting the interactive content to be output according to the facial expression comprises: outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being recognized as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being recognized as the neutral expression.
According to the embodiment of the invention, the interactive content comprises an object, the object comprises a plurality of different states, and the states of the object in the output interactive content are different when the facial expressions are different.
An embodiment of the present invention further provides an intelligent interaction device, including: the image acquisition unit is used for acquiring a face image of a user; a processor; a memory having stored thereon computer-executable instructions that, when executed by the processor, implement any of the methods according to embodiments of the invention; and an output unit for displaying the current interactive content.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement any of the methods described in accordance with embodiments of the present invention.
The embodiment of the invention provides an interaction method, device and equipment based on expression recognition, which are simple in structure, high in algorithm execution speed and better in instantaneity and interactivity.
Drawings
Fig. 1 is a schematic diagram illustrating an application scenario of an interactive system based on expression recognition according to an embodiment of the present invention.
FIG. 2 shows a schematic diagram of an image acquisition flow according to an embodiment of the invention.
Fig. 3 shows a flowchart of an interaction method based on expression recognition according to an embodiment of the present invention.
Fig. 4 shows a flowchart of an expression recognition method according to an embodiment of the present invention.
Fig. 5 shows a Gabor filter bank response diagram according to an embodiment of the invention.
Fig. 6A and 6B show an exemplary face image and its corresponding Gabor filter bank response map, respectively.
FIG. 7 illustrates an exemplary facial landmark position diagram according to an embodiment of the present invention.
Fig. 8 shows an expression recognition flow diagram corresponding to the expression recognition method of fig. 4.
FIG. 9 illustrates an interaction diagram of an exemplary set of interaction content, according to an embodiment of the invention.
FIG. 10 sets forth a flow chart illustrating an exemplary interaction method based on expression recognition according to embodiments of the present invention.
FIG. 11 illustrates a block flow diagram of an exemplary artistic drawing recommendation scene, in accordance with an embodiment of the present invention.
Fig. 12 is a schematic diagram of an interactive device based on expression recognition according to an embodiment of the present invention.
FIG. 13 shows a schematic diagram of an intelligent interaction device, according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein.
In the present specification and the drawings, substantially the same or similar steps and elements are denoted by the same or similar reference numerals, and repeated descriptions of the steps and elements will be omitted. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance or order.
Fig. 1 shows a schematic diagram of an application scenario 100 of an interactive system based on expression recognition according to an embodiment of the present invention. And fig. 2 shows a schematic diagram of an image acquisition flow 200 according to an embodiment of the invention.
As shown in fig. 1, in a scene 100, a user 101 interacts with a smart device 102 through facial expressions.
Referring to fig. 1 and 2, the image capturing apparatus 103 captures a real-time facial image of the user 101 intending to interact with the smart device 102, and then transmits the captured facial image to the system background 201 of the smart device 102 for analysis and processing as shown in fig. 2, and identifies a current facial expression of the user 101, and then displays the interactive content 105 conforming to the current facial expression of the user 101 on the display interface 104, thereby enabling the user 101 to interact with the smart device 102 in real time based on the facial expression.
According to the embodiment of the invention, the smart device 102 may be any type of smart device, such as a smart picture frame, a desktop computer, a tablet computer, a smart television, a smart home appliance, a smart phone, a smart car device, and the like. The smart device 102 may further include a smart interaction device, smart interaction software, and the like that can be loaded in the above devices.
According to the embodiment of the present invention, the image capturing device 103 may be composed of a video camera, a still camera, or any other device capable of image capturing, and the video camera or the still camera may have various resolutions and frame rates, for example, resolutions such as 240p, 480i, 480p, 720i, 720p, and 1080p, and frame rates such as 30fps, 60 fps. The image capture device 103 may be part of the smart device 102 or may be a separate image capture device that is communicably connected to the smart device 102.
According to the embodiment of the present invention, the display interface 104 may be a display interface embedded in the smart device 102, or may be any type of display interface connected to the smart device 102, such as a Liquid Crystal Display (LCD) screen or a Cathode Ray Tube (CRT) display screen.
According to embodiments of the invention, the system back-end 201 may include a memory that may store image data acquired by the image acquisition device 103, as well as any other related system instructions and data. The system background 201 may further include any Processing Unit capable of analyzing and recognizing an image and an expression, for example, a Central Processing Unit (CPU), a Graphic Processing Unit (GPU), and the like.
In the embodiment shown in fig. 1, the interactive content 105 is shown as an image of a blooming flower, but the present invention is not limited thereto. In other embodiments, the interactive content 105 may be any other form of image content, video content, text information, audio content, and the like, which may be pre-stored in a memory accessible by the smart device 102, or may be acquired via various real-time communication networks such as a Local Area Network (LAN), a Wide Area Network (WAN), an intranet, the Internet, a Storage Area Network (SAN), a Personal Area Network (PAN), a Metropolitan Area Network (MAN), a Wireless Local Area Network (WLAN), a Virtual Private Network (VPN), a cellular or other mobile communication network, Bluetooth, Near Field Communication (NFC), or ultrasonic communication.
In particular, fig. 3 shows a flow diagram of an interaction method 300 based on expression recognition according to an embodiment of the invention.
First, in step S301, a user face image is acquired.
As described above, the image of the face of the user can be acquired in real time by any image acquisition device capable of image acquisition, such as a video camera, a still camera, or the like.
In step S302, a facial expression of the user is recognized based on the facial image.
According to an embodiment of the present invention, recognizing the facial expression of the user based on the facial image may include: when the time that the user keeps facial expressions of the same category exceeds a preset time threshold, identifying the facial image as a specific facial expression, wherein the specific facial expression is the facial expression of the user, and the specific facial expression is one of a positive expression, a neutral expression and a negative expression.
In one embodiment, the positive expression may include any expression corresponding to a positive emotion, for example, it may be a "happy" or "surprised" expression, a "smile" or "laugh" expression, and so on. In one embodiment, neutral expressions may include any expression corresponding to a neutral mood, which may be, for example, "calm," "dull," or "thinking" expressions, among others. In one embodiment, a negative expression may include any expression corresponding to a negative emotion, for example, it may be a "frowning," "angry," "sad," or "crying" expression, and so on. It should be understood that the various expressions listed here are only some examples of positive, neutral, and negative expressions, and are not limiting.
According to an embodiment of the invention, taking a "happy" expression as an example, if the user keeps the "happy" expression for more than a predetermined time threshold (e.g., 5 seconds), the system may recognize the user's facial expression as a "happy" expression. In another embodiment, the facial expression corresponding to the currently acquired facial image may also be determined in real time, without waiting for the predetermined time threshold; for example, the currently acquired facial image of the user may be directly determined to show a "happy" expression. An expression recognition method 400 according to an embodiment of the present invention will be described in detail with reference to fig. 4.
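As an illustration only, a minimal sketch of this hold-time rule might look like the following; the class name, the 'positive'/'neutral'/'negative' labels, and the 5-second default are assumptions for the example, not details fixed by this disclosure:

```python
import time

class ExpressionDebouncer:
    """Confirm an expression only after the per-frame recognition result
    has stayed in the same category longer than a preset time threshold."""

    def __init__(self, hold_seconds=5.0):
        self.hold_seconds = hold_seconds
        self._current = None    # category seen on recent frames
        self._since = None      # time when that category first appeared

    def update(self, category):
        """Feed one per-frame category ('positive', 'neutral', 'negative');
        return the confirmed facial expression, or None while still waiting."""
        now = time.monotonic()
        if category != self._current:
            self._current, self._since = category, now
            return None
        if now - self._since >= self.hold_seconds:
            return self._current
        return None
```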
FIG. 4 shows a flow diagram of an expression recognition method 400 according to an embodiment of the invention.
First, in step S401, expressive features of a face image are extracted.
As described above, the face image may be a user face image captured in real time by any image capturing device capable of image capturing, such as a video camera, a still camera, or the like. In the embodiment of the present invention, feature vector extraction may be performed on the acquired face image by a Gabor filter bank. However, those skilled in the art will appreciate that the feature extraction of the present invention is not limited to the use of Gabor filters, and any other method that can be used for feature extraction of facial images, such as Steerable filtering, Schmid filtering, etc., may be used. Before describing the specific steps of feature extraction, a brief description of Gabor kernel functions and Gabor filters involved in embodiments of the present invention will be given below.
The two-dimensional Gabor kernel function has characteristics similar to those of the two-dimensional receptive fields of simple cells in the mammalian cerebral cortex: it has strong selectivity for spatial position and orientation and can capture local structural information corresponding to space and frequency. The Gabor kernel is a good approximation of neurons in the visual cortex of higher vertebrates and represents a compromise between time-domain and frequency-domain accuracy. The Gabor filter is robust to changes in image brightness and contrast as well as to changes in facial pose, and it expresses the local features that are most useful for facial recognition.
The two-dimensional Gabor kernel function is shown in equation (1) below:

$$g(x, y; \lambda, \theta, \varphi, \sigma, \gamma) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\exp\!\left(i\left(2\pi\frac{x'}{\lambda} + \varphi\right)\right) \qquad (1)$$

where $x' = x\cos\theta + y\sin\theta$ and $y' = -x\sin\theta + y\cos\theta$. Here λ is the wavelength of the sinusoidal factor, specified in pixels; in one embodiment, λ may be greater than or equal to 2 and less than one fifth of the input image size. θ is the orientation of the parallel stripes of the Gabor kernel function, with values from 0 to 2π. φ is the phase offset, ranging from −π to π, where 0 and π correspond to the center-symmetric center-on and center-off functions, respectively, and −π/2 and π/2 correspond to the antisymmetric functions. γ is the aspect ratio, i.e., the spatial aspect ratio, which determines the ellipticity of the Gabor function's shape: the shape is circular when γ = 1, and elongated along the direction of the parallel stripes when γ < 1; in one embodiment, γ may be 0.5. σ represents the standard deviation of the Gaussian factor of the Gabor kernel; its value cannot be set directly but varies only with the half-response spatial frequency bandwidth b of the Gabor filter, which must be a positive real number. In one embodiment, the bandwidth b is 1, in which case the standard deviation and wavelength satisfy σ = 0.56λ. Gabor filters of different scales and orientations can be obtained by setting different parameters of the Gabor kernel function.
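For illustration, equation (1) can be sampled directly on a discrete grid, for instance as in the NumPy sketch below; the kernel size and default parameter values are assumptions, and a library routine such as OpenCV's cv2.getGaborKernel offers an equivalent real-valued kernel:

```python
import numpy as np

def gabor_kernel(ksize, lam, theta, phi=0.0, gamma=0.5):
    """Sample the two-dimensional Gabor kernel of equation (1) on a
    ksize x ksize grid; sigma follows from bandwidth b = 1 as 0.56 * lam."""
    sigma = 0.56 * lam
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_p = x * np.cos(theta) + y * np.sin(theta)      # x' in equation (1)
    y_p = -x * np.sin(theta) + y * np.cos(theta)     # y' in equation (1)
    envelope = np.exp(-(x_p ** 2 + gamma ** 2 * y_p ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (2 * np.pi * x_p / lam + phi))
    return envelope * carrier                        # complex-valued kernel
```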
In an embodiment of the present invention, the acquired face image may be input to a Gabor filter bank including a plurality of scales and directions to obtain a corresponding filter response map, as shown in fig. 5.
Fig. 5 shows a Gabor filter bank response diagram 500 according to an embodiment of the invention.
In the embodiment shown in fig. 5, the Gabor filter bank comprises 40 filters in 8 directions at 5 scales. Each row corresponds to a group of filters with the same dimension and different directions, and each column corresponds to a group of filters with the same direction and different dimensions.
Fig. 6A and 6B show an exemplary face image 600 and its corresponding Gabor filter bank response map 610, respectively.
According to the embodiment of the present invention, the exemplary face image 600 shown in fig. 6A may be input into a Gabor filter bank comprising filters of 5 scales and 8 directions as shown in fig. 5; after the filtering process of each filter, the Gabor filter response maps 610 of the exemplary face image 600 at the 5 scales and 8 directions may be obtained, as shown in fig. 6B.
Then, feature points of the face image 600 may be extracted from the filter response map 610, and feature values corresponding to the extracted feature points constitute a feature vector of the face image 600.
In particular, FIG. 7 shows an exemplary facial feature point location diagram 700, according to an embodiment of the present invention.
In one embodiment, all feature points including the contour, eyebrows, eyes, nose, and mouth of the facial image may be extracted as shown in fig. 7, and the feature values corresponding to all of the feature points then constitute a feature vector of the facial image (e.g., a one-dimensional vector of 1 × 68).
In an embodiment of the present invention, one sub-feature vector may be extracted from each of the filter response maps shown in fig. 6B (for example, 40 one-dimensional vectors of 1 × 68 may be extracted), and all the sub-feature vectors may then be concatenated to form the overall feature vector of the face image (for example, a one-dimensional vector of 1 × 2720), as in the sketch below.
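A hedged sketch of this pipeline, using OpenCV's cv2.getGaborKernel and cv2.filter2D, might read as follows; the wavelengths, kernel size, and the landmarks argument (68 integer (x, y) positions from a facial landmark detector, as in fig. 7) are illustrative assumptions:

```python
import cv2
import numpy as np

def build_gabor_bank(wavelengths=(4, 6, 8, 10, 12), n_orientations=8):
    """40 kernels: 5 scales x 8 orientations (wavelength values assumed)."""
    bank = []
    for lam in wavelengths:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            bank.append(cv2.getGaborKernel((31, 31), 0.56 * lam, theta,
                                           lam, 0.5, 0, ktype=cv2.CV_32F))
    return bank

def face_feature_vector(face_gray, landmarks, bank):
    """Filter the face with every kernel in the bank, sample each response
    map at the 68 landmark positions, and concatenate the 40 sub-vectors
    of 1 x 68 into a single 1 x 2720 feature vector."""
    parts = []
    for kernel in bank:
        response = cv2.filter2D(face_gray.astype(np.float32), cv2.CV_32F, kernel)
        parts.append(response[landmarks[:, 1], landmarks[:, 0]])   # (68,)
    return np.concatenate(parts)                                   # (2720,)
```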
Next, returning to fig. 4, in step S402, an expression score value of the face image is acquired based on the extracted expression features of the face image.
In an embodiment of the present invention, the expression score value of the face image may be obtained by a random forest regressor. In embodiments of the invention, the random forest regressor may be pre-trained with a set of facial image samples having preset expression score values. The following describes the training of the random forest regressor and the obtaining process of the expression score value in detail with reference to specific embodiments.
First, as described above, according to the embodiment of the present invention, facial expressions can be classified into three expression categories of negative expression, neutral expression, and positive expression. In some embodiments, facial expressions may also be classified into other expression categories, or the three expression categories may be further subdivided, for example, negative expressions are further subdivided into "sadness" and "anger", positive expressions are further subdivided into "happiness" and "surprise", and so on.
In an embodiment of the present invention, sufficient face image samples may be constructed first, wherein for each face image sample, an expression score value may be set in advance. In one embodiment, a range of expression score values, such as [ -10,10], may be preset, and then a corresponding expression score value may be set within the range of expression score values according to the positive or negative degree of facial expression contained in the facial image sample. For example, the expression score value of a facial image sample containing a "surprise" expression may be preset to 10, the expression score value of a facial image sample containing a "laugh" expression may be preset to 8, the expression score value of a facial image sample containing a "smile" expression may be preset to 2, the expression score value of a facial image sample containing a "calm" expression may be preset to 0, the expression score value of a facial image sample containing a "frown" expression may be preset to-2, and the expression score value of a facial image sample containing a "anger" expression may be preset to-10, and so on. The feature vectors of these facial image samples with preset expression score values can then be extracted by the above-described feature extraction method and used for training of a random forest regressor. After training, the random forest regressor can output the corresponding expression score value for each input facial image (or the feature vector of the facial image).
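A minimal training sketch with scikit-learn's RandomForestRegressor is shown below; the file names, sample counts, and hyperparameters are assumptions, not values given by this disclosure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: one 2720-dimensional Gabor feature vector per training face image;
# y: the expression score preset for that sample in [-10, 10]
# (e.g., "surprise" = 10, "calm" = 0, "anger" = -10).
X = np.load("train_features.npy")   # shape (n_samples, 2720); path illustrative
y = np.load("train_scores.npy")     # shape (n_samples,)

regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(X, y)

# After training, the regressor outputs an expression score value for each
# input feature vector, demonstrated here on one training sample:
score = regressor.predict(X[:1])[0]
```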
Returning again to fig. 4, in step S403, the facial image is subjected to expression recognition based on the expression score value.
In an embodiment of the invention, the facial image may be associated with the corresponding expression category according to a preset classification threshold range based on the expression score value, wherein different classification threshold ranges correspond to different expression categories. For example, in an embodiment containing three expression categories of negative expression, neutral expression, and positive expression, a threshold range of [ -10, -1) may be associated with negative expression, a threshold range of [ -1,1] may be associated with neutral expression, and a threshold range of (1, 10) may be associated with positive expression. It will be appreciated by those skilled in the art that the classification threshold range may also be preset in any other way. Then, the facial image may be associated with a specific expression category based on the acquired expression score value, thereby achieving expression recognition of the facial image. For example, in this embodiment, a facial image with an expression score value of 5 may be identified as a positive expression.
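The threshold mapping of this example can be written compactly as below; the boundary handling follows the ranges quoted in the text and is otherwise an assumption:

```python
def classify_expression(score):
    """Map an expression score in [-10, 10] to a category using the
    example classification threshold ranges from the text:
    [-10, -1) -> negative, [-1, 1] -> neutral, score > 1 -> positive."""
    if score < -1:
        return "negative"
    if score <= 1:
        return "neutral"
    return "positive"

assert classify_expression(5) == "positive"   # the example from the text
```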
The expression recognition method based on the Gabor filter and the random forest regressor provided by the embodiment of the invention can accurately perform multi-expression classification and recognition at a high algorithm execution speed: in one embodiment, a processing speed of 60 fps can be achieved on an RK3399 ARM development board for a 150 × 150 facial image, which fully meets the requirements of real-time expression analysis.
Fig. 8 illustrates an expression recognition flow diagram 800 corresponding to the expression recognition method 400 of fig. 4.
After performing expression recognition on the facial image according to the expression recognition method shown in fig. 4 or 8, it is possible to return to step S303 in fig. 3 and adjust the interactive contents to be output according to the recognized facial expression.
According to an embodiment of the present invention, adjusting the interactive content to be output according to the facial expression may include: outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being recognized as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being recognized as the neutral expression. For example, when the user's facial image is recognized as a "happy" expression, interactive content corresponding to the "happy" expression may be output, such as a fully open flower, a cartoon character's smiling face, or cheerful music; when the user's facial image is recognized as a "calm" expression, interactive content corresponding to the "calm" expression may be output, such as a calm lake surface, a half-open flower, or soothing music; when the user's facial image is recognized as a "sad" expression, interactive content corresponding to the "sad" expression may be output, such as a closed flower, a cartoon character's crying face, or melancholy music. It should be understood that these correspondences are merely illustrative and not restrictive, and any particular expression may also correspond to other different interactive content according to any predetermined rule. For example, in one embodiment, when the user's facial image is recognized as a "sad" expression, a cartoon character's smiling face or music transitioning gradually from calm to happy may be output to guide the user out of the "sad" mood more quickly.
According to the embodiment of the invention, the interactive content can include an object, the object can include a plurality of different states, and when the recognized facial expressions of the users are different, the states of the objects in the output interactive content can also be different. In one embodiment, the object may be a flower, and the plurality of different states may be a state in which the flower is closed, a state in which the flower is semi-open, a state in which the flower is fully open; when the facial expression of the user is happy, the output interactive content can be an image with a completely opened flower; when the facial expression of the user is angry, the output interactive content can be a flower closed image; and when the facial expression of the user is neutral expression, the output interactive content can be an image with a flower half open. In this embodiment, the plurality of different states of the flower may also be a plurality of continuous dynamic states throughout the process from flower closure to full flower opening.
In another embodiment, the subject may be a face of a cartoon character, and the corresponding plurality of different states may be a crying face state of the cartoon character, a neutral expression state of the cartoon character, and a smiling face state of the cartoon character. When the facial expression of the user is happy, the output interactive content can be the image of the smiling face of the cartoon character; when the facial expression of the user is anger, the output interactive content can be an image of a crying face of a cartoon figure; and when the facial expression of the user is neutral expression, the output interactive content can be the image of the neutral expression of the cartoon character.
In yet another embodiment, the object may be the sun, and the plurality of different states corresponding thereto may be a sunrise state, a sunset state, and a state in which the sun rises first to half-empty. When the facial expression of the user is neutral, the output interactive content can be an image from the first rising of the sun to the half space; when the facial expression of the user is happy, the output interactive content can be a daily image; and when the user's facial expression is angry, the output interactive contents may be an image of sunset.
According to an embodiment of the present invention, the interactive contents to be output may be selected from at least one sub-interactive contents in an interactive contents set including a plurality of sub-interactive contents; wherein each sub-interactive content in the set of interactive content may be indicated with a pointer.
In particular, still taking the embodiment described above in which the object in the interactive content is a flower as an example, the set of interactive content may comprise a set of a plurality of consecutive image frames throughout the process from the flower closed state to the flower fully open state. For example, the interactive content set may be a linked list of interactive content, wherein each node in the linked list corresponds to a child interactive content in the interactive content set, and each node is indicated by a pointer, as shown in FIG. 9.
FIG. 9 shows an interaction diagram of an exemplary interactive content set 900 according to an embodiment of the invention, and FIG. 10 shows a flow diagram 1000 of an exemplary interaction method based on expression recognition according to an embodiment of the invention.
In the embodiment shown in connection with fig. 9 and 10, a group of dynamic image frames, such as image frames 1 to 11 shown in fig. 9, may be imported into the interactive content set in advance; the 11 image frames may respectively correspond to consecutive images at different time points in the process from flower closing to flower opening. The 11 image frames may then be organized into an image frame linked list 902 comprising 11 nodes, in sequential order from flower closed to flower open, where each node corresponds to one image frame. The image frame to be currently output may be indicated by the current value of the pointer 901; for example, as shown in fig. 9, the pointer 901 currently points to node 6 in the image frame linked list 902 (i.e., the current value of the pointer 901 is 6), and the image frame corresponding to node 6 may be regarded as the content to be currently output.
In one embodiment, a pointer offset corresponding to a particular facial expression may be generated based on a recognition result of the particular facial expression; and the current interactive content may be output from the set of interactive content based on the current value of the pointer and the pointer offset.
Specifically, still taking the above-described embodiment containing three expression categories of negative expression, neutral expression, and positive expression as an example, in this embodiment, a positive pointer offset may be generated based on the facial expression of the user being recognized as a positive expression, for example, the pointer offset is 1; a negative pointer offset may be generated based on the user's facial expression being identified as a negative expression, e.g., the pointer offset is-1; a zero pointer offset may be generated based on the facial expression of the user being identified as a neutral expression, e.g., the pointer offset is 0. In the embodiment of further subdividing the expression category, the value of the pointer offset can be further subdivided correspondingly. For example, where the forward expression is further subdivided into "happy" and "surprised," a pointer offset value of 1 may be generated based on the user's facial expression being identified as "happy" and a pointer offset value of 2 based on the user's facial expression being identified as "surprised" expression. However, it will be appreciated by those skilled in the art that in other embodiments of the invention, the pointer offset corresponding to different emoji categories may also be set to any other value.
In one embodiment, an updated value of the pointer may be generated from a sum of a current value of the pointer and the pointer offset, and the sub-interactive content corresponding to the updated value of the pointer may be output from the interactive content set as the current interactive content.
Specifically, still taking the above-mentioned embodiment containing the three expression categories of negative, neutral, and positive expressions as an example, and as shown in fig. 9, assume that the current value of the pointer 901 is 6 (i.e., it points to node 6 in the linked list). If the current facial expression of the user is identified as a positive expression, a pointer offset with a value of 1 may be generated according to the method of the above embodiment, and the updated value 7 of the pointer may then be generated from the sum of the current value of the pointer 901 and the pointer offset (i.e., the updated pointer points to node 7 in the linked list); therefore, the blooming image frame corresponding to node 7 may be taken as the interactive content to be currently output. According to the embodiment of the invention, if the facial expression of the user is still recognized as a positive expression at the next moment, the system may take the blooming image frame corresponding to node 8 as the interactive content to be output at the next moment; if the user's facial expression is recognized as a positive expression at multiple consecutive moments, the system may output a continuous dynamic process of the flower gradually opening. The pointer 901 may thus slide to different locations in the linked list based on its current value and the pointer offset, as in the sketch below. Therefore, according to the method provided by the embodiment of the invention, highly real-time interaction between the user and the interactive system can be realized: the system outputs images consistent with the user's current facial expression and can output continuous dynamic image frames in response to the user's continuing facial expression, which makes the interaction engaging and interesting.
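A sketch of this pointer mechanism is given below, with the linked list modeled as a plain Python list for brevity; the class and variable names are illustrative assumptions:

```python
OFFSETS = {"positive": 1, "neutral": 0, "negative": -1}

class FlowerAnimation:
    """Pointer sliding over the 11-node image-frame linked list of fig. 9:
    node 1 = flower closed, node 11 = flower fully open."""

    def __init__(self, frames, start=6):
        self.frames = frames            # frames[0] corresponds to node 1
        self.pointer = start            # current value of the pointer

    def step(self, expression):
        updated = self.pointer + OFFSETS[expression]
        # Keep the displayed frame inside the list; values outside [1, 11]
        # are handled by the trigger-threshold logic described below.
        self.pointer = min(max(updated, 1), len(self.frames))
        return self.frames[self.pointer - 1], updated
```

Feeding a run of positive recognitions then steps the pointer 6 → 7 → 8 and so on, producing the continuous blooming animation described above.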
In other embodiments of the present invention, the updated value of the pointer may also be generated according to other predetermined rules, for example, the updated pointer value is correspondingly updated when N (N is a positive integer) expressions of the same category are continuously recognized, or the updated pointer value is correspondingly updated when it is recognized that the time for the user to keep the facial expression of the same category exceeds a predetermined time threshold, and so on.
In an embodiment of the present invention, in addition to outputting interactive content from a set of pre-imported interactive content according to the above-described exemplary interaction method in conjunction with fig. 10, second interactive content may be output from a set of second interactive content different from the above-described set of interactive content based on an updated value of the pointer exceeding a predetermined trigger threshold.
In one embodiment of the present invention, the length range of the image frame linked list 902 shown in fig. 9 may be used as the predetermined trigger threshold, i.e., the upper trigger threshold may be set to the maximum node number 11 of the linked list and the lower trigger threshold to its minimum node number 1. In this embodiment, when the updated value of the pointer is greater than 11 or less than 1, the second interactive content may be output from the second interactive content set. In the embodiment of the present invention, the trigger threshold may also be set in other manners; for example, N consecutively recognized positive expressions may serve as an upper trigger threshold and M consecutively recognized negative expressions as a lower trigger threshold (N and M are both positive integers). In embodiments that further subdivide the categories into, for example, "angry", "sad", "neutral", "happy", and "surprised", a plurality of respective trigger thresholds may also be set in segments so as to output second interactive content corresponding to the different expression categories.
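Continuing the sketch above under the same assumptions, the trigger logic for switching to the second interactive content might look like this:

```python
UPPER_TRIGGER, LOWER_TRIGGER = 11, 1    # length range of the frame linked list

def select_output(updated_pointer, frames, happy_content, unhappy_content):
    """Show the frame the pointer lands on in the normal case; when the
    updated pointer value crosses a trigger threshold, switch to second
    interactive content chosen by the threshold's direction."""
    if updated_pointer > UPPER_TRIGGER:
        return happy_content            # e.g., a painting suited to a happy mood
    if updated_pointer < LOWER_TRIGGER:
        return unhappy_content          # e.g., content to ease an unhappy mood
    return frames[updated_pointer - 1]
```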
In the embodiment of the present invention, the second interactive content set may be any form of image content, video content, text information, audio content, and the like, which may be pre-stored in a memory accessible by the interactive system, or may be acquired in real time via various communication networks such as a local area network, a wide area network, an intranet, the internet, a storage area network, a personal area network, a metropolitan area network, a wireless local area network, a virtual private network, a cellular or other mobile communication network, bluetooth, near field communication, and ultrasonic communication.
In another embodiment of the present invention, it is also possible to determine an expression category currently corresponding to the facial image based on the directional characteristic of the predetermined trigger threshold, and output the interactive content corresponding to the expression category as the current interactive content, as shown in fig. 11.
FIG. 11 illustrates a flow diagram 1100 of an exemplary artistic drawing recommendation scene, in accordance with an embodiment of the present invention.
In the scenario shown in fig. 11, when the updated value of the pointer exceeds the predetermined trigger threshold, it may further be determined whether the updated value exceeds the upper trigger threshold or the lower trigger threshold. Assuming that the upper trigger threshold corresponds to the "happy" expression category and the lower trigger threshold corresponds to the "unhappy" expression category (i.e., the directional feature of the trigger threshold), when the updated value of the pointer is detected to exceed the upper trigger threshold, the user's current expression may be judged to be "happy", so a recommended painting suited to a happy mood may be recommended and displayed; when the updated value of the pointer is detected to exceed the lower trigger threshold, the user's current expression may be judged to be "unhappy", so a recommended painting suited to an unhappy mood may be recommended and displayed. The purposes of interacting with the user in real time and recommending paintings suited to the user's mood are thereby achieved. In another embodiment of the invention, the number of times a particular painting is recommended and the user's mood feedback while the painting is displayed may also be recorded, so that subsequent recommendations can be optimized by analyzing the recorded information.
Fig. 12 is a schematic diagram of an interaction apparatus 1200 based on expression recognition according to an embodiment of the present invention.
The interactive apparatus 1200 based on expression recognition according to an embodiment of the present invention may include a display 1201, an image collector 1202, an expression recognition module 1203, and a controller 1204. The display 1201 may be any type of display built into or externally connected to the apparatus 1200, such as a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT) display. The image collector 1202 may be any device capable of image collection, such as a video camera, a still camera, or a smartphone with a photographing function, and is configured to acquire a facial image of a user. The expression recognition module 1203 is configured to identify the facial expression of the user based on the facial image, and the controller 1204 is configured to adjust the interactive content to be output according to the facial expression and control the display 1201 to display the interactive content.
The interaction apparatus 1200 based on expression recognition according to an embodiment of the present invention may further include: a recommendation module (not shown) configured to output the second interactive content from the second set of interactive content if the updated value of the pointer exceeds a predetermined trigger threshold.
FIG. 13 shows a schematic diagram of an intelligent interaction device 1300, according to an embodiment of the invention.
As shown in fig. 13, an intelligent interactive device 1300 according to an embodiment of the present invention may include: image acquisition unit 1301, processor 1302, memory 1303, and output unit 1304.
The image acquisition unit 1301 may be constituted by a video camera, a still camera, or any other device capable of image capture; it may be part of the intelligent interaction device 1300 or a separate image acquisition unit communicatively connected to the intelligent interaction device 1300. The images acquired by the image acquisition unit 1301 may be stored in the memory 1303.
The processor 1302 may perform various actions and processes according to programs stored in the memory 1303. In particular, the processor 1302 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and may be of the X86 architecture, the ARM architecture, or the like.
The memory 1303 stores computer-executable instruction code that, when executed by the processor 1302, implements the expression recognition method and the interaction method based on expression recognition according to embodiments of the present invention. The memory 1303 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
The output unit 1304 may be any type of display interface usable to display the current interactive content, such as a Liquid Crystal Display (LCD) interface or a Cathode Ray Tube (CRT) display interface; it may be part of the intelligent interaction device 1300 or a separate display interface communicatively connected to the intelligent interaction device 1300.
The present invention also provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement an expression recognition method and an interaction method based on expression recognition according to embodiments of the present invention. Similarly, computer-readable storage media in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. It should be noted that the memories of the methods described herein are intended to comprise, without being limited to, these and any other suitable types of memory.
The embodiment of the invention provides an interaction method, device and equipment based on expression recognition, which are simple in structure, high in algorithm execution speed and better in instantaneity and interactivity.
It is to be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various exemplary embodiments of this invention may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the embodiments of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The exemplary embodiments of the invention, as set forth in detail above, are intended to be illustrative, not limiting. It will be appreciated by those skilled in the art that various modifications and combinations of the embodiments or features thereof may be made without departing from the principles and spirit of the invention, and that such modifications are intended to be within the scope of the invention.

Claims (15)

1. An interaction method based on expression recognition comprises the following steps:
acquiring a face image of a user;
identifying a facial expression of a user based on the facial image; and
and adjusting the interactive content to be output according to the facial expression.
2. The interaction method of claim 1, wherein the adjusting of the interactive contents to be output according to the facial expression comprises:
outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression;
outputting interactive content corresponding to a negative expression based on the facial image being identified as the negative expression; and
based on the facial image being recognized as a neutral expression, interactive content corresponding to the neutral expression is output.
3. The interaction method of claim 1, wherein said identifying a facial expression of the user based on the facial image comprises:
when the time that the user keeps facial expressions of the same category exceeds a preset time threshold, identifying the facial image as a specific facial expression, wherein the specific facial expression is the facial expression of the user, and the specific facial expression is one of a positive expression, a neutral expression and a negative expression.
4. The interactive method as claimed in claim 1, wherein the interactive content includes an object, the object includes a plurality of different states, and the states of the object in the output interactive content are different when the facial expressions are different.
5. The interaction method of claim 4, wherein the object is a flower, and the plurality of different states are a flower closed state, a flower half-open state, a flower full-open state; wherein,
when the facial expression is happy, the output interactive content is an image with a flower completely opened;
when the facial expression is angry, outputting the interactive content as a flower closed image; and
when the facial expression is a neutral expression, the output interactive content is an image with a flower half open.
6. The interactive method of claim 1, wherein the interactive content comprises at least one of an image, video, audio.
7. The interaction method of claim 1, wherein identifying a facial expression of the user based on the facial image comprises:
extracting expression features of the facial image;
acquiring an expression score value of the facial image based on the extracted expression features of the facial image; and
and performing expression recognition on the facial image based on the expression score value.
8. The interaction method of claim 3,
the output interactive content is selected from at least one sub-interactive content in an interactive content set comprising a plurality of sub-interactive contents;
wherein each sub-interactive content in the set of interactive content is indicated with a pointer;
wherein a pointer offset corresponding to the specific facial expression is generated based on the recognition result of the specific facial expression; and
outputting current interactive content from the set of interactive content based on a current value of a pointer and the pointer offset.
9. The interaction method of claim 8, wherein the generating a pointer offset corresponding to the particular facial expression based on the recognition result of the particular facial expression comprises:
generating a positive pointer offset based on the particular facial expression being identified as a positive expression;
generating a negative pointer offset based on the particular facial expression being identified as a negative expression; and
generating a zero pointer offset based on the particular facial expression being identified as a neutral expression.
10. The interaction method of claim 9, wherein the outputting current interactive content from the interactive content set based on a current value of the pointer and the pointer offset comprises:
generating an updated value of the pointer as the sum of the current value of the pointer and the pointer offset; and
outputting, from the interactive content set, the sub-interactive content corresponding to the updated value of the pointer as the current interactive content.
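Claims 8 through 10 together describe a pointer walk over an ordered set of sub-interactive contents. The sketch below is one possible realization under assumed names; the unit offsets and the clamping at the boundaries are illustrative choices the claims leave open:

```python
class InteractiveContentSet:
    """Ordered sub-contents addressed by a pointer, as in claims 8-10."""

    # Hypothetical per-expression offsets realizing claim 9.
    OFFSETS = {"positive": +1, "negative": -1, "neutral": 0}

    def __init__(self, sub_contents):
        self.sub_contents = list(sub_contents)
        self.pointer = 0  # current value of the pointer

    def step(self, specific_expression):
        # Claim 10: updated value = current value + pointer offset.
        updated = self.pointer + self.OFFSETS[specific_expression]
        # Clamp to the valid range -- an assumption; the claims do not
        # say how out-of-range pointers are handled.
        self.pointer = max(0, min(len(self.sub_contents) - 1, updated))
        return self.sub_contents[self.pointer]

# Example: flower images ordered from closed to fully open, so repeated
# positive recognitions walk the pointer toward full bloom.
blooms = InteractiveContentSet(
    ["closed.png", "quarter_open.png", "half_open.png", "fully_open.png"])
print(blooms.step("positive"))  # -> "quarter_open.png"
print(blooms.step("positive"))  # -> "half_open.png"
print(blooms.step("negative"))  # -> "quarter_open.png"
```

Read this way, holding a positive expression gradually advances the displayed state, which is consistent with the flower example of claim 5.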
11. An interaction apparatus based on expression recognition, comprising:
a display;
an image collector configured to acquire a facial image of a user;
an expression recognition module configured to identify a facial expression of the user based on the facial image; and
a controller configured to adjust the interactive content to be output according to the facial expression and to control the display to display the interactive content.
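For concreteness, one way the four components of claim 11 might cooperate on each frame; every class and method name here is hypothetical, not from the patent:

```python
def interaction_loop_once(image_collector, expression_recognizer, controller, display):
    """One pass of the claim-11 pipeline: capture -> recognize -> adjust -> show."""
    face_image = image_collector.acquire()                   # image collector
    expression = expression_recognizer.identify(face_image)  # recognition module
    content = controller.adjust_content(expression)          # controller picks content
    display.show(content)                                    # display renders it
```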
12. The interaction apparatus of claim 11, wherein the adjusting of the interactive content to be output according to the facial expression comprises:
outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression;
outputting interactive content corresponding to a negative expression based on the facial image being recognized as the negative expression; and
outputting interactive content corresponding to a neutral expression based on the facial image being recognized as the neutral expression.
13. The interaction apparatus of claim 11, wherein the interactive content comprises an object, the object comprises a plurality of different states, and the state of the object in the output interactive content differs according to the facial expression.
14. An intelligent interaction device, comprising:
an image acquisition unit configured to acquire a facial image of a user;
a processor;
a memory having stored thereon computer-executable instructions that, when executed by the processor, implement the method of any one of claims 1-10; and
an output unit configured to display the current interactive content.
15. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the method of any one of claims 1-10.
CN202010005487.8A 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition Active CN111507149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010005487.8A CN111507149B (en) 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition

Publications (2)

Publication Number Publication Date
CN111507149A 2020-08-07
CN111507149B 2023-10-27

Family

ID=71871033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010005487.8A Active CN111507149B (en) 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition

Country Status (1)

Country Link
CN (1) CN111507149B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120120219A1 (en) * 2010-11-15 2012-05-17 Hon Hai Precision Industry Co., Ltd. Electronic device and emotion management method using the same
CN106446753A (en) * 2015-08-06 2017-02-22 南京普爱医疗设备股份有限公司 Negative expression identifying and encouraging system
CN105082150A * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intention recognition
CN108733209A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Man-machine interaction method, device, robot and storage medium
CN109819100A (en) * 2018-12-13 2019-05-28 平安科技(深圳)有限公司 Mobile phone control method, device, computer installation and computer readable storage medium
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Man-machine interaction method and system based on Emotion identification
CN110363079A (en) * 2019-06-05 2019-10-22 平安科技(深圳)有限公司 Expression exchange method, device, computer installation and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021043023A1 (en) * 2019-09-02 2021-03-11 京东方科技集团股份有限公司 Image processing method and device, classifier training method, and readable storage medium
US11961327B2 (en) 2019-09-02 2024-04-16 Boe Technology Group Co., Ltd. Image processing method and device, classifier training method, and readable storage medium
CN112418146A (en) * 2020-12-02 2021-02-26 深圳市优必选科技股份有限公司 Expression recognition method and device, service robot and readable storage medium
CN112418146B (en) * 2020-12-02 2024-04-30 深圳市优必选科技股份有限公司 Expression recognition method, apparatus, service robot, and readable storage medium
CN115601821A (en) * 2022-12-05 2023-01-13 中国汽车技术研究中心有限公司(Cn) Interaction method based on expression recognition
CN115601821B (en) * 2022-12-05 2023-04-07 中国汽车技术研究中心有限公司 Interaction method based on expression recognition

Also Published As

Publication number Publication date
CN111507149B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
TWI777162B (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US10264177B2 (en) Methods and systems to obtain desired self-pictures with an image capture device
TW201911130A (en) Method and device for remake image recognition
WO2021236296A9 (en) Maintaining fixed sizes for target objects in frames
TWI773096B (en) Makeup processing method and apparatus, electronic device and storage medium
Loke et al. Indian sign language converter system using an android app
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN111507149B (en) Interaction method, device and equipment based on expression recognition
CN108182714B (en) Image processing method and device and storage medium
Agarwal et al. Anubhav: recognizing emotions through facial expression
US11704563B2 (en) Classifying time series image data
KR20170095817A (en) Avatar selection mechanism
CN106254952A Video quality dynamic control method and device
CN108805047A (en) A kind of biopsy method, device, electronic equipment and computer-readable medium
CN111541943B (en) Video processing method, video operation method, device, storage medium and equipment
Vazquez-Fernandez et al. Built-in face recognition for smart photo sharing in mobile devices
CN111491187A (en) Video recommendation method, device, equipment and storage medium
WO2024001095A1 (en) Facial expression recognition method, terminal device and storage medium
CN108197585A (en) Recognition algorithms and device
CN113111782A (en) Video monitoring method and device based on salient object detection
WO2023197780A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN107977636B (en) Face detection method and device, terminal and storage medium
US11804032B2 (en) Method and system for face detection
US20160140748A1 (en) Automated animation for presentation of images
Tapia et al. Sex-classification from cellphones periocular iris images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210514

Address after: Room 2305, luguyuyuan venture building, 27 Wenxuan Road, high tech Development Zone, Changsha City, Hunan Province, 410005

Applicant after: BOE Yiyun Technology Co.,Ltd.

Address before: 100015 No. 10, Jiuxianqiao Road, Beijing, Chaoyang District

Applicant before: BOE TECHNOLOGY GROUP Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20230925

Address after: Room 207, 207M, Building 1, 1818-1 Wenyi West Road, Yuhang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121

Applicant after: BOE Yiyun (Hangzhou) Technology Co.,Ltd.

Address before: Room 2305, luguyuyuan venture building, 27 Wenxuan Road, high tech Development Zone, Changsha City, Hunan Province, 410005

Applicant before: BOE Yiyun Technology Co.,Ltd.

GR01 Patent grant