CN111507149B - Interaction method, device and equipment based on expression recognition

Interaction method, device and equipment based on expression recognition

Info

Publication number
CN111507149B
CN111507149B (application CN202010005487.8A)
Authority
CN
China
Prior art keywords
expression
facial
image
interactive
pointer
Prior art date
Legal status
Active
Application number
CN202010005487.8A
Other languages
Chinese (zh)
Other versions
CN111507149A (en
Inventor
陈冠男
Current Assignee
Boe Yiyun Hangzhou Technology Co ltd
Original Assignee
Boe Yiyun Hangzhou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Boe Yiyun Hangzhou Technology Co ltd
Priority to CN202010005487.8A
Publication of CN111507149A
Application granted
Publication of CN111507149B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interaction method, device and equipment based on expression recognition are disclosed. The interaction method based on expression recognition comprises the following steps: acquiring a facial image of a user; identifying a facial expression of the user based on the facial image; and adjusting the interactive content to be output according to the facial expression. The interactive content comprises an object, the object comprises a plurality of different states, and when the facial expressions differ, the state of the object in the output interactive content differs accordingly. The interaction method, device and equipment based on expression recognition have a simple structure, a fast-executing algorithm, and good real-time performance and interactivity.

Description

Interaction method, device and equipment based on expression recognition
Technical Field
The invention relates to the field of expression recognition, in particular to an interaction method, device and equipment based on expression recognition.
Background
Facial feature recognition has been a hot topic in biometric pattern recognition in recent years. The technology detects and locates the facial feature points of a face and then performs applications such as face matching and expression analysis based on those feature points. In recent years, many research institutions and enterprises have invested heavily in the field of target recognition and obtained a series of results, which have found wide application in industries such as security, finance, and entertainment. Expression recognition is an extension of facial feature recognition and is likewise a hot topic in the field. Practical systems based on expression recognition have already appeared in products in many fields, such as interactive systems based on expression recognition. However, existing interactive systems based on expression recognition have a complex structure, slow algorithm execution, and poor real-time performance and interactivity, so there is a need for an interaction method and system with a simple structure, fast algorithm execution, and better real-time performance and interactivity.
Disclosure of Invention
The embodiment of the invention provides an interaction method based on expression recognition, which comprises the following steps: acquiring a facial image of a user; identifying a facial expression of the user based on the facial image; and adjusting the interactive content to be output according to the facial expression.
According to an embodiment of the present invention, the adjusting the interactive content to be output according to the facial expression includes: outputting interactive content corresponding to a positive expression based on the facial image being identified as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being identified as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being identified as the neutral expression.
According to an embodiment of the present invention, the identifying the facial expression of the user based on the facial image includes: when the time for which the user holds a facial expression of the same category exceeds a predetermined time threshold, identifying the facial image as a specific facial expression, which is taken as the facial expression of the user, wherein the specific facial expression is one of a positive expression, a neutral expression and a negative expression.
According to the embodiment of the invention, the interactive content comprises an object, the object comprises a plurality of different states, and when the facial expressions are different, the states of the object in the output interactive content are different.
According to an embodiment of the present invention, the object is a flower, and the plurality of different states are a flower closed state, a flower half-open state, and a flower fully open state; when the facial expression is happy, the output interactive content is an image of a fully open flower; when the facial expression is anger, the output interactive content is an image of a closed flower; and when the facial expression is a neutral expression, the output interactive content is an image of a half-open flower.
According to an embodiment of the invention, the interactive content comprises at least one of an image, a video, and an audio.
According to an embodiment of the present invention, wherein identifying the facial expression of the user based on the facial image comprises: extracting expression features of the facial image; based on the expression characteristics of the extracted facial image, obtaining expression grading values of the facial image; and performing expression recognition on the facial image based on the expression score value.
According to an embodiment of the present invention, the outputted interactive content is selected from at least one sub-interactive content in an interactive content set including a plurality of sub-interactive contents; wherein each sub-interactive content in the set of interactive contents is indicated with a pointer; wherein a pointer offset corresponding to the specific facial expression is generated based on the recognition result of the specific facial expression; and current interactive content is output from the set of interactive contents based on a current value of the pointer and the pointer offset.
According to an embodiment of the present invention, the generating the pointer offset corresponding to the specific facial expression based on the recognition result of the specific facial expression includes: generating a positive pointer offset based on the specific facial expression being identified as a positive expression; generating a negative pointer offset based on the specific facial expression being identified as a negative expression; and generating a zero pointer offset based on the specific facial expression being identified as a neutral expression.
According to an embodiment of the invention, wherein said outputting current interactive content from said set of interactive content based on the current value of the pointer and said pointer offset comprises: generating an updated value of the pointer according to the sum of the current value of the pointer and the pointer offset; and outputting sub-interactive contents corresponding to the updated value of the pointer from the interactive contents set as current interactive contents.
The embodiment of the invention also provides an interaction device based on expression recognition, which comprises: a display;
an image collector configured to: acquiring a facial image of a user; an expression recognition module configured to: identifying a facial expression of the user based on the facial image; and a controller configured to: and adjusting the interactive content to be output according to the facial expression and controlling the display to display the interactive content.
According to an embodiment of the present invention, the adjusting the interactive content to be output according to the facial expression includes: outputting interactive content corresponding to a positive expression based on the facial image being identified as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being identified as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being identified as the neutral expression.
According to an embodiment of the present invention, the interactive content includes an object, the object includes a plurality of different states, and when the facial expressions are different, the states of the object in the output interactive content are different.
The embodiment of the invention also provides intelligent interaction equipment, which comprises: an image acquisition unit configured to acquire a facial image of a user; a processor; a memory having stored thereon computer-executable instructions which, when executed by the processor, implement any of the methods according to embodiments of the present invention; and an output unit for displaying the current interactive content.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement any of the methods according to embodiments of the present invention.
The embodiments of the invention provide an interaction method, device and equipment based on expression recognition, which feature a simple structure, fast algorithm execution, and good real-time performance and interactivity.
Drawings
Fig. 1 shows a schematic diagram of an application scenario of an interactive system based on expression recognition according to an embodiment of the present invention.
Fig. 2 shows a schematic diagram of an image acquisition procedure according to an embodiment of the invention.
Fig. 3 shows a flow chart of an interaction method based on expression recognition according to an embodiment of the invention.
Fig. 4 shows a flowchart of an expression recognition method according to an embodiment of the present invention.
Fig. 5 shows a schematic diagram of a Gabor filter bank response according to an embodiment of the invention.
Fig. 6A and 6B show an exemplary facial image and its corresponding Gabor filter bank response diagram, respectively.
Fig. 7 shows an exemplary facial feature point location schematic in accordance with an embodiment of the invention.
Fig. 8 shows a block diagram of an expression recognition flow corresponding to the expression recognition method of fig. 4.
FIG. 9 shows an interaction diagram of an exemplary interaction content set in accordance with an embodiment of the present invention.
FIG. 10 illustrates a flow diagram of an example interaction method based on expression recognition, according to an embodiment of the invention.
FIG. 11 illustrates a block flow diagram of an exemplary artistic drawing recommendation scenario according to an embodiment of the present invention.
Fig. 12 shows a schematic diagram of an interaction device based on expression recognition according to an embodiment of the invention.
FIG. 13 shows a schematic diagram of a smart interactive device, in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present invention and not all embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein.
In the present specification and drawings, substantially the same or similar steps and elements are denoted by the same or similar reference numerals, and repeated descriptions of the steps and elements will be omitted. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance or order.
Fig. 1 shows a schematic diagram of an application scenario 100 of an interactive system based on expression recognition according to an embodiment of the present invention, and fig. 2 shows a schematic diagram of an image acquisition procedure 200 according to an embodiment of the invention.
As shown in fig. 1, in a scene 100, a user 101 achieves interaction with a smart device 102 through facial expressions.
Referring to fig. 1 and 2, the image capturing apparatus 103 captures a real-time facial image of a user 101 who intends to interact with the smart device 102, then transmits the captured facial image to the system background 201 of the smart device 102 for analysis and processing as shown in fig. 2, and recognizes a current facial expression of the user 101, and then displays the interactive content 105 conforming to the current facial expression of the user 101 on the display interface 104, thereby realizing real-time interaction of the user 101 with the smart device 102 based on the facial expression.
According to embodiments of the invention, the smart device 102 may be any type of smart device, such as a smart picture frame, desktop computer, tablet computer, smart television, smart appliance, smart phone, smart car device, and the like. The smart device 102 may also include smart interaction means, smart interaction software, etc. that can be installed in the device.
According to embodiments of the present invention, image capture device 103 may be comprised of a video camera, a still camera, or any other device capable of capturing images, which may have a variety of different resolutions and frame rates, such as 240p, 480i, 480p, 720i, 720p, 1080p, etc., and 30fps, 60fps, etc. The image acquisition device 103 may be part of the smart device 102 or may be a separate image acquisition device that can be communicatively coupled to the smart device 102.
According to an embodiment of the present invention, the display interface 104 may be a display interface embedded in the smart device 102, or may be any type of display interface such as a liquid crystal display (Liquid Crystal Display, LCD) display screen, a Cathode Ray Tube (CRT) display screen, etc. connected to the smart device 102.
According to an embodiment of the invention, the system background 201 may include a memory that may store image data acquired by the image acquisition device 103, as well as any other relevant system instructions and data. The system background 201 may also include any processing unit capable of analyzing and processing images and performing expression recognition, for example, a central processing unit (Central Processing Unit, CPU) and a graphics processing unit (Graphic Processing Unit, GPU), etc.
In the embodiment shown in fig. 1, the interactive content 105 is shown as an image of a bloom flower, but the present invention is not limited thereto, and in other embodiments, the interactive content 105 may be any other form of image content, video content, text information, audio content, etc. that may be pre-stored in a memory accessible by the smart device 102, or may be acquired in real time via a local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN), intranet, internet, storage area network (Storage Area Network, SAN), personal area network (Personal Area Network, PAN), metropolitan area network (Metropolitan Area Network, MAN), wireless local area network (Wireless Local Area Network, WLAN), virtual private network (Virtual Private Network, VPN), cellular or other mobile communication network, bluetooth, near field communication (Near-Field Communication, NFC), ultrasonic communication, etc.
In particular, FIG. 3 shows a flow chart of an interaction method 300 based on expression recognition according to an embodiment of the invention.
First, in step S301, a user face image is acquired.
As described above, the user face image may be acquired in real time by any image acquisition device capable of image acquisition, such as a video camera, a still camera, or the like.
In step S302, a facial expression of the user is identified based on the facial image.
According to an embodiment of the present invention, identifying the facial expression of the user based on the facial image may include: when the time for which the user holds a facial expression of the same category exceeds a predetermined time threshold, identifying the facial image as a specific facial expression, which is taken as the facial expression of the user, wherein the specific facial expression is one of a positive expression, a neutral expression and a negative expression.
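As a purely illustrative sketch of this time-threshold rule (the function name, the 5-second value and the per-frame classifier are assumptions, not the patent's implementation), the hold-and-confirm logic might look like:

```python
import time

# Illustrative only: recognize_frame is assumed to be a per-frame classifier that
# returns one of "positive", "neutral", "negative" for the current facial image.
def held_expression(recognize_frame, hold_seconds=5.0):
    current_label, started_at = None, None
    while True:
        label = recognize_frame()
        now = time.time()
        if label != current_label:
            # Category changed: restart the hold timer for the new category.
            current_label, started_at = label, now
        elif now - started_at >= hold_seconds:
            # Same category held beyond the threshold: accept it as the user's expression.
            return current_label
```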
In one embodiment, the positive expression may include any expression corresponding to a positive emotion, for example, a "happy" or "surprise" expression, or a "smiling" or "laughing" expression, and so on. In one embodiment, the neutral expression may include any expression corresponding to a neutral emotion, for example, a "calm", "dazed" or "thoughtful" expression, and so on. In one embodiment, the negative expression may include any expression corresponding to a negative emotion, for example, a "frowning", "anger", "sadness" or "crying" expression, and so on. It should be understood that the various expressions listed here are merely some examples of positive, neutral, and negative expressions, and are not limiting.
Taking a "happy" expression as an example, if the user remains "happy" for more than a predetermined time threshold (e.g., 5 seconds), the system may recognize the user's facial expression as a "happy" expression, in accordance with an embodiment of the present invention. In another embodiment, the facial expression of the user corresponding to the currently acquired facial image of the user may also be determined in real time without a predetermined time threshold. For example, the currently acquired user face image may be directly judged as a "happy" expression. Next, an expression recognition method 400 according to an embodiment of the present invention will be described in detail with reference to fig. 4.
Fig. 4 shows a flowchart of an expression recognition method 400 according to an embodiment of the present invention.
First, in step S401, expression features of a face image are extracted.
As described above, the face image may be a user face image acquired in real time by any image acquisition device capable of image acquisition, such as a video camera, a still camera, or the like. In an embodiment of the present invention, feature vector extraction may be performed on the acquired face image by a Gabor filter bank. However, it will be appreciated by those skilled in the art that the feature extraction of the present invention is not limited to the use of Gabor filters, and any other method suitable for feature extraction of facial images may be used, such as steerable filtering, Schmid filtering, etc. Before describing the specific steps of feature extraction, a brief description of the Gabor kernel function and Gabor filters involved in embodiments of the present invention is given below.
The two-dimensional Gabor kernel function has characteristics similar to those of the two-dimensional receptive fields of simple cells in the mammalian cerebral cortex, namely strong selectivity for spatial position and orientation, and can capture local structural information in both space and frequency; the Gabor kernel is a good approximation of neurons in the visual cortex of higher vertebrates and represents a compromise between accuracy in the time and frequency domains. The Gabor filter is strongly robust to changes in image brightness and contrast and to changes in facial pose, and it expresses the local features most useful for facial recognition.
The two-dimensional Gabor kernel function is shown in the following equation (1):

g(x, y; λ, θ, φ, σ, γ) = exp(−(x′² + γ²·y′²) / (2σ²)) · cos(2π·x′/λ + φ)      (1)

wherein x′ = x·cos θ + y·sin θ, and y′ = −x·sin θ + y·cos θ; λ is the wavelength of the sinusoidal factor, specified in pixels, and in one embodiment λ may be greater than or equal to 2 and less than one fifth of the input image size; θ is the orientation of the parallel stripes of the Gabor kernel, and takes values from 0 to 2π; φ is the phase offset, whose value ranges from −π to π, where 0 and π correspond to the centrally symmetric center-on and center-off functions respectively, and −π/2 and π/2 correspond to antisymmetric functions; γ is the aspect ratio, i.e. the spatial aspect ratio, which determines the ellipticity of the Gabor function shape: the shape is circular when γ = 1 and is elongated along the direction of the parallel stripes when γ < 1; in one embodiment, γ may take a value of 0.5; σ is the standard deviation of the Gaussian factor of the Gabor kernel function, whose value cannot be set directly but varies only with the half-response spatial frequency bandwidth b of the Gabor filter; the value of the bandwidth b must be a positive real number, and in one embodiment b = 1, in which case the relationship between the standard deviation σ and the wavelength λ is σ = 0.56λ. By setting different Gabor kernel parameters, Gabor filters of different scales and orientations can be obtained.
In an embodiment of the present invention, the acquired facial image may be input into a Gabor filter bank comprising a plurality of scales and directions to obtain a corresponding filter response map, as shown in fig. 5.
Fig. 5 shows a Gabor filter bank response diagram 500 according to an embodiment of the invention.
In the embodiment shown in fig. 5, the Gabor filter bank includes 40 filters in total, covering 5 scales and 8 directions, wherein each row corresponds to a set of filters of the same scale and different directions, and each column corresponds to a set of filters of the same direction and different scales.
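A minimal sketch of such a 5-scale, 8-direction bank, assuming OpenCV's getGaborKernel is acceptable; the wavelength values are illustrative, and only the σ = 0.56λ relation (for b = 1) and γ = 0.5 come from the text above:

```python
import cv2
import numpy as np

def build_gabor_bank(wavelengths=(4, 6, 8, 12, 16), n_orientations=8, gamma=0.5):
    # 5 scales x 8 orientations = 40 filters, as in Fig. 5.
    kernels = []
    for lam in wavelengths:
        sigma = 0.56 * lam                      # sigma = 0.56 * lambda for bandwidth b = 1
        ksize = int(6 * sigma) | 1              # odd kernel size covering about +/- 3 sigma
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations  # orientations spread over [0, pi)
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, gamma, psi=0))
    return kernels

def gabor_responses(gray_face, kernels):
    # One response map per filter, e.g. 40 maps for a 150 x 150 grayscale face image.
    return [cv2.filter2D(gray_face, cv2.CV_32F, kernel) for kernel in kernels]
```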
Fig. 6A and 6B illustrate an exemplary facial image 600 and its corresponding Gabor filter bank response map 610, respectively.
According to an embodiment of the present invention, the exemplary face image 600 shown in fig. 6A may be input into a Gabor filter bank including filters of 5 scales and 8 directions as shown in fig. 5; after the filtering process of each filter, the Gabor filter response maps 610 of the exemplary face image 600 over the 5 scales and 8 directions may be obtained, as shown in fig. 6B.
Then, feature points of the face image 600 may be extracted from the filter response map 610, and feature values corresponding to the extracted feature points constitute feature vectors of the face image 600.
In particular, fig. 7 shows an exemplary facial feature point location schematic 700 in accordance with an embodiment of the invention.
According to the embodiment shown in fig. 7, for one exemplary face image, feature values at 68 different positions may be extracted to describe feature points such as the outline, eyebrows, eyes, nose, and mouth of the face image. In embodiments of the present invention, any number of feature values may also be extracted to describe other feature points of the facial image, such as the laugh lines, forehead, etc. In one embodiment, all feature points including the contour, eyebrows, eyes, nose, mouth, and the like of the face image may be extracted as shown in fig. 7, and the feature values corresponding to all the feature points may then be formed into a feature vector of the face image (for example, a 1×68 one-dimensional vector). In another embodiment, only part of the feature points of the facial image, such as the eyes and mouth, may be extracted, in which case a smaller feature vector (e.g., a 1×32 one-dimensional vector) may be constructed.
In an embodiment of the present invention, one sub-feature vector may be extracted from each of the filter response maps shown in fig. 6B (for example, 40 one-dimensional vectors of size 1×68 may be extracted), and all the sub-feature vectors may then be spliced together to form the overall feature vector of the face image (for example, a 1×2720 one-dimensional vector). In another embodiment of the present invention, all the filter response maps shown in fig. 6B may be screened in advance according to a specific standard and method, and feature vectors may then be extracted only from one or more of the screened filter response maps, which reduces the amount of data processing and increases the processing speed.
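A hedged sketch of this splicing step; the 68-point landmark array is assumed to come from some external landmark detector, which the patent does not name:

```python
import numpy as np

def face_feature_vector(response_maps, landmarks):
    # response_maps: e.g. the 40 Gabor response maps of one face image.
    # landmarks: (68, 2) array of (x, y) pixel positions of the facial feature points.
    sub_vectors = []
    for resp in response_maps:
        xs = np.clip(landmarks[:, 0], 0, resp.shape[1] - 1).astype(int)
        ys = np.clip(landmarks[:, 1], 0, resp.shape[0] - 1).astype(int)
        sub_vectors.append(resp[ys, xs])        # 68 response values from one map
    return np.concatenate(sub_vectors)          # 40 x 68 = 2720-dimensional feature vector
```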
Next, returning to fig. 4, in step S402, expression score values of the face image are acquired based on the expression features of the extracted face image.
In an embodiment of the present invention, the expression score value of the facial image may be obtained by a random forest regressor. In an embodiment of the present invention, the random forest regressor may be pre-trained with a set of facial image samples having preset expression scoring values. The training of the random forest regressor and the expression score obtaining process are described in detail below with reference to specific embodiments.
First, as described above, according to the embodiment of the present invention, facial expressions can be classified into three expression categories of negative expression, neutral expression, and positive expression. In some embodiments, facial expressions may also be categorized into other expression categories, or the three expression categories may be further subdivided, for example, negative expressions may be further subdivided into "sad" and "anger", positive expressions may be further subdivided into "happy" and "surprise", etc.
In the embodiment of the present invention, a sufficient number of face image samples may first be constructed, and an expression score value may be set in advance for each face image sample. In one embodiment, a range of expression scores may be preset, such as [-10, 10], and a corresponding expression score value is then set for the facial expression contained in each facial image sample according to how positive or negative it is. For example, the expression score value of a facial image sample containing a "surprise" expression may be preset to 10, that of a sample containing a "laugh" expression to 8, that of a sample containing a "smile" expression to 2, that of a sample containing a "calm" expression to 0, that of a sample containing a "frown" expression to -2, and that of a sample containing an "anger" expression to -10, and so on. The feature vectors of the facial image samples with the preset expression score values can then be extracted by the feature extraction method described above and used to train the random forest regressor. After training, the random forest regressor can output the expression score value corresponding to each input facial image (or the feature vector of the facial image).
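A training sketch under the assumption that a scikit-learn style random forest regressor is acceptable; the patent does not specify a library or any hyperparameters, so these are illustrative:

```python
from sklearn.ensemble import RandomForestRegressor

def train_expression_regressor(X, y):
    # X: one Gabor feature vector per face image sample, shape (n_samples, 2720).
    # y: the preset expression score of each sample, in [-10, 10] as described above.
    regressor = RandomForestRegressor(n_estimators=100, random_state=0)
    regressor.fit(X, y)
    return regressor

def expression_score(regressor, feature_vector):
    # Returns a scalar score for one face; higher means a more positive expression.
    return float(regressor.predict(feature_vector.reshape(1, -1))[0])
```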
Returning again to fig. 4, in step S403, expression recognition is performed on the face image based on the expression score value.
In an embodiment of the present invention, the facial image may be associated with a corresponding expression category according to preset classification threshold ranges based on the expression score value, wherein different classification threshold ranges correspond to different expression categories. For example, in an embodiment containing the three expression categories of negative, neutral and positive, the threshold range [-10, -1) may be associated with the negative expression, the threshold range [-1, 1] with the neutral expression, and the threshold range (1, 10] with the positive expression. It will be appreciated by those skilled in the art that the classification threshold ranges may also be preset in any other way. Then, based on the obtained expression score value, the facial image may be associated with a specific expression category, thereby realizing expression recognition of the facial image. For example, in this embodiment, a facial image with an expression score value of 5 may be recognized as a positive expression.
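Using the example threshold ranges above, the score-to-category mapping reduces to a few comparisons; the cut points are the illustrative values from this paragraph, not fixed by the patent:

```python
def classify_expression(score):
    if score < -1:
        return "negative"   # scores in [-10, -1)
    elif score <= 1:
        return "neutral"    # scores in [-1, 1]
    else:
        return "positive"   # scores in (1, 10]
```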
The expression recognition method based on the Gabor filter and the random forest regressor described above can accurately perform multi-expression classification and recognition, and the algorithm executes quickly: in one embodiment, for a 150×150 facial image, a processing speed of 60 fps can be achieved on an RK3399 ARM development board, which fully meets the requirements of real-time expression analysis.
Fig. 8 shows an expression recognition flow diagram 800 corresponding to the expression recognition method 400 of fig. 4.
After the facial image is subjected to the expression recognition according to the expression recognition method shown in fig. 4 or 8, it is possible to return to step S303 in fig. 3 to adjust the interactive contents to be output according to the recognized facial expression.
According to an embodiment of the present invention, adjusting the interactive content to be output according to the facial expression may include: outputting interactive content corresponding to a positive expression based on the facial image being identified as the positive expression; outputting interactive content corresponding to a negative expression based on the facial image being identified as the negative expression; and outputting interactive content corresponding to a neutral expression based on the facial image being identified as the neutral expression. For example, when the user's facial image is identified as a "happy" expression, interactive content corresponding to the "happy" expression may be output, such as a fully open flower, a cartoon character's smiling face, or cheerful music; when the user's facial image is identified as a "calm" expression, interactive content corresponding to the "calm" expression may be output, such as a calm lake surface, a half-open flower, or gentle music; when the user's facial image is identified as a "sad" expression, interactive content corresponding to the "sad" expression may be output, such as a closed flower, a cartoon character's crying face, or melancholy music. It should be appreciated that the correspondences listed here are merely illustrative and not limiting, and any particular expression may also correspond to other different interactive content according to any predetermined rule. For example, in one embodiment, when the user's facial image is identified as a "sad" expression, a cartoon character's smiling face may be output, or the music may gradually transition from calm to cheerful, so as to guide the user out of the "sad" emotion more quickly.
According to the embodiment of the invention, the interactive content can contain an object, the object can contain a plurality of different states, and when the recognized facial expressions of the user differ, the state of the object in the output interactive content can differ accordingly. In one embodiment, the object may be a flower and the plurality of different states may be a flower closed state, a flower half-open state, and a flower fully open state; when the facial expression of the user is happy, the output interactive content can be an image of a fully open flower; when the facial expression of the user is anger, the output interactive content can be an image of a closed flower; and when the facial expression of the user is a neutral expression, the output interactive content can be an image of a half-open flower. In this embodiment, the plurality of different states of the flower may also be a plurality of successive dynamic states covering the entire process from the flower being closed to the flower being fully open.
In another embodiment, the subject may be a cartoon character face, and the corresponding plurality of different states may be a cartoon character crying state, a cartoon character neutral expression state, and a cartoon character smiling state. When the facial expression of the user is happy, the output interaction content can be an image of the smiling face of the cartoon character; when the facial expression of the user is anger, the output interaction content can be an image of the crying face of the cartoon character; and when the facial expression of the user is a neutral expression, the output interactive content can be an image of the cartoon character neutral expression.
In yet another embodiment, the object may be the sun, and the corresponding plurality of different states may be a sunrise state, a sunset state, and a state in which the sun has just risen halfway into the sky. When the facial expression of the user is a neutral expression, the output interactive content can be an image of the sun just risen halfway into the sky; when the facial expression of the user is happy, the output interactive content can be an image of the sunrise; and when the user's facial expression is anger, the output interactive content may be an image of the sunset.
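The three examples above all reduce to a lookup from expression category to object state; a minimal sketch, in which the object and state names are illustrative placeholders rather than the patent's own identifiers:

```python
# Mapping a recognized expression category to an object state, following the
# flower / cartoon face / sun examples above.
STATE_TABLE = {
    "flower":  {"positive": "fully open", "neutral": "half-open",   "negative": "closed"},
    "cartoon": {"positive": "smiling",    "neutral": "neutral",     "negative": "crying"},
    "sun":     {"positive": "sunrise",    "neutral": "halfway up",  "negative": "sunset"},
}

def object_state(obj, expression):
    # expression is one of "positive", "neutral", "negative".
    return STATE_TABLE[obj][expression]
```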
According to an embodiment of the present invention, the interactive contents to be outputted may be selected from at least one sub-interactive content in an interactive content set including a plurality of sub-interactive contents; wherein each sub-interactive content in the set of interactive contents may be indicated with a pointer.
In particular, still taking the embodiment in which the object in the interactive content is a flower as an example, the set of interactive content may include a set of a plurality of consecutive image frames throughout the process from a flower closed state to a flower fully open state. For example, the interactive contents set may be an interactive contents linked list, wherein each node in the linked list corresponds to one sub-interactive contents in the interactive contents set, and each node is indicated by a pointer, as shown in fig. 9.
Fig. 9 shows an interaction diagram of an exemplary interaction content set 900 according to an embodiment of the invention, and fig. 10 shows a flow diagram 1000 of an exemplary interaction method based on expression recognition according to an embodiment of the invention.
In the embodiment shown in connection with fig. 9 and 10, a set of dynamic image frames, such as the image frames 1-11 shown in fig. 9, may be imported into the interactive content set in advance; these 11 image frames may correspond to successive images of the flower at different points in time during the process from being closed to being open. The 11 image frames may then be formed into an image frame linked list 902 comprising 11 nodes, each corresponding to one image frame, in the sequential order from flower closed to flower open. The image frame to be currently output may be indicated by the current value of the pointer 901; for example, as shown in fig. 9, where the pointer 901 currently points to node 6 in the image frame linked list 902 (i.e., the current value of the pointer 901 is 6), the half-open image frame corresponding to node 6 may be regarded as the content to be currently output.
In one embodiment, a pointer offset corresponding to a particular facial expression may be generated based on a recognition result of the particular facial expression; and outputting the current interactive content from the interactive content set based on the current value of the pointer and the pointer offset.
Specifically, still taking the above-described embodiment containing the three expression categories of negative, neutral and positive as an example, a positive pointer offset may be generated based on the facial expression of the user being identified as a positive expression, for example, pointer offset = 1; a negative pointer offset may be generated based on the facial expression of the user being identified as a negative expression, for example, pointer offset = -1; and a zero pointer offset may be generated based on the facial expression of the user being identified as a neutral expression, for example, pointer offset = 0. In embodiments where the expression categories are further subdivided, the value of the pointer offset may be subdivided accordingly. For example, in the case where the positive expression is further subdivided into "happy" and "surprise", a pointer offset of value 1 may be generated based on the facial expression of the user being identified as a "happy" expression, and a pointer offset of value 2 may be generated based on the facial expression of the user being identified as a "surprise" expression. However, it will be appreciated by those skilled in the art that in other embodiments of the present invention, the pointer offsets corresponding to different expression categories may also be set to any other values.
In one embodiment, an updated value of the pointer may be generated from a sum of a current value of the pointer and the pointer offset, and sub-interactive contents corresponding to the updated value of the pointer may be output from the interactive contents set as current interactive contents.
Specifically, taking the above embodiment containing the three expression categories of negative, neutral and positive as an example, as shown in fig. 9, assume that the current value of the pointer 901 is 6 (i.e., it points to node 6 in the linked list). If the current facial expression of the user is identified as a positive expression, a pointer offset of value 1 may be generated according to the method of the above embodiment, and the updated value 7 of the pointer may then be generated from the sum of the current value of the pointer 901 and the pointer offset (i.e., the updated pointer points to node 7 in the linked list); the blooming image frame corresponding to node 7 may thus be regarded as the interactive content to be currently output. According to the embodiment of the invention, if the facial expression of the user is still identified as a positive expression at the next moment, the system can take the blooming image frame corresponding to node 8 as the interactive content to be output at the next moment; if the user's facial expression is identified as a positive expression at a plurality of successive moments, the system can output a continuous dynamic process in which the flower gradually opens. The pointer 901 can thus slide to different positions of the linked list depending on its current value and the pointer offset. Therefore, according to the method provided by the embodiment of the invention, highly real-time interaction between the user and the interactive system can be realized: the system can output an image matching the user's current facial expression, and can output continuous dynamic image frames according to the user's successive facial expressions, making the interaction more engaging and interesting.
In other embodiments of the present invention, the updated values of the pointers may also be generated according to other predetermined rules, such as updating the pointer values only when N (N is a positive integer) expressions of the same category are continuously recognized, or updating the pointer values only when it is recognized that the time for which the user holds facial expressions of the same category exceeds a predetermined time threshold, and so on.
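A sketch of this pointer update for the 11-frame flower sequence, assuming the example offsets above (+1 positive, 0 neutral, -1 negative); returning None signals that the pointer has left the list, which the trigger-threshold behaviour described next can build on:

```python
OFFSETS = {"positive": 1, "neutral": 0, "negative": -1}

def next_frame(frames, pointer, expression, lower=1, upper=11):
    # frames: the 11 image frames of the linked list, indexed by node numbers 1..11.
    pointer = pointer + OFFSETS[expression]
    if pointer < lower or pointer > upper:
        return pointer, None                 # pointer left the list: trigger threshold crossed
    return pointer, frames[pointer - 1]      # sub-interactive content addressed by the pointer
```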
In an embodiment of the present invention, in addition to outputting the interactive content from the previously imported interactive content set according to the exemplary interaction method described above in connection with fig. 10, second interactive content may be output from a second interactive content set different from the above interactive content set when the updated value of the pointer exceeds a predetermined trigger threshold.
In one embodiment of the present invention, the length range of the image frame linked list 902 shown in fig. 9 may be used as the predetermined trigger threshold, that is, the upper trigger threshold may be set to the maximum node number 11 of the image frame linked list 902 and the lower trigger threshold may be set to the minimum node number 1 of the image frame linked list 902. In this embodiment, when the updated value of the pointer is greater than 11 or less than 1, the second interactive content may be output from the second interactive content set. In an embodiment of the present invention, the trigger threshold may also be set in other ways; for example, the upper trigger threshold may be defined as N positive expressions being identified consecutively, and the lower trigger threshold as M negative expressions being identified consecutively (N and M are both positive integers). In embodiments that further subdivide the categories into, for example, "anger", "sad", "neutral", "happy" and "surprise", a plurality of respective trigger thresholds may also be set in segments so as to output second interactive content corresponding to the different expression categories respectively.
In an embodiment of the present invention, the second interactive content set may be any form of image content, video content, text information, audio content, etc., which may be pre-stored in a memory accessible to the interactive system, or may be acquired in real time via a local area network, a wide area network, an intranet, the internet, a storage area network, a personal area network, a metropolitan area network, a wireless local area network, a virtual private network, a cellular or other mobile communication network, bluetooth, near field communication, ultrasonic communication, etc.
In another embodiment of the present invention, it is also possible to determine an expression category currently corresponding to the facial image based on the directional characteristic of the predetermined trigger threshold, and output the interactive content corresponding to the expression category as the current interactive content, as shown in fig. 11.
FIG. 11 illustrates a flow diagram 1100 of an exemplary artistic drawing recommendation scenario according to an embodiment of the present invention.
In the scenario shown in fig. 11, when the updated value of the pointer exceeds the predetermined trigger threshold, it may be further determined whether the updated value exceeds the upper trigger threshold or the lower trigger threshold. If the upper trigger threshold corresponds to the "happy" expression category and the lower trigger threshold corresponds to the "unhappy" expression category (i.e., the directional characteristic of the trigger threshold), then when the updated value of the pointer is detected to exceed the upper trigger threshold, the current expression of the user can be judged to be "happy", so that a recommended drawing suited to a happy mood can be recommended and displayed; when the updated value of the pointer is detected to fall below the lower trigger threshold, the current expression of the user can be judged to be "unhappy", so that a recommended drawing suited to an unhappy mood can be recommended and displayed. This achieves the purpose of interacting with the user in real time and recommending drawings suited to the user's mood. In another embodiment of the invention, the number of times a particular drawing is recommended and the user's mood feedback while the drawing is displayed may also be recorded, so that subsequent recommendations can be optimized by analyzing the recorded information.
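A hedged sketch of this recommendation branch; the drawing collections and function name are placeholders, and which drawings suit which mood is outside the patent text:

```python
def recommend_drawing(pointer, happy_drawings, unhappy_drawings, lower=1, upper=11):
    # Called once next_frame() reports that the pointer left the list.
    if pointer > upper:
        return happy_drawings[0]             # upper threshold crossed: user judged "happy"
    if pointer < lower:
        return unhappy_drawings[0]           # lower threshold crossed: user judged "unhappy"
    return None                              # still inside the list: keep showing frames
```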
Fig. 12 shows a schematic diagram of an interaction device 1200 based on expression recognition according to an embodiment of the invention.
The expression recognition-based interaction device 1200 according to an embodiment of the present invention may include: a display 1201, an image collector 1202, an expression recognition module 1203, and a controller 1204. The display 1201 may be any type of display built into or external to the apparatus 1200, such as a Liquid Crystal Display (LCD), cathode Ray Tube (CRT) display, or the like. Image collector 1202 may be any device capable of image collection, such as a video camera, a still camera, or a smart phone with a camera function, etc. Image collector 1202 is configured to acquire a user face image. Expression recognition module 1203 is configured to recognize a facial expression of the user based on the user facial image acquired by image collector 1202. The controller 1204 is configured to adjust the interactive content to be output according to the recognized facial expression and control the display 1201 to display the interactive content.
The expression recognition-based interaction device 1200 according to an embodiment of the present invention may further include: a recommendation module (not shown) configured to output the second interactive content from the second set of interactive contents in case the updated value of the pointer exceeds a predetermined trigger threshold.
Fig. 13 shows a schematic diagram of a smart interactive device 1300 according to an embodiment of the invention.
As shown in fig. 13, a smart interactive device 1300 according to an embodiment of the present invention may include: an image acquisition unit 1301, a processor 1302, a memory 1303 and an output unit 1304.
The image acquisition unit 1301 may be formed by a video camera, a still camera, or any other device capable of capturing images, and may be part of the intelligent interactive device 1300 or may be a separate image acquisition unit that can be communicatively coupled to the intelligent interactive device 1300. The image acquired by the image acquisition unit 1301 may be stored in the memory 1303.
The processor 1302 may perform various actions and processes according to programs stored in the memory 1303. In particular, processor 1302 may be an integrated circuit chip with signal processing capabilities. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. Various methods, steps, and logic blocks disclosed in embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, and may be an X86 architecture or an ARM architecture or the like.
The memory 1303 stores computer-executable instruction code that, when executed by the processor 1302, implements the expression recognition method and the expression-recognition-based interaction method according to the embodiments of the present invention. The memory 1303 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synchronous Link Dynamic Random Access Memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The output unit 1304 may be any type of display interface capable of displaying the current interactive content, such as a Liquid Crystal Display (LCD) interface, a Cathode Ray Tube (CRT) display interface, etc., which may be part of the smart interactive device 1300 or may be a separate display interface that can be communicatively connected to the smart interactive device 1300. In an embodiment of the present invention, the output unit 1304 may further include output devices capable of outputting other forms of interactive content, such as a speaker for outputting audio interactive content, a vibration generator for outputting vibration effects, and the like.
The invention also provides a computer readable storage medium having stored thereon computer executable instructions which when executed by a processor implement the expression recognition method and the interaction method based on expression recognition according to embodiments of the invention. Similarly, the computer readable storage medium in embodiments of the present invention may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The embodiments of the invention provide an interaction method, device and equipment based on expression recognition, which feature a simple structure, fast algorithm execution, and good real-time performance and interactivity.
It is noted that the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various example embodiments of the invention may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of the embodiments of the invention are illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The exemplary embodiments of the invention described in detail above are illustrative only and are not limiting. It will be appreciated by those skilled in the art that various modifications and combinations of the embodiments or features thereof can be made without departing from the principles and spirit of the invention, and such modifications are intended to be within the scope of the invention.

Claims (14)

1. An interaction method based on expression recognition comprises the following steps:
acquiring a facial image of a user;
identifying a facial expression of the user based on the facial image; and
adjusting the interactive content to be output according to the facial expression,
wherein the outputted interactive content is selected from at least one sub-interactive content in an interactive content set comprising a plurality of sub-interactive contents, the interactive content set comprising an image frame linked list generated based on a plurality of consecutive image frames, wherein each node in the image frame linked list corresponds to one sub-interactive content and each node is represented by a pointer,
wherein adjusting the interactive content to be output according to the facial expression includes:
generating pointer offsets corresponding to the identified facial expressions; and
outputting current interactive contents from the set of interactive contents based on the current value of the pointer and the pointer offset,
the interaction method further comprises the following steps:
and outputting, according to successive facial expressions of the user identified at a plurality of successive moments, continuous dynamic image frames based on the image frame linked list as the interactive content to be output.
2. The interaction method of claim 1, wherein the adjusting the interaction content to be output according to the facial expression comprises:
outputting interactive content corresponding to the positive expression based on the facial image being identified as the positive expression;
outputting interactive content corresponding to the negative expression based on the facial image being identified as the negative expression; and
based on the facial image being identified as a neutral expression, interactive content corresponding to the neutral expression is output.
3. The interaction method of claim 1, wherein the identifying the facial expression of the user based on the facial image comprises:
when a time for which the user holds facial expressions of the same category exceeds a predetermined time threshold, identifying a specific facial expression from the facial image as the facial expression of the user, wherein the specific facial expression is one of a positive expression, a neutral expression, and a negative expression.
4. The interaction method of claim 1, wherein the interactive content comprises an object, the object comprises a plurality of different states, and when the facial expressions are different, the state of the object in the output interactive content is different.
5. The interaction method of claim 4, wherein the object is a flower, and the plurality of different states are a flower closed state, a flower semi-open state, and a flower fully open state; wherein,
when the facial expression is happy, the output interactive content is an image of the flower fully opened;
when the facial expression is angry, the output interactive content is an image of the flower closed; and
when the facial expression is a neutral expression, the output interactive content is an image of the flower semi-open.
6. The interaction method of claim 1, wherein the interactive content comprises at least one of an image, a video, and audio.
7. The interaction method of claim 1, wherein identifying a facial expression of a user based on the facial image comprises:
extracting expression features from the facial image;
obtaining an expression grading value of the facial image based on the extracted expression features; and
performing expression recognition on the facial image based on the expression grading value.
8. The interaction method of claim 3, wherein generating a pointer offset corresponding to the specific facial expression, based on a recognition result of the specific facial expression, comprises:
generating a positive pointer offset based on the specific facial expression being identified as a positive expression;
generating a negative pointer offset based on the specific facial expression being identified as a negative expression; and
generating a zero pointer offset based on the specific facial expression being identified as a neutral expression.
9. The interaction method of claim 1, wherein the outputting the current interactive content from the interactive content set based on the current value of the pointer and the pointer offset comprises:
generating an updated value of the pointer according to the sum of the current value of the pointer and the pointer offset; and
outputting, from the interactive content set, the sub-interactive content corresponding to the updated value of the pointer as the current interactive content.
10. An interaction device based on expression recognition, comprising:
a display;
an image collector configured to acquire a facial image of a user;
an expression recognition module configured to identify a facial expression of the user based on the facial image; and
a controller configured to adjust interactive content to be output according to the facial expression and to control the display to display the interactive content,
wherein the output interactive content is at least one sub-interactive content selected from an interactive content set comprising a plurality of sub-interactive contents, the interactive content set comprising an image frame linked list generated based on a plurality of consecutive image frames, wherein each node in the image frame linked list corresponds to one sub-interactive content and each node is represented by a pointer,
wherein adjusting the interactive content to be output according to the facial expression includes:
generating a pointer offset corresponding to the identified facial expression; and
outputting current interactive content from the interactive content set based on the current value of the pointer and the pointer offset,
the controller is further configured to:
outputting, according to consecutive facial expressions of the user identified at a plurality of consecutive moments, consecutive dynamic image frames based on the image frame linked list as the interactive content to be output.
11. The interaction device of claim 10, wherein the adjusting the interactive content to be output according to the facial expression comprises:
outputting interactive content corresponding to a positive expression based on the facial image being recognized as the positive expression;
outputting interactive content corresponding to a negative expression based on the facial image being identified as the negative expression; and
outputting interactive content corresponding to a neutral expression based on the facial image being identified as the neutral expression.
12. The interaction device of claim 10, wherein the interactive content comprises an object, the object comprising a plurality of different states, and the state of the object in the output interactive content is different when the facial expressions are different.
13. An intelligent interaction device, comprising:
an image acquisition unit configured to acquire a facial image of a user;
a processor;
a memory having stored thereon computer-executable instructions which, when executed by the processor, implement the method of any one of claims 1-9; and
an output unit configured to display the current interactive content.
14. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the method of any one of claims 1-9.
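
The pointer arithmetic recited in claims 1, 8 and 9 can be pictured with a short sketch: image frames are linked into a list, a pointer marks the currently displayed node, and each recognized expression maps to an offset that moves the pointer. The Python below is only a minimal illustration under assumed conventions; the names (FrameNode, build_frame_list, step, OFFSETS) and the ±1 offset values are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FrameNode:
    """One node of the image frame linked list; each node holds one sub-interactive content."""
    frame: str                          # placeholder for the image data, e.g. a file path
    prev: Optional["FrameNode"] = None
    next: Optional["FrameNode"] = None

def build_frame_list(frames: List[str]) -> List[FrameNode]:
    """Link consecutive image frames into a doubly linked list and return the nodes in order."""
    nodes = [FrameNode(f) for f in frames]
    for left, right in zip(nodes, nodes[1:]):
        left.next, right.prev = right, left
    return nodes

# Assumed mapping from a recognized expression category to a pointer offset
# (cf. claim 8: positive -> positive offset, negative -> negative offset, neutral -> zero).
OFFSETS = {"positive": +1, "negative": -1, "neutral": 0}

def step(pointer: FrameNode, expression: str) -> FrameNode:
    """Move the pointer by the offset for this expression (cf. claim 9) and return the node to output."""
    offset = OFFSETS[expression]
    while offset > 0 and pointer.next is not None:
        pointer, offset = pointer.next, offset - 1
    while offset < 0 and pointer.prev is not None:
        pointer, offset = pointer.prev, offset + 1
    return pointer

if __name__ == "__main__":
    # Flower example from claim 5: closed -> half-open -> fully open.
    nodes = build_frame_list(["closed.png", "half_open.png", "fully_open.png"])
    pointer = nodes[1]                                   # start at the half-open (neutral) frame
    for expr in ["positive", "positive", "negative"]:    # expressions recognized at consecutive moments
        pointer = step(pointer, expr)
        print(pointer.frame)                             # consecutive outputs form the dynamic sequence
```

Run over consecutive moments, the pointer walks forward through the linked frames while positive expressions are recognized and backward on negative expressions, which is one way the consecutive recognitions of claims 1 and 10 could yield a continuous dynamic sequence such as the flower animation of claim 5.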
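Claims 3 and 7 recite obtaining an expression grading value from extracted features and committing to a specific facial expression only after the same category has been held longer than a predetermined time threshold. The sketch below illustrates such a hold-time filter; the score thresholds (POSITIVE_THRESHOLD, NEGATIVE_THRESHOLD) and HOLD_SECONDS are assumed values, and the actual feature extraction and scoring model are not specified here and are replaced by a plain numeric score.

```python
import time
from typing import Optional

# Assumed thresholds: scores above/below these bounds are treated as positive/negative.
POSITIVE_THRESHOLD = 0.6
NEGATIVE_THRESHOLD = 0.4
HOLD_SECONDS = 1.0   # stand-in for the predetermined time threshold of claim 3

def classify(score: float) -> str:
    """Map an expression grading value to a coarse category (cf. claim 7)."""
    if score >= POSITIVE_THRESHOLD:
        return "positive"
    if score <= NEGATIVE_THRESHOLD:
        return "negative"
    return "neutral"

class ExpressionHoldFilter:
    """Report a specific facial expression only after the same category
    has been held longer than the time threshold (cf. claim 3)."""

    def __init__(self, hold_seconds: float = HOLD_SECONDS):
        self.hold_seconds = hold_seconds
        self.current: Optional[str] = None
        self.since: float = 0.0

    def update(self, score: float, now: Optional[float] = None) -> Optional[str]:
        now = time.monotonic() if now is None else now
        category = classify(score)
        if category != self.current:
            self.current, self.since = category, now   # category changed: restart the hold timer
            return None
        if now - self.since >= self.hold_seconds:
            return category                             # held long enough: recognize this expression
        return None

if __name__ == "__main__":
    f = ExpressionHoldFilter()
    # Simulated grading values at 0.5 s intervals; the smile is recognized only once it has been held.
    for t, score in [(0.0, 0.8), (0.5, 0.85), (1.0, 0.9), (1.5, 0.3)]:
        print(t, f.update(score, now=t))
```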
CN202010005487.8A 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition Active CN111507149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010005487.8A CN111507149B (en) 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition


Publications (2)

Publication Number Publication Date
CN111507149A (en) 2020-08-07
CN111507149B (en) 2023-10-27

Family

ID=71871033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010005487.8A Active CN111507149B (en) 2020-01-03 2020-01-03 Interaction method, device and equipment based on expression recognition

Country Status (1)

Country Link
CN (1) CN111507149B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532971B (en) * 2019-09-02 2023-04-28 京东方科技集团股份有限公司 Image processing apparatus, training method, and computer-readable storage medium
CN112418146B (en) * 2020-12-02 2024-04-30 深圳市优必选科技股份有限公司 Expression recognition method, apparatus, service robot, and readable storage medium
CN115601821B (en) * 2022-12-05 2023-04-07 中国汽车技术研究中心有限公司 Interaction method based on expression recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105082150A (en) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot man-machine interaction method based on user mood and intension recognition
CN106446753A (en) * 2015-08-06 2017-02-22 南京普爱医疗设备股份有限公司 Negative expression identifying and encouraging system
CN108733209A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Man-machine interaction method, device, robot and storage medium
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Man-machine interaction method and system based on Emotion identification
CN109819100A (en) * 2018-12-13 2019-05-28 平安科技(深圳)有限公司 Mobile phone control method, device, computer installation and computer readable storage medium
CN110363079A (en) * 2019-06-05 2019-10-22 平安科技(深圳)有限公司 Expression exchange method, device, computer installation and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201220216A (en) * 2010-11-15 2012-05-16 Hon Hai Prec Ind Co Ltd System and method for detecting human emotion and appeasing human emotion


Also Published As

Publication number Publication date
CN111507149A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
US11170210B2 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
US10783354B2 (en) Facial image processing method and apparatus, and storage medium
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
US20210174072A1 (en) Microexpression-based image recognition method and apparatus, and related device
CN111507149B (en) Interaction method, device and equipment based on expression recognition
US20220172518A1 (en) Image recognition method and apparatus, computer-readable storage medium, and electronic device
TW201911130A (en) Method and device for remake image recognition
EP4105877A1 (en) Image enhancement method and image enhancement apparatus
WO2019133403A1 (en) Multi-resolution feature description for object recognition
Loke et al. Indian sign language converter system using an android app
EP3917131A1 (en) Image deformation control method and device and hardware device
WO2021047587A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
CN110858316A (en) Classifying time series image data
Zhao et al. Applying contrast-limited adaptive histogram equalization and integral projection for facial feature enhancement and detection
WO2024001095A1 (en) Facial expression recognition method, terminal device and storage medium
WO2023137915A1 (en) Feature fusion-based behavior recognition method and apparatus, device and storage medium
US11804032B2 (en) Method and system for face detection
CN109313797B (en) Image display method and terminal
WO2023197780A1 (en) Image processing method and apparatus, electronic device, and storage medium
US20160140748A1 (en) Automated animation for presentation of images
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
CN111507139A (en) Image effect generation method and device and electronic equipment
CN111967436B (en) Image processing method and device
US12020469B2 (en) Method and device for generating image effect of facial expression, and electronic device
Taraghi et al. Object detection using Google Glass

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210514

Address after: Room 2305, luguyuyuan venture building, 27 Wenxuan Road, high tech Development Zone, Changsha City, Hunan Province, 410005

Applicant after: BOE Yiyun Technology Co.,Ltd.

Address before: 100015 No. 10, Jiuxianqiao Road, Beijing, Chaoyang District

Applicant before: BOE TECHNOLOGY GROUP Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20230925

Address after: Room 207, 207M, Building 1, 1818-1 Wenyi West Road, Yuhang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121

Applicant after: BOE Yiyun (Hangzhou) Technology Co.,Ltd.

Address before: Room 2305, luguyuyuan venture building, 27 Wenxuan Road, high tech Development Zone, Changsha City, Hunan Province, 410005

Applicant before: BOE Yiyun Technology Co.,Ltd.

GR01 Patent grant