CN110568933A - human-computer interaction method and device based on face recognition and computer equipment - Google Patents

human-computer interaction method and device based on face recognition and computer equipment

Info

Publication number
CN110568933A
Authority
CN
China
Prior art keywords
face
rendering
human
data
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910871291.4A
Other languages
Chinese (zh)
Inventor
刘凯
杨沙
何从华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Quchuang Technology Co Ltd
Original Assignee
Shenzhen Quchuang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Quchuang Technology Co Ltd filed Critical Shenzhen Quchuang Technology Co Ltd
Priority to CN201910871291.4A priority Critical patent/CN110568933A/en
Publication of CN110568933A publication Critical patent/CN110568933A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Abstract

The application relates to a human-computer interaction method and apparatus, a computer device, and a storage medium based on face recognition, wherein the method comprises the following steps: acquiring face key point data collected by a sensor; analyzing the face key point data to obtain a recognition result of the corresponding facial expression; feeding back a corresponding face rendering request according to the recognition result of the facial expression; and rendering the corresponding face image on the screen according to the face rendering request. By recognizing the expression features of the face, the invention realizes entertaining human-computer interaction, which can relieve the work or life pressure of modern users, share their mood, and improve user satisfaction.

Description

human-computer interaction method and device based on face recognition and computer equipment
Technical Field
The invention relates to the technical field of computer application, in particular to a human-computer interaction method and device based on face recognition, computer equipment and a storage medium.
Background
Currently, with the development of computer technology, face recognition technology has matured and is widely applied in various mobile terminals, such as smart phones and tablets. Manufacturers pay increasing attention to the user experience of face recognition, and how to extend the application scenarios of face recognition technology to improve user experience is a problem that urgently needs to be solved.
In conventional technology, face recognition is generally used to identify a specific user in order to unlock a device, or merely to detect the user's facial features; it has not been used to deeply mine human-computer interaction and thereby improve user satisfaction with the terminal device.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a human-computer interaction method and apparatus based on face recognition, a computer device, and a storage medium.
A human-computer interaction method based on face recognition, the method comprising:
acquiring face key point data collected by a sensor;
analyzing the face key point data to obtain a recognition result of the corresponding facial expression;
feeding back a corresponding face rendering request according to the recognition result of the facial expression; and
rendering the corresponding face image on the screen according to the face rendering request.
In one embodiment, before the step of acquiring the face key point data collected by the sensor, the method further includes:
collecting face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
In one embodiment, the step of analyzing the face key point data to obtain the recognition result of the corresponding facial expression includes:
transmitting the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
processing the face key point data with an algorithm in the HAL layer; and
matching the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
In one embodiment, the step of rendering the corresponding face image on the screen according to the face rendering request includes:
performing expression rendering on the face image through OpenGL according to the face rendering request;
wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
A human-computer interaction device based on face recognition, the device comprising:
an obtaining module, configured to obtain the face key point data collected by a sensor;
a data analysis module, configured to analyze the face key point data to obtain the recognition result of the corresponding facial expression;
a feedback module, configured to feed back a corresponding face rendering request according to the recognition result of the facial expression; and
a rendering module, configured to render the corresponding face image on the screen according to the face rendering request.
In one embodiment, the device further comprises a collection module configured to:
collect face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
In one embodiment, the data analysis module is further configured to:
transmit the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
process the face key point data with an algorithm in the HAL layer; and
match the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
In one embodiment, the rendering module is further configured to:
perform expression rendering on the face image through OpenGL according to the face rendering request;
wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any of the above methods when executing the computer program.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of any of the methods described above.
With the above human-computer interaction method and device, computer equipment, and storage medium based on face recognition, face key point data collected by a sensor are acquired; the data are analyzed to obtain the recognition result of the corresponding facial expression; a corresponding face rendering request is fed back according to that recognition result; and the corresponding face image is rendered on the screen according to the face rendering request. By recognizing the expression features of the face, the invention realizes entertaining human-computer interaction, which can relieve the work or life pressure of modern users, share their mood, and improve user satisfaction.
Drawings
FIG. 1 is a schematic flow chart of a human-computer interaction method based on face recognition in one embodiment;
FIG. 2 is a schematic flow chart of a human-computer interaction method based on face recognition in another embodiment;
FIG. 3 is a schematic data flow diagram illustrating a human-computer interaction method based on face recognition in one embodiment;
FIG. 4 is a schematic structural diagram of a human-computer interaction device based on face recognition according to an embodiment;
FIG. 5 is a schematic structural diagram of a human-computer interaction device based on face recognition in another embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 7 is a block diagram of a portion of a handset associated with the computing device in one embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a human-computer interaction method based on face recognition is provided, and the method includes:
step 102, acquiring face key point data collected by a sensor;
step 104, analyzing the face key point data to obtain the recognition result of the corresponding facial expression;
step 106, feeding back a corresponding face rendering request according to the recognition result of the facial expression;
and step 108, rendering the corresponding face image on the screen according to the face rendering request.
In this embodiment, a human-computer interaction method based on face recognition is provided, which can be applied to various intelligent mobile terminals, for example smart phones. Using face recognition technology, the intelligent terminal judges the mood state of the recognized person from his or her facial expression, for example: joy, anger, sadness, or dejection. After the recognition result is obtained, an entertaining expression is rendered on the screen for the recognized person, serving the playful purpose of the interaction. The technologies used in this embodiment include face recognition and OpenGL expression rendering, and the emphasis of the scheme is on the human-computer expression interaction process.
Specifically, the mobile terminal first has corresponding sensors with which face key point data are collected. For example, at the most basic level, a camera sensor can collect the planar data of the image; if the device has a dual-camera function, the depth-of-field camera can additionally collect corresponding depth data, which improves the accuracy and security of the face recognition.
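As a concrete illustration of what the collected key point data might look like, the following is a minimal Kotlin sketch of a landmark record as a camera-based sensor could deliver it; the type names (FaceKeyPoint, FacePart, FaceFrame) and the fixed landmark set are illustrative assumptions, not structures defined by this application.

    // Minimal sketch of face key point data, assuming a camera sensor that
    // reports planar positions plus optional depth from a dual camera.
    // All names here are hypothetical illustrations.
    enum class FacePart { EYE, NOSE, LIP, CHIN, EYEBROW, EAR }

    data class FaceKeyPoint(
        val part: FacePart,       // which facial part the point belongs to
        val x: Float,             // planar image coordinates, in pixels
        val y: Float,
        val depth: Float? = null  // depth value when a depth-of-field camera is present
    )

    // One frame of sensor output: the key points of a detected face.
    data class FaceFrame(val timestampMs: Long, val points: List<FaceKeyPoint>)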
Then, after acquiring the face key point data through the sensor, the mobile terminal analyzes the data to obtain the recognition result of the corresponding facial expression. Specifically, a person's state can be recognized by an algorithm operating on the key points of the facial parts; the key to improving expression recognition is improving the face recognition algorithm. The result judged by the algorithm (the facial expression) is then fed back, and the corresponding entertaining expression is rendered on the screen according to that result. For example, the face algorithm can judge whether a person's expression is happy, angry, or sad. When the person frowns, the face recognition technology judges from the expression that the person is in a bad mood, and an encouraging expression is rendered on the screen or an encouraging sound is played. When the person shows a smiling face, the face algorithm recognizes the happy state, a cheerful expression is rendered on the screen, and a humorous voice line is played.
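As a hedged sketch of the feedback step just described, the mapping from a recognized expression to a rendering request could look like the following Kotlin fragment; the Expression categories and the RenderRequest fields (including the asset names) are hypothetical and chosen only to mirror the frown/smile examples above.

    // Hypothetical sketch: map a recognized expression to a rendering request.
    // The request carries a visual asset and an optional sound, matching the
    // planar/3D/sound rendering modes mentioned in the embodiments.
    enum class Expression { HAPPY, ANGRY, SAD }

    data class RenderRequest(
        val imageAsset: String,  // e.g. an encouraging or humorous expression image
        val soundAsset: String?  // optional voice line to play
    )

    fun feedback(expression: Expression): RenderRequest = when (expression) {
        // Frowning / bad mood: render an encouraging expression and sound.
        Expression.SAD -> RenderRequest("cheer_up.png", "encouragement.ogg")
        // Smiling: render a cheerful expression and a humorous voice line.
        Expression.HAPPY -> RenderRequest("smiley.png", "joke.ogg")
        Expression.ANGRY -> RenderRequest("calm_down.png", null)
    }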
In this embodiment, face key point data collected by a sensor are acquired; the data are analyzed to obtain the recognition result of the corresponding facial expression; a corresponding face rendering request is fed back according to that recognition result; and the corresponding face image is rendered on the screen according to the face rendering request. By recognizing the expression features of the face, this embodiment realizes entertaining human-computer interaction, which relieves the work or life pressure of modern users, shares their mood, and improves user satisfaction.
In one embodiment, as shown in fig. 2, a human-computer interaction method based on face recognition is provided, in which the step of analyzing the face key point data to obtain the recognition result of the corresponding facial expression includes:
step 202, transmitting the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
step 204, processing the face key point data with an algorithm in the HAL layer;
and step 206, matching the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
Specifically, with reference to the data flow diagram shown in fig. 3, data collected at the application layer are passed to the framework layer of the Android system, then to the Libraries (or the Android Runtime), and then to the HAL layer; the HAL layer returns data according to its algorithm's analysis result.
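The layer-to-layer hand-off can be pictured with the small Kotlin sketch below; the interfaces (FrameworkLayer, RuntimeLayer, HalLayer) are hypothetical stand-ins for the Android framework, Libraries/Android Runtime, and HAL components of fig. 3, not real Android APIs, and the FaceFrame and Expression types come from the earlier sketches.

    // Hypothetical sketch of the fig. 3 data flow: application layer ->
    // framework layer -> Libraries / Android Runtime -> HAL layer, with the
    // HAL returning the analysis result back up the stack.
    interface HalLayer { fun analyze(frame: FaceFrame): Expression }
    interface RuntimeLayer { val hal: HalLayer }
    interface FrameworkLayer { val runtime: RuntimeLayer }

    class ApplicationLayer(private val framework: FrameworkLayer) {
        fun onSensorFrame(frame: FaceFrame): Expression {
            // Each layer forwards the key point data downward; the HAL-side
            // algorithm produces the expression recognition result.
            return framework.runtime.hal.analyze(frame)
        }
    }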
In one embodiment, before the step of acquiring the face key point data collected by the sensor, the method further comprises: collecting face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
In this embodiment, the key points are central to the face recognition algorithm: the expression state of the recognized person can be determined from the key points of the facial parts. Specifically, the collected key points are points within the face region, including the eyes, nose, lips, chin, eyebrows, and ears, and complex data analysis is performed on these data. The result of each round of data processing is closely matched against a large-scale data model to obtain the current person's facial expression; the expression result is derived from large-scale, wide-range face data modeling and facial structure features. Finally, the face recognition algorithm identifies the key points in the image to judge the person's mood state, and an amusing expression is rendered on the screen according to the algorithm's feedback, achieving a playful interaction process.
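As one hedged illustration of the matching step, the sketch below compares a processed feature vector against templates derived from a large-scale data model and returns the expression of the nearest template; the template store and the simple distance metric are assumptions made for illustration, since the application does not specify the matching algorithm (the Expression type is reused from the earlier sketch).

    // Hypothetical sketch: nearest-template matching against a large-scale
    // data model. A production system would use a trained classifier; the
    // embodiment only requires matching the processing result to the model.
    class Template(val features: FloatArray, val expression: Expression)

    fun recognize(features: FloatArray, model: List<Template>): Expression? {
        val best = model.minByOrNull { t ->
            // Squared Euclidean distance between feature vectors.
            t.features.indices.sumOf { i ->
                val d = (features[i] - t.features[i]).toDouble()
                d * d
            }
        }
        return best?.expression  // null when the model is empty
    }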
In one embodiment, the step of rendering the corresponding face image on the screen according to the face rendering request includes: performing expression rendering on the face image through OpenGL according to the face rendering request, wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
This embodiment supports not only planar expression rendering of the two-dimensional screen image but also corresponding 3D rendering and sound rendering. For example, a corresponding humorous expression or amusing sound is rendered through OpenGL, which relieves the work or life pressure of modern users and can be used to share their mood.
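As a minimal sketch of driving the screen from a rendering request with OpenGL on Android, the renderer below uses the standard GLSurfaceView.Renderer callbacks and GLES20 calls; drawing an actual textured expression quad or 3D mesh is elided, and the mood-color mapping is an assumption made purely for illustration.

    import android.opengl.GLES20
    import android.opengl.GLSurfaceView
    import javax.microedition.khronos.egl.EGLConfig
    import javax.microedition.khronos.opengles.GL10

    // Minimal sketch: a renderer that reacts to the current rendering request.
    // Real expression rendering would draw a textured quad or a 3D mesh here;
    // this sketch only clears the screen with a color chosen from the request.
    class ExpressionRenderer : GLSurfaceView.Renderer {
        @Volatile var moodColor = floatArrayOf(0f, 0f, 0f)  // set from the rendering request

        override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
            GLES20.glClearColor(0f, 0f, 0f, 1f)
        }

        override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
            GLES20.glViewport(0, 0, width, height)
        }

        override fun onDrawFrame(gl: GL10?) {
            val (r, g, b) = moodColor
            GLES20.glClearColor(r, g, b, 1f)
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        }
    }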
It should be understood that although the various steps in the flow charts of figs. 1-3 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 1-3 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, a human-computer interaction device 400 based on face recognition is provided, the device comprising:
an obtaining module 401, configured to obtain the face key point data collected by a sensor;
a data analysis module 402, configured to analyze the face key point data to obtain the recognition result of the corresponding facial expression;
a feedback module 403, configured to feed back a corresponding face rendering request according to the recognition result of the facial expression;
and a rendering module 404, configured to render the corresponding face image on the screen according to the face rendering request.
In one embodiment, as shown in fig. 5, the human-computer interaction device 400 based on face recognition further includes a collection module 405 configured to:
collect face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
In one embodiment, the data analysis module 402 is further configured to:
transmit the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
process the face key point data with an algorithm in the HAL layer;
and match the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
In one embodiment, the rendering module 404 is further configured to:
perform expression rendering on the face image through OpenGL according to the face rendering request;
wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
For specific limitations of the human-computer interaction device based on face recognition, reference may be made to the above limitations of the human-computer interaction method based on face recognition, and details thereof are not repeated here.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with external terminals through a network connection. The computer program, when executed by the processor, implements the human-computer interaction method based on face recognition.
Those skilled in the art will appreciate that the structure shown in fig. 6 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the program, when executed by a processor, implements the steps of the above method embodiments.
The embodiment of the present application also provides a computer device. As shown in fig. 7, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments. The computer device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The following takes the computer device being a mobile phone as an example:
Fig. 7 is a block diagram of a partial structure of a mobile phone related to the computer device provided in an embodiment of the present application. Referring to fig. 7, the mobile phone includes: a Radio Frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a wireless fidelity (WiFi) module 770, a processor 780, and a power supply 790. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 7 is not limiting and may include more or fewer components than shown, combine some components, or arrange the components differently.
The RF circuit 710 may be used for receiving and transmitting signals during information transmission or a call; in particular, it may receive downlink information from a base station and pass it to the processor 780 for processing, and may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and an address book). Further, the memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The input unit 730 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone 700. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, also referred to as a touch screen, can collect touch operations of a user on or near it (for example, operations performed on or near the touch panel 731 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 731 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 780, and it can also receive and execute commands from the processor 780. The touch panel 731 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 731, the input unit 730 may include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), and the like.
The display unit 740 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 740 may include a display panel 741. In one embodiment, the display panel 741 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. In one embodiment, the touch panel 731 can cover the display panel 741; when the touch panel 731 detects a touch operation on or near it, the operation is transmitted to the processor 780 to determine the type of the touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to that type. Although in fig. 7 the touch panel 731 and the display panel 741 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 731 and the display panel 741 may be integrated to implement these functions.
The mobile phone 700 may also include at least one sensor 750, such as a light sensor, a motion sensor, or other sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor, which turns off the display panel 741 and/or the backlight when the phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and, when the phone is stationary, the magnitude and direction of gravity; it can be used in applications that recognize the phone's attitude (such as switching between landscape and portrait), in vibration-recognition functions (such as pedometers and tap detection), and the like. The phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 760, the speaker 761, and the microphone 762 may provide an audio interface between the user and the mobile phone. The audio circuit 760 can transmit the electrical signal converted from the received audio data to the speaker 761, which converts it into a sound signal for output; conversely, the microphone 762 converts the collected sound signal into an electrical signal, which is received by the audio circuit 760 and converted into audio data. The audio data are then output to the processor 780 for processing and may afterwards be transmitted to another mobile phone through the RF circuit 710, or output to the memory 720 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 770, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 7 shows the WiFi module 770, it is understood that it is not an essential component of the mobile phone 700 and may be omitted as desired.
The processor 780 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the phone as a whole. In one embodiment, the processor 780 may include one or more processing units. In one embodiment, the processor 780 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 780.
The mobile phone 700 also includes a power supply 790 (such as a battery) for powering the various components. Preferably, the power supply is logically connected to the processor 780 via a power management system, which manages charging, discharging, and power consumption.
In one embodiment, the mobile phone 700 may also include a camera, a Bluetooth module, and the like.
In the embodiment of the present application, the processor 780 included in the mobile terminal implements the above-described steps of the human-computer interaction method based on face recognition when executing the computer program stored in the memory.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A human-computer interaction method based on face recognition, characterized in that the method comprises the following steps:
acquiring face key point data collected by a sensor;
analyzing the face key point data to obtain a recognition result of the corresponding facial expression;
feeding back a corresponding face rendering request according to the recognition result of the facial expression; and
rendering the corresponding face image on the screen according to the face rendering request.
2. The human-computer interaction method based on face recognition according to claim 1, characterized in that before the step of acquiring the face key point data collected by the sensor, the method further comprises:
collecting face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
3. The human-computer interaction method based on face recognition according to claim 2, characterized in that the step of analyzing the face key point data to obtain the recognition result of the corresponding facial expression comprises:
transmitting the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
processing the face key point data with an algorithm in the HAL layer; and
matching the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
4. The human-computer interaction method based on face recognition according to claim 1, characterized in that the step of rendering the corresponding face image on the screen according to the face rendering request comprises:
performing expression rendering on the face image through OpenGL according to the face rendering request;
wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
5. A human-computer interaction device based on face recognition, characterized in that the device comprises:
an obtaining module, configured to obtain the face key point data collected by a sensor;
a data analysis module, configured to analyze the face key point data to obtain the recognition result of the corresponding facial expression;
a feedback module, configured to feed back a corresponding face rendering request according to the recognition result of the facial expression; and
a rendering module, configured to render the corresponding face image on the screen according to the face rendering request.
6. The human-computer interaction device based on face recognition according to claim 5, characterized in that the device further comprises a collection module configured to:
collect face key point data through a sensor at the application layer, wherein the face key points comprise the eyes, nose, lips, chin, eyebrows, and ears.
7. The human-computer interaction device based on face recognition according to claim 6, characterized in that the data analysis module is further configured to:
transmit the face key point data acquired at the application layer to the framework layer, then to the Libraries or Android Runtime, and finally to the HAL layer;
process the face key point data with an algorithm in the HAL layer; and
match the data processing result against a large-scale data model to obtain the recognition result of the corresponding facial expression.
8. The human-computer interaction device based on face recognition according to claim 5, characterized in that the rendering module is further configured to:
perform expression rendering on the face image through OpenGL according to the face rendering request;
wherein the expression rendering modes comprise planar image rendering, 3D rendering, and sound rendering.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
CN201910871291.4A 2019-09-16 2019-09-16 human-computer interaction method and device based on face recognition and computer equipment Pending CN110568933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910871291.4A CN110568933A (en) 2019-09-16 2019-09-16 human-computer interaction method and device based on face recognition and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910871291.4A CN110568933A (en) 2019-09-16 2019-09-16 human-computer interaction method and device based on face recognition and computer equipment

Publications (1)

Publication Number Publication Date
CN110568933A true CN110568933A (en) 2019-12-13

Family

ID=68780034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910871291.4A Pending CN110568933A (en) 2019-09-16 2019-09-16 human-computer interaction method and device based on face recognition and computer equipment

Country Status (1)

Country Link
CN (1) CN110568933A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111638784A (en) * 2020-05-26 2020-09-08 浙江商汤科技开发有限公司 Facial expression interaction method, interaction device and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872442A (en) * 2016-03-30 2016-08-17 宁波三博电子科技有限公司 Instant bullet screen gift giving method and instant bullet screen gift giving system based on face recognition
WO2018137455A1 (en) * 2017-01-25 2018-08-02 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN109981989A (en) * 2019-04-04 2019-07-05 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image


Similar Documents

Publication Publication Date Title
CN107360327B (en) Speech recognition method, apparatus and storage medium
CN108320744B (en) Voice processing method and device, electronic equipment and computer readable storage medium
US10331965B2 (en) Method, device and computer-readable medium for updating sequence of fingerprint templates for matching
CN108388414B (en) Screen-off control method and device for terminal, computer-readable storage medium and terminal
CN106778175B (en) Interface locking method and device and terminal equipment
CN106293308B (en) Screen unlocking method and device
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN108418969B (en) Antenna feed point switching method and device, storage medium and electronic equipment
WO2018166204A1 (en) Method for controlling fingerprint recognition module, and mobile terminal and storage medium
CN107172267B (en) Fingerprint identification control method and related product
CN110456911B (en) Electronic equipment control method and device, electronic equipment and readable storage medium
US11262911B2 (en) Integrated home key and virtual key area for a smart terminal
CN107317918B (en) Parameter setting method and related product
CN107066374B (en) Data processing method and mobile terminal
JP7221305B2 (en) Object recognition method and mobile terminal
CN106484563B (en) Data migration method and terminal equipment
CN106934003B (en) File processing method and mobile terminal
CN107729857B (en) Face recognition method and device, storage medium and electronic equipment
CN110277097B (en) Data processing method and related equipment
CN110719361B (en) Information transmission method, mobile terminal and storage medium
CN110568933A (en) human-computer interaction method and device based on face recognition and computer equipment
CN109521916B (en) File processing method based on flexible screen, mobile terminal and storage medium
CN108108608B (en) Control method of mobile terminal and mobile terminal
CN108170360B (en) Control method of gesture function and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination