CN115131866A - Stage action and expression sampling system adopting AI technology - Google Patents

Stage action and expression sampling system adopting AI technology

Info

Publication number
CN115131866A
Authority
CN
China
Prior art keywords
facial expression
unit
action
limb
model library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210838342.5A
Other languages
Chinese (zh)
Inventor
耿广悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing 88 Digital Technology Co ltd
Original Assignee
Nanjing 88 Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing 88 Digital Technology Co ltd filed Critical Nanjing 88 Digital Technology Co ltd
Priority to CN202210838342.5A priority Critical patent/CN115131866A/en
Publication of CN115131866A publication Critical patent/CN115131866A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Abstract

The invention discloses a stage action and expression sampling system adopting AI technology, which comprises a control module, a database, a storage module, a limb action acquisition module, a facial expression acquisition module and a display module. Through the cooperation of these modules, the limb action acquisition module compares the collected actions of a dancer with a limb standard action model library and evaluates them, while the facial expression acquisition module compares the collected facial expressions of the dancer with a facial expression emotion model library and evaluates them. The evaluation results produced by the limb action evaluation unit and the facial expression evaluation unit are divided into three grades, namely excellent, good and poor, worth 3 points, 2 points and 1 point respectively, so that the performance of each dancer can be evaluated fairly and impartially.

Description

Stage action and expression sampling system adopting AI technology
Technical Field
The invention belongs to the technical field of stage performance, and particularly relates to a stage action and expression sampling system adopting AI technology.
Background
Artificial intelligence is a system that can interpret external data correctly, learn from the data, and use the learning to improve the ability to achieve specific goals and tasks through flexible adaptation.
Dance is a form of body language: it mainly uses body movements to express human emotion and narrative. As a unique form of expression, dance performance is exaggerated and emotive, can deliver a strong visual and emotional impact to the audience, and has great expressive power; facial expressions further help the performer convey emotion. During a dance performance, rich emotion must be injected into the performance to ensure its success and to hold the audience's attention. A dance performance is not a simple display of body movements; rather, the performer presents characters and expresses emotion through body language according to the content of the work, the plot and the characters. Dance often carries deep thought and emotional meaning and is an important way for people to express emotion. To better record and analyze a dancer's body movements and facial expressions, AI technology can be combined to capture and analyze them.
In current stage performance competitions, dancers are mostly scored on the basis of the subjective impressions and knowledge of judges seated below the stage. Judges may bring their own emotions into the evaluation, which undermines the fairness and impartiality of the result. Relying on the judges' eyes to recognize the body movements and facial expressions of a performing dancer also introduces observation blind spots and viewing-angle deviations, so the final result cannot objectively and clearly reflect the dancer's true level. The present stage action and expression sampling system adopting AI technology is therefore proposed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art. Through the cooperation of the control module, the database, the storage module, the limb action acquisition module, the facial expression acquisition module and the display module, the limb action acquisition module compares the collected actions of a dancer with the limb standard action model library and evaluates them, and the facial expression acquisition module compares the collected facial expressions of the dancer with the facial expression emotion model library and evaluates them. The evaluation results of the limb action evaluation unit and the facial expression evaluation unit are divided into three grades, namely excellent, good and poor, where excellent is worth 3 points, good 2 points and poor 1 point, so that the performance of each dancer can be evaluated fairly and impartially, thereby solving the problems raised in the background art.
In order to achieve the purpose, the invention provides the following technical scheme: a stage action and expression sampling system adopting an AI technology comprises a control module, a database, a storage module, a limb action acquisition module, a facial expression acquisition module and a display module, wherein the control module is respectively electrically connected with the database, the storage module, the limb action acquisition module, the facial expression acquisition module and the display module;
the database comprises a limb standard action model library and a facial expression emotion model library, the limb standard action model library comprises three-dimensional models of a plurality of dance actions, the facial expression emotion model library comprises a plurality of emotion models, and the emotion models comprise admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, embarrassment, boredom, calmness, confusion, contempt, craving, disappointment, disgust, enthusiasm, excitement, stimulation, fear, guilt, horror, interest, joy, nostalgia, pride, relief, romance, sadness, satisfaction, desire, surprise, sympathy and triumph.
The limb action acquisition module can compare the acquired actions of the dancers with a limb standard action model library and evaluate the acquired actions; the limb action acquisition module comprises a limb action acquisition unit, a limb action recognition unit, a limb action comparison unit, a limb action analysis unit and a limb action evaluation unit which are sequentially connected;
the facial expression acquisition module can compare the acquired facial expressions of the dancers with a facial expression emotion model library and evaluate the acquired facial expressions; the facial expression acquisition module comprises a facial expression acquisition unit, a facial expression recognition unit, a facial expression comparison unit, a facial expression analysis unit and a facial expression evaluation unit;
the display module is used for displaying the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit;
the storage module is used for storing the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit, so that the comprehensive evaluation results of a plurality of dancers can be evaluated conveniently;
preferably, the database further includes a storage unit and a network unit, the limb standard motion model library and the facial expression emotion model library are stored in the storage unit, and the network unit is configured to perform data update on the limb standard motion model library and the facial expression emotion model library.
Preferably, the limb standard action model library is connected with the limb action comparison unit, and the facial expression emotion model library is connected with the facial expression comparison unit.
Preferably, the limb action acquisition unit is a first camera, the facial expression acquisition unit comprises a second camera, and both the first camera and the second camera are rotatable cameras, so that automatic tracking of the dancer's actions and expressions is realized.
Preferably, the limb action recognition unit can recognize the limb actions collected by the limb action collection unit and transmit the recognized actions to the limb action comparison unit, the limb action comparison unit compares the collected actions with the limb standard action model library, the limb action analysis unit analyzes the comparison result, and then the limb action evaluation unit evaluates the analysis result and displays the evaluation result on the display module.
Preferably, the facial expression recognition unit can recognize the facial expressions collected by the facial expression collection unit and transmit the recognized expressions to the facial expression comparison unit, the facial expression comparison unit compares the collected expressions with the facial expression emotion model library, the facial expression analysis unit analyzes the comparison result, and then the facial expression evaluation unit evaluates the analysis result and displays the evaluation result on the display module.
Preferably, the display module includes a display screen, and two display interfaces are arranged on the display screen, wherein one interface is used for displaying the evaluation result of the limb movement, and the other interface is used for displaying the evaluation result of the facial expression.
Preferably, the storage unit is configured as a storage chip with a capacity of 1T.
The technical effects and advantages of the invention are as follows:
according to the invention, through the matching of the control module, the database, the storage module, the limb action acquisition module, the facial expression acquisition module and the display module, the limb action acquisition module can compare the acquired actions of the dancers with the limb standard action model base and evaluate the acquired actions, the facial expression acquisition module can compare the acquired facial expressions of the dancers with the facial expression emotion model base and evaluate the acquired facial expressions, and evaluation results made by the limb action evaluation unit and the facial expression evaluation unit are divided into three grades, namely, good, poor, good and good, wherein the good can be obtained by 3, the good can be obtained by 2, the poor can be obtained by 1, and the performance of each dancer can be evaluated fairly and neatly.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the invention provides a stage action and expression sampling system adopting AI technology, which comprises a control module, a database, a storage module, a limb action acquisition module, a facial expression acquisition module and a display module, wherein the control module is respectively electrically connected with the database, the storage module, the limb action acquisition module, the facial expression acquisition module and the display module;
the database comprises a limb standard action model library and a facial expression emotion model library, the limb standard action model library comprises a plurality of three-dimensional models of dance actions, the three-dimensional models of the dance actions can be subjected to data updating, the facial expression emotion model library comprises a plurality of emotion models, and the plurality of emotion models comprise Chinese charms, worship, aesthetic appreciation, entertainment, anger, anxiety, worship, embarrassment, chatlessness, coolness, confusion, eyesight improvement, craving, disappointment, disgust, enthusiasm, excitement, mustiness, stimulation, fear, guilt, fright, interest, happiness, old feeling, proud, relief, romance, sadness, satisfaction, desire, surprise, homonymy and victory.
The limb action acquisition module can compare the acquired actions of the dancers with a limb standard action model library and evaluate the acquired actions; the limb action acquisition module comprises a limb action acquisition unit, a limb action recognition unit, a limb action comparison unit, a limb action analysis unit and a limb action evaluation unit which are sequentially connected, wherein the limb action acquisition unit acquires actions of a dancer, the limb action recognition unit recognizes the acquired actions, the limb action comparison unit compares the acquired actions with a limb standard action model library, the limb action analysis unit analyzes comparison results, and the limb action evaluation unit evaluates the analysis results.
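As a non-limiting sketch, the comparison, analysis and evaluation steps of the sequentially connected limb action units could look as follows; the joint-distance measure, the grading thresholds and the function names are assumptions made for illustration, and the acquisition and recognition units are assumed to deliver a per-frame array of 3D joint positions (for example from an off-the-shelf pose estimator).

import numpy as np

def compare_with_library(pose_sequence, action_library):
    """Limb action comparison unit: find the closest standard action model.
    pose_sequence: array of shape (frames, joints, 3) from the recognition unit;
    action_library: dict mapping action name -> reference array of the same shape."""
    def deviation(reference):
        n = min(len(pose_sequence), len(reference))      # align sequence lengths
        return float(np.mean(np.linalg.norm(pose_sequence[:n] - reference[:n], axis=-1)))
    best = min(action_library, key=lambda name: deviation(action_library[name]))
    return best, deviation(action_library[best])

def analyze_and_evaluate(deviation, thresholds=(0.10, 0.30)):
    """Analysis unit: map the deviation to a grade; evaluation unit: grade -> score."""
    grade = "excellent" if deviation < thresholds[0] else "good" if deviation < thresholds[1] else "poor"
    return grade, {"excellent": 3, "good": 2, "poor": 1}[grade]

# Example: a collected pose sequence compared against two reference actions.
library = {"action_A": np.zeros((30, 17, 3)), "action_B": np.ones((30, 17, 3))}
collected = np.zeros((30, 17, 3)) + 0.05                 # close to action_A
name, dev = compare_with_library(collected, library)
print(name, analyze_and_evaluate(dev))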
The facial expression acquisition module can compare the acquired facial expressions of the dancers with a facial expression emotion model library and evaluate the acquired facial expressions; the facial expression acquisition module comprises a facial expression acquisition unit, a facial expression recognition unit, a facial expression comparison unit, a facial expression analysis unit and a facial expression evaluation unit, the facial expression acquisition unit acquires the expressions of the dancers, the facial expression recognition unit recognizes the acquired expressions, the facial expression comparison unit compares the acquired expressions with a facial expression emotion model library, the facial expression analysis unit analyzes the comparison result, and the facial expression evaluation unit evaluates the analysis result.
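In the same illustrative spirit, the facial expression comparison and evaluation could be sketched as below; representing each emotion model as a feature vector, using cosine similarity, and the 0.9 threshold are assumptions of this sketch rather than values taken from the disclosure.

import numpy as np

def match_emotion(face_features, emotion_library):
    """Facial expression comparison unit: return the best-matching emotion and its similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    best = max(emotion_library, key=lambda label: cosine(face_features, emotion_library[label]))
    return best, cosine(face_features, emotion_library[best])

def evaluate_expression(similarity, expected_emotion, matched_emotion):
    """Analysis + evaluation units: grade how well the expression fits the expected emotion."""
    if matched_emotion == expected_emotion and similarity > 0.9:
        return "excellent", 3
    if matched_emotion == expected_emotion:
        return "good", 2
    return "poor", 1

# Example usage with two toy emotion templates.
library = {"joy": np.array([1.0, 0.2, 0.0]), "sadness": np.array([0.0, 0.1, 1.0])}
features = np.array([0.9, 0.3, 0.1])
label, sim = match_emotion(features, library)
print(label, evaluate_expression(sim, expected_emotion="joy", matched_emotion=label))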
The display module is used for displaying the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit, and the results can be clearly disclosed to the public.
The storage module is used for storing the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit, so that the comprehensive evaluation results of a plurality of dancers can be conveniently compared;
specifically, the evaluation results of the limb movement evaluation unit and the facial expression evaluation unit can be divided into three grades, i.e., good and bad, wherein the good can be divided into 3 points, the good can be divided into 2 points, and the bad can be divided into 1 point, as shown in the following table:
Grade        Score
Excellent    3 points
Good         2 points
Poor         1 point
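A minimal sketch of the grade-to-score mapping in the table above, together with one possible way the storage module could aggregate and compare the comprehensive results of several dancers; summing the two scores per dancer is an assumption made for illustration.

GRADE_SCORES = {"excellent": 3, "good": 2, "poor": 1}

def comprehensive_score(limb_grade, expression_grade):
    """Combine the limb action grade and the facial expression grade into one score."""
    return GRADE_SCORES[limb_grade] + GRADE_SCORES[expression_grade]

def rank_dancers(results):
    """results maps dancer name -> (limb grade, expression grade); returns a ranking."""
    scored = [(name, comprehensive_score(*grades)) for name, grades in results.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

print(rank_dancers({"dancer A": ("excellent", "good"),
                    "dancer B": ("good", "poor")}))
# -> [('dancer A', 5), ('dancer B', 3)]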
the database also comprises a storage unit and a network unit, wherein the limb standard action model library and the facial expression emotion model library are stored in the storage unit, and the network unit is used for updating the data of the limb standard action model library and the facial expression emotion model library.
The limb standard action model library is connected with the limb action comparison unit, so that the collected limb actions can be conveniently compared with the three-dimensional models of dance actions in the limb standard action model library; the facial expression emotion model library is connected with the facial expression comparison unit, so that the collected facial expressions can be conveniently compared with the various expressions and emotions in the facial expression emotion model library, in order to check whether the collected expression conforms to the emotion that the current dance is intended to convey.
The limb action acquisition unit is a first camera, and the facial expression acquisition unit includes a second camera. Both the first camera and the second camera are rotatable cameras, which enables automatic tracking of the dancer's actions and expressions. Specifically, the first camera and the second camera are 1080P pan-tilt (PTZ) smart cameras with functions such as 360-degree panorama, infrared night vision, two-way voice intercom, day/night switching, custom cruise and intelligent motion tracking.
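As an illustrative aside, the intelligent motion tracking of such a rotatable camera can be thought of as a simple proportional control loop like the sketch below; the pan_by/tilt_by interface, the FakePTZ stand-in and the gain value are hypothetical, since real PTZ cameras expose vendor-specific control APIs (for example ONVIF PTZ services).

def tracking_step(bbox, frame_size, ptz, gain=0.1):
    """Steer the camera so the detected dancer stays centred in the frame.
    bbox = (x, y, w, h) of the detected dancer; frame_size = (width, height)."""
    frame_w, frame_h = frame_size
    cx = bbox[0] + bbox[2] / 2                  # centre of the detection
    cy = bbox[1] + bbox[3] / 2
    error_x = (cx - frame_w / 2) / frame_w      # normalised horizontal offset
    error_y = (cy - frame_h / 2) / frame_h      # normalised vertical offset
    ptz.pan_by(gain * error_x)                  # proportional correction toward the centre
    ptz.tilt_by(gain * error_y)

class FakePTZ:
    """Stand-in for a camera's pan-tilt interface, used only to make the sketch runnable."""
    def pan_by(self, amount): print(f"pan  {amount:+.3f}")
    def tilt_by(self, amount): print(f"tilt {amount:+.3f}")

tracking_step(bbox=(1200, 300, 200, 600), frame_size=(1920, 1080), ptz=FakePTZ())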
The limb action identification unit can identify the limb actions collected by the limb action collection unit and transmit the identified actions to the limb action comparison unit, the limb action comparison unit compares the collected actions with a limb standard action model library, meanwhile, the limb action analysis unit analyzes the comparison result, and then the limb action evaluation unit evaluates the analysis result and displays the evaluation result on the display module.
The facial expression recognition unit can recognize the facial expressions collected by the facial expression collection unit, the recognized expressions are transmitted to the facial expression comparison unit, the facial expression comparison unit compares the collected expressions with the facial expression emotion model base, meanwhile, the facial expression analysis unit analyzes the comparison result, then the facial expression evaluation unit evaluates the analysis result, and the evaluation result is displayed on the display module.
The display module comprises a display screen, two display interfaces are arranged on the display screen, one interface is used for displaying the evaluation result of the limb movement, and the other interface is used for displaying the evaluation result of the facial expression.
The storage unit is a storage chip with a capacity of 1T, which allows a large amount of data to be stored.
In summary, the limb action recognition unit of the invention can recognize the limb actions collected by the limb action collection unit and transmit the recognized actions to the limb action comparison unit, the limb action comparison unit compares the collected actions with the limb standard action model library, meanwhile, the limb action analysis unit analyzes the comparison result, and then the limb action evaluation unit evaluates the analysis result and displays the evaluation result on the display module;
the facial expression identification unit can identify the facial expressions collected by the facial expression collection unit and transmit the identified expressions to the facial expression comparison unit, the facial expression comparison unit compares the collected expressions with the facial expression emotion model base, meanwhile, the facial expression analysis unit analyzes the comparison result, then the facial expression evaluation unit evaluates the analysis result and displays the evaluation result on the display module;
the collected limb actions are conveniently compared with the three-dimensional models of dance actions in the limb standard action model library, and the collected facial expressions are conveniently compared with the various expressions and emotions in the facial expression emotion model library to check whether they conform to the emotion the dance is intended to convey. The evaluation results made by the limb action evaluation unit and the facial expression evaluation unit are divided into three grades, namely excellent, good and poor, where excellent scores 3 points, good scores 2 points and poor scores 1 point, so that the performance of each dancer can be evaluated fairly and impartially.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (8)

1. A stage action and expression sampling system adopting an AI technology is characterized by comprising a control module, a database, a storage module, a limb action acquisition module, a facial expression acquisition module and a display module, wherein the control module is respectively electrically connected with the database, the storage module, the limb action acquisition module, the facial expression acquisition module and the display module;
the database comprises a limb standard action model library and a facial expression emotion model library, the limb standard action model library comprises three-dimensional models of a plurality of dance actions, the facial expression emotion model library comprises a plurality of emotion models, and the plurality of emotion models comprise admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, embarrassment, boredom, calmness, confusion, contempt, craving, disappointment, disgust, enthusiasm, excitement, stimulation, fear, guilt, horror, interest, joy, nostalgia, pride, relief, romance, sadness, satisfaction, desire, surprise, sympathy and triumph;
the limb action acquisition module can compare the acquired actions of the dancers with a limb standard action model library and evaluate the acquired actions; the limb action acquisition module comprises a limb action acquisition unit, a limb action recognition unit, a limb action comparison unit, a limb action analysis unit and a limb action evaluation unit which are sequentially connected;
the facial expression acquisition module can compare the acquired facial expressions of the dancers with a facial expression emotion model library and evaluate the acquired facial expressions; the facial expression acquisition module comprises a facial expression acquisition unit, a facial expression recognition unit, a facial expression comparison unit, a facial expression analysis unit and a facial expression evaluation unit;
the display module is used for displaying the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit;
the storage module is used for storing the evaluation results made by the limb action evaluation unit and the facial expression evaluation unit, so that the comprehensive evaluation results of a plurality of dancers can be conveniently compared.
2. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the database also comprises a storage unit and a network unit, the limb standard action model library and the facial expression emotion model library are stored in the storage unit, and the network unit is used for updating data of the limb standard action model library and the facial expression emotion model library.
3. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the limb standard action model library is connected with the limb action comparison unit, and the facial expression emotion model library is connected with the facial expression comparison unit.
4. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the limb action acquisition unit is a first camera, the facial expression acquisition unit comprises a second camera, and both the first camera and the second camera are rotatable cameras, so that automatic tracking of the dancer's actions and expressions is realized.
5. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the limb action identification unit can identify the limb actions collected by the limb action collection unit and transmit the identified actions to the limb action comparison unit, the limb action comparison unit compares the collected actions with a limb standard action model library, meanwhile, the limb action analysis unit analyzes the comparison result, and then the limb action evaluation unit evaluates the analysis result and displays the evaluation result on the display module.
6. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the facial expression recognition unit can recognize the facial expressions collected by the facial expression collection unit and transmit the recognized expressions to the facial expression comparison unit, the facial expression comparison unit compares the collected expressions with the facial expression emotion model base, meanwhile, the facial expression analysis unit analyzes the comparison results, then the facial expression evaluation unit evaluates the analysis results and displays the evaluation results on the display module.
7. A stage action expression sampling system adopting AI technology according to claim 1, characterized in that: the display module comprises a display screen, wherein two display interfaces are arranged on the display screen, one interface is used for displaying the evaluation result of the limb action, and the other interface is used for displaying the evaluation result of the facial expression.
8. A stage action expression sampling system adopting AI technology according to claim 2, characterized in that: the storage unit is a storage chip with a capacity of 1T.
CN202210838342.5A 2022-07-14 2022-07-14 Stage action and expression sampling system adopting AI technology Pending CN115131866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210838342.5A CN115131866A (en) 2022-07-14 2022-07-14 Stage action and expression sampling system adopting AI technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210838342.5A CN115131866A (en) 2022-07-14 2022-07-14 Stage action and expression sampling system adopting AI technology

Publications (1)

Publication Number Publication Date
CN115131866A true CN115131866A (en) 2022-09-30

Family

ID=83384835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210838342.5A Pending CN115131866A (en) 2022-07-14 2022-07-14 Stage action and expression sampling system adopting AI technology

Country Status (1)

Country Link
CN (1) CN115131866A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241909A (en) * 2018-09-06 2019-01-18 闫维新 A kind of long-range dance movement capture evaluating system based on intelligent terminal
CN110215218A (en) * 2019-06-11 2019-09-10 北京大学深圳医院 A kind of wisdom wearable device and its mood identification method based on big data mood identification model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination