CN114419285A - Virtual character performance control method and system applied to composite theater - Google Patents

Virtual character performance control method and system applied to composite theater

Info

Publication number
CN114419285A
Authority
CN
China
Prior art keywords
action
instruction
virtual character
computer system
feedback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111398567.5A
Other languages
Chinese (zh)
Inventor
李红红
龙光
项智亮
刘勇
李景霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3d New Culture Co ltd Ningbo China
Original Assignee
3d New Culture Co ltd Ningbo China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3d New Culture Co ltd Ningbo China filed Critical 3d New Culture Co ltd Ningbo China
Publication of CN114419285A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Abstract

The invention provides a virtual character performance control method and system applied to a composite theater. The control method comprises the following steps: (a) generating a virtual character, and projecting the virtual character in a specific space through an imaging system to form a preset virtual character model; (b) acquiring an action instruction: the computer system detects whether the console has an action instruction input signal; if so, it proceeds to the next step of calling the action instruction; if not, it continues to wait; (c) calling an action instruction: the computer system calls the corresponding action instruction in an instruction library according to the input signal from the console, outputs it to the imaging system, and controls the virtual character to complete the corresponding action according to the action instruction; (d) outputting a feedback instruction: the computer system controls the monitoring equipment to collect the sound or images of the auditorium and outputs a feedback instruction to control the virtual character to complete the corresponding performance action; (e) after the virtual character completes an action instruction, steps (b) to (d) are executed cyclically until all action instructions are completed. The invention forms virtual characters through stage imaging and, by controlling them, achieves real-time interaction with the live audience.

Description

Virtual character performance control method and system applied to composite theater
Technical Field
The invention relates to the technical field of computer control and Augmented Reality (AR), in particular to a virtual character performance control method and system applied to a composite theater.
Background
At present, with the application of Augmented Reality (AR) technology in theaters, stages and other scenes, strong interactive visual effects and participation sense can be brought to audiences. The AR technology generates virtual objects which do not exist in the real environment by means of a computer graphics technology and a visualization technology, accurately places the virtual objects into the real environment, integrates the virtual objects and the real environment into a whole by means of display equipment, presents a new environment with real sensory effect to audiences, and has the characteristics of virtual-real combination, real-time interaction, three-dimensional imaging and the like.
The existing composite theater is a theater which integrates multiple functions of a fantasy theater, a science popularization theater, a common performance theater, a conference report hall and the like. For example, in the existing places such as museums and exhibition halls, special interpreters are needed for on-site explanation, and the traditional real-person explanation not only increases the labor cost, but also easily causes the interpreters to feel tired. The appearance of virtual hosts (such as virtual hosts, virtual cartoon roles and the like) can solve the problems, so that audiences feel strong technological feelings, the form is novel, more contents can be displayed in a limited space by using a digital technology, and the information content is rich. However, the existing virtual host usually adopts pre-recorded audio and video materials, and then projects and shows the audio and video materials through the AR technology, so that the audio and video materials cannot interact and communicate with audiences in real time.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a virtual character performance control method and system applied to a composite theater, which can realize the field interaction and communication with audiences through the control of virtual characters.
The technical scheme adopted by the invention is as follows: a virtual character performance control method applied to a composite theater comprises a performance stage, an imaging system, a computer system and a console which are arranged on the performance stage, and auditoriums in the theater, and the control method comprises the following steps:
(a) generating virtual figures, and projecting the virtual figures in a specific space of a theater through an imaging system to form a preset virtual figure model;
(b) acquiring an action instruction, detecting whether an input signal of the action instruction exists in the console by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait;
(c) calling an action instruction, calling a corresponding action instruction in an instruction library by the computer system according to an input signal of the console, outputting the corresponding action instruction to the imaging system, and controlling the virtual character to complete a corresponding performance action according to the action instruction;
(d) outputting a feedback instruction, wherein the computer system controls the monitoring equipment to collect the sound or the image of the auditorium and outputs the feedback instruction to control the virtual character to complete the corresponding performance action;
(e) and (d) after the virtual character completes one action command, circularly executing the steps (b) to (d) until all the action commands are completed.
Further, the computer system further comprises a voice analysis module and/or an image analysis module, which is used for calculating and obtaining the feedback instruction in the step (d) according to the sound or image information collected by the monitoring equipment. Specifically, the feedback instruction is obtained by:
(1) collecting scene images of on-site auditoriums, collecting facial expressions of each audience by taking a single face as a unit, extracting expression characteristics A of audience characters through a facial feature recognition algorithm, and simultaneously recording time information T1 for collecting the expression characteristics A;
(2) collecting sound signals of the on-site auditorium, filtering environmental noise, extracting a live effective sound signal B according to a preset threshold value, and simultaneously recording time information T2 for collecting the sound signal B;
(3) the computer system collects the expression characteristics A, the sound signals B and the time information T1 and T2, combines the expression characteristics A and the sound signals B in the same time interval and generates feedback information C of the on-site auditorium;
(4) the computer system outputs a feedback instruction D according to the feedback information C of the on-site auditorium;
(5) and after receiving the feedback instruction D, the computer system calls a corresponding action instruction in the instruction library, outputs the action instruction to the imaging system and controls the virtual character to complete a corresponding performance action according to the action instruction.
Based on the above method for controlling the actions of the virtual character, the present invention further provides a virtual character performance control system applied to a composite theater, which includes a performance stage, an imaging system, a computer system and a console installed on the performance stage, and auditorium in the theater, wherein the control system includes:
the virtual character generating unit is used for forming a preset virtual character model through projection in a specific space of a theater through an imaging system;
the action instruction acquisition unit is used for detecting whether the console has an input signal of an action instruction or not by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait;
the action instruction calling unit is used for calling corresponding action instructions in the instruction library by the computer system according to input signals of the console, outputting the corresponding action instructions to the imaging system and controlling the virtual character to complete corresponding performance actions according to the action instructions;
and the feedback instruction output unit is used for controlling the monitoring equipment to collect the sound or the image of the auditorium by the computer system and outputting a feedback instruction to control the virtual character to complete the corresponding performance action.
Furthermore, the control method also comprises a voice instruction, and when the computer system controls the virtual character to execute the corresponding action instruction, sound corresponding to the action instruction is synchronously output through the voice playing device. And after being recorded by the voice receiver, the voice command is converted by the preset sound changing equipment and the preset sound adjusting equipment and then is output to the voice playing equipment.
Furthermore, the control method also comprises a monitoring device which is used for monitoring the sound or the image of the site and sending a corresponding action instruction through the console; or generating a feedback instruction through a preset algorithm of the computer system, and controlling the virtual character to complete the corresponding performance action.
The invention overcomes the defects of the prior art, utilizes the virtual stage imaging system to form virtual figures (including a virtual host, various virtual cartoon characters and the like), inputs corresponding action instructions through the console, and then controls the imaging system to control the virtual figures through the computer system, thereby realizing real-time interaction with field audiences and increasing the interaction effect and the interest of the stage. Furthermore, the invention acquires the facial expression, voice, action and other information of the on-site audience, analyzes the information by using the preset algorithm of the computer system to obtain feedback information, and controls the virtual character to complete the corresponding action by using the feedback information, thereby realizing the real-time interaction and communication between the virtual character and the on-site audience.
Drawings
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for controlling actions of a virtual character according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of a motion control system of a virtual character according to an embodiment of the present invention.
Fig. 3 is a schematic view of a scenario applied to a theater according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that descriptions such as "first" and "second" in the embodiments of the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that such combinations can be realized by a person skilled in the art; when the technical solutions are contradictory or cannot be realized, the combination should be deemed not to exist and falls outside the protection scope of the present invention.
As shown in fig. 1, the virtual character performance control method applied to a composite theater according to the embodiment of the present invention includes a performance stage, an imaging system, a computer system and a console disposed on the performance stage, and auditorium in the theater, and includes the following steps:
(a) generating virtual figures, and projecting the virtual figures in a specific space of a theater through an imaging system to form a preset virtual figure model;
(b) acquiring an action instruction, detecting whether an input signal of the action instruction exists in the console by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait;
(c) calling an action instruction, calling a corresponding action instruction in an instruction library by the computer system according to an input signal of the console, outputting the corresponding action instruction to the imaging system, and controlling the virtual character to complete a corresponding performance action according to the action instruction;
(d) outputting a feedback instruction, wherein the computer system controls the monitoring equipment to collect the sound or the image of the auditorium and outputs the feedback instruction to control the virtual character to complete the corresponding performance action;
(e) and (d) after the virtual character completes one action command, circularly executing the steps (b) to (d) until all the action commands are completed.
The action instructions in the instruction library are specific action codes or action images designed in advance for the virtual character through computer programming. The instruction library is stored in a memory of the computer system, and each action instruction in the library corresponds to one input command on the console. Preferably, the input commands of the console are entered manually by a background operator.
As shown in fig. 2, the present invention is based on the above-mentioned method for controlling actions of virtual characters, and further provides a virtual character performance control system applied to a composite theater, which includes a performance stage, an imaging system, a computer system and a console installed on the performance stage, and auditorium in the theater, the control system including: the virtual character generating unit is used for forming a preset virtual character model through projection in a specific space of a theater through an imaging system; the action instruction acquisition unit is used for detecting whether the console has an input signal of an action instruction or not by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait; the action instruction calling unit is used for calling corresponding action instructions in the instruction library by the computer system according to input signals of the console, outputting the corresponding action instructions to the imaging system and controlling the virtual character to complete corresponding performance actions according to the action instructions; and the feedback instruction output unit is used for controlling the monitoring equipment to collect the sound or the image of the auditorium by the computer system and outputting a feedback instruction to control the virtual character to complete the corresponding performance action.
In the control system, the imaging system is used for projecting and forming a preset virtual character in a specific space; the console is used for inputting action instructions for controlling the virtual character; the computer system is used for detecting an action instruction input signal of the console, calling a corresponding action instruction in the instruction library according to the input signal and outputting the action instruction to the imaging system, and controlling the virtual character to complete a corresponding action according to the action instruction.
The console is selected from any one or more of a computer keyboard, an operating handle, an operating rod, a wearable device, a voice recognition device and a mouse; in this embodiment, an operating handle is preferred. After the preset character action instructions are designed through computer programming, each action instruction is assigned to a control key of the console, and a background operator inputs commands manually. For example, pressing the "left key" of the operating handle gives the virtual character an instruction to turn left, and pressing the "up key" gives the virtual character an instruction to jump; the remaining mappings are not listed one by one.
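As a non-limiting illustration of the mapping between console keys and the instruction library, and of the polling loop of steps (b), (c) and (e), a minimal Python sketch follows. All names, key labels and durations are illustrative assumptions and do not come from the patent; the console driver and the link to the imaging system are left as placeholders.

    # Minimal sketch (illustrative only): an instruction library keyed by console
    # buttons and the polling loop of steps (b), (c) and (e).
    import time

    # Each console input command maps to one pre-designed action instruction.
    INSTRUCTION_LIBRARY = {
        "left_key":  {"action": "turn_left",  "duration_s": 1.0},
        "up_key":    {"action": "jump",       "duration_s": 0.8},
        "right_key": {"action": "turn_right", "duration_s": 1.0},
    }

    def read_console_input():
        """Placeholder for the console driver; returns a key name or None."""
        return None  # a real system would poll the operating handle here

    def send_to_imaging_system(instruction):
        """Placeholder: forward the action instruction to the imaging system."""
        print("imaging system <-", instruction["action"])

    def control_loop(show_finished):
        while not show_finished():
            # Step (b): check the console for an action instruction input signal.
            key = read_console_input()
            if key is None:
                time.sleep(0.05)  # no input signal: keep waiting
                continue
            # Step (c): call the corresponding action instruction and output it.
            instruction = INSTRUCTION_LIBRARY.get(key)
            if instruction is not None:
                send_to_imaging_system(instruction)
                time.sleep(instruction["duration_s"])  # let the character finish
            # Step (e): loop back to step (b) until all instructions are done.

In practice, read_console_input would be backed by whichever console device is chosen above, and send_to_imaging_system by the interface of the imaging system actually deployed.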
The control method also comprises a voice instruction, and when the computer system controls the virtual character to execute the corresponding action instruction, sound corresponding to the action instruction is synchronously output through the voice playing equipment. In order to make the output sound more attractive and adapt to different application scenes, the voice command is recorded by the voice receiver, converted by the preset sound changing device and the preset sound tuning device and output to the voice playing device, so that various sounds can be produced by matching with the action of the virtual character.
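The voice chain described above (voice receiver, sound changing and tuning equipment, voice playing equipment) can be approximated, purely for illustration, by the following Python sketch. The resampling-based pitch change is a crude stand-in for dedicated sound changing and tuning equipment (it also alters duration); all names and parameter values are assumptions.

    # Rough sketch of the voice chain: record -> change voice -> play.
    import numpy as np

    def change_voice(samples, pitch_factor):
        """Naive pitch change by resampling the recorded voice command."""
        n_out = int(len(samples) / pitch_factor)
        old_idx = np.linspace(0, len(samples) - 1, num=n_out)
        return np.interp(old_idx, np.arange(len(samples)), samples)

    # A 440 Hz test tone stands in for a recorded voice command.
    sr = 16000
    t = np.arange(sr) / sr
    voice_command = np.sin(2 * np.pi * 440 * t)

    # pitch_factor > 1 gives a higher, "cartoon" voice for the virtual character.
    cartoon_voice = change_voice(voice_command, pitch_factor=1.5)
    # cartoon_voice would then be sent to the voice playing equipment in sync
    # with the corresponding action instruction.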
Furthermore, the control method further comprises monitoring equipment (comprising monitoring earphones and a camera installed inside the theater) for monitoring sound or images of the scene and sending corresponding action instructions through the console.
Further, the computer system further comprises a voice analysis module and/or an image analysis module, which is used for calculating and obtaining the feedback instruction in the step (d) according to the sound or image information collected by the monitoring equipment. Specifically, the feedback instruction is obtained by:
(1) collecting scene images of on-site auditoriums, collecting facial expressions of each audience by taking a single face as a unit, extracting expression characteristics A of audience characters through a facial feature recognition algorithm, and simultaneously recording time information T1 for collecting the expression characteristics A;
(2) collecting sound signals of the on-site auditorium, filtering environmental noise, extracting a live effective sound signal B according to a preset threshold value, and simultaneously recording time information T2 for collecting the sound signal B;
(3) the computer system collects the expression characteristics A, the sound signals B and the time information T1 and T2, combines the expression characteristics A and the sound signals B in the same time interval and generates feedback information C of the on-site auditorium;
(4) the computer system outputs a feedback instruction D according to the feedback information C of the on-site auditorium;
(5) and after receiving the feedback instruction D, the computer system calls a corresponding action instruction in the instruction library, outputs the action instruction to the imaging system and controls the virtual character to complete a corresponding performance action according to the action instruction.
Further, the feedback instruction D is calculated by the following formula:
D = G(C) = G[F(A, B, T1, T2)], wherein:
A = E(x1, x2, ..., xn)
B = V(y, α, θ)
C = F(A, B, T1, T2)
thus: D = G[F(E(x1, x2, ..., xn), V(y, α, θ), T1, T2)]
In the above formulas, E() represents the function for extracting the facial expression features of the audience members; xi represents the face image of the i-th audience member, i = 1, ..., n, where n is the total number of live audience members; V() represents the function for extracting the effective live sound, y represents the sound signal of the live auditorium, α represents the ambient noise, and θ represents the preset threshold; F() represents the function that obtains the feedback information of the live auditorium from the expression features, the sound signal and the time information; G() represents the function that generates the feedback instruction from the feedback information C.
Specifically, after an image or video of an auditorium is shot by a field image monitoring device (such as a camera), a corresponding image is extracted by a computer system according to a preset time interval, facial recognition is carried out, and expression features (such as smile, laugh, surprise, cry and the like) of the auditorium are judged by comparing with an expression feature database of a background. In addition, after sound monitoring equipment (such as monitoring earphones and the like) collects the sound of the auditorium, corresponding sound segments are extracted through a computer system according to a preset time interval, effective sound signals are obtained after noise filtering and voice recognition are carried out, and sound characteristics (such as applause sound, cheering sound, surprise sound and the like) of the auditorium are judged through comparison with a background sound characteristic database. And then, extracting the expression characteristics and the sound characteristics of the live audience in the same time period by using a preset algorithm of a computer system, analyzing to obtain a corresponding feedback instruction, and controlling the virtual character to complete a corresponding action by using the feedback instruction. The feedback instructions are obtained by the computer system according to feedback information (expression characteristics and sound characteristics) of audiences through calculation according to the formula, corresponding action instructions in an instruction library are called according to each instruction and output to the imaging system, and the virtual character is controlled to complete corresponding performance actions according to the action instructions, so that interaction and communication between the virtual character and the audiences on site are realized.
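To make the composition D = G[F(E(x1, ..., xn), V(y, α, θ), T1, T2)] concrete, the following Python sketch wires placeholder feature extractors into that chain. The expression and sound classifiers are stubs standing in for the facial recognition, noise filtering and database comparison described above; all function names, labels and thresholds are illustrative assumptions, not part of the patent.

    # Sketch of D = G(F(E(x1..xn), V(y, alpha, theta), T1, T2)) with stub extractors.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Feedback:          # feedback information C of the live auditorium
        expression: str      # dominant expression feature A (e.g. "laugh", "surprise")
        sound: str           # dominant sound feature B (e.g. "applause", "cheer")
        t1: float            # capture time of A
        t2: float            # capture time of B

    def extract_expressions(face_images: List[object]) -> str:      # E(x1, ..., xn)
        return "laugh"       # placeholder: majority expression over detected faces

    def extract_sound(signal, noise_level: float, threshold: float) -> str:  # V(y, alpha, theta)
        return "applause" if noise_level < threshold else "none"    # placeholder

    def combine(a: str, b: str, t1: float, t2: float) -> Feedback:  # F(A, B, T1, T2)
        return Feedback(a, b, t1, t2)

    def to_instruction(c: Feedback) -> str:                         # G(C)
        # Map the combined audience feedback to an action instruction in the library.
        if c.expression == "laugh" and c.sound == "applause":
            return "bow_and_thank_audience"
        return "continue_performance"

    # One feedback cycle:
    A = extract_expressions(face_images=[])    # images from the auditorium camera
    B = extract_sound(signal=None, noise_level=0.1, threshold=0.5)
    D = to_instruction(combine(A, B, t1=12.0, t2=12.1))
    print(D)    # -> "bow_and_thank_audience"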
Further, the feedback instruction of the present embodiment may be obtained by:
(1) collecting scene images of the live auditorium, collecting the limb actions of each audience member by taking a single person as a unit, extracting action features M of the audience members through an action feature recognition algorithm, and simultaneously recording time information T3 for collecting the action features M;
(2) the computer system collects the action characteristics M and the time information T3 to generate feedback information N of the on-site auditorium;
(3) the computer system outputs a feedback instruction DN according to the feedback information N of the on-site auditorium;
(4) after receiving the feedback instruction DN, the computer system calls the corresponding action instruction in the instruction library, outputs the action instruction to the imaging system, and controls the virtual character to complete the corresponding performance action according to the action instruction.
Wherein the feedback instruction DN is obtained by the following formula:
DN = GN(N) = GN[FN(M, T3)]
M = EN(z1, z2, ..., zn)
thus: DN = GN[FN(EN(z1, z2, ..., zn), T3)]
In the above formulas, EN() represents the function for extracting the action features of the audience members, zi represents the action of the i-th audience member, i = 1, ..., n, where n is the total number of live audience members; FN() represents the function that obtains the feedback information of the live auditorium from the action features and the time information; GN() represents the function that generates the feedback instruction from the feedback information N.
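A companion sketch for the motion-based path DN = GN[FN(EN(z1, ..., zn), T3)] is given below; the body-motion classifier is again a placeholder, and all names are illustrative assumptions.

    # Sketch of DN = GN(FN(EN(z1..zn), T3)) with a stub motion classifier.
    from typing import List, Tuple

    def extract_motion(audience_poses: List[object]) -> str:         # EN(z1, ..., zn)
        return "waving"        # placeholder: dominant limb action across the audience

    def motion_feedback(m: str, t3: float) -> Tuple[str, float]:     # FN(M, T3)
        return (m, t3)

    def motion_to_instruction(n: Tuple[str, float]) -> str:          # GN(N)
        return "wave_back" if n[0] == "waving" else "continue_performance"

    D_N = motion_to_instruction(motion_feedback(extract_motion([]), t3=30.0))
    print(D_N)    # -> "wave_back"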
As shown in fig. 3, an embodiment of the present invention is used in a theater with auditorium 8, stage 7, and imaging system above the stage. The imaging system further comprises an imaging film 3 and an LED screen 4, a virtual character image generated by the computer system is displayed on the LED screen 4, then a virtual character 1 (such as a virtual host, a cartoon character and the like) is formed on the stage through the imaging film 3, and the virtual character can be controlled to make corresponding actions through instructions sent by the control console. In addition, in order to make the stage effect better, a projection device 2 is also provided, which can project various virtual scenes through a screen 9 at the back of the stage. In addition, a liftable curtain 5, lighting equipment 6 and the like are arranged at the front end of the stage 7 to enrich the stage effect.
Further, the computer system in the control system according to the present invention includes a memory for storing an instruction library.
Further, each action command in the command library corresponds to an input command on the console.
Further, the input command of the console is input by a background operator in a manual mode.
Furthermore, the control system also comprises a voice playing device for receiving and outputting the voice command, and the voice playing device is used for synchronously outputting the sound corresponding to the action command through the voice playing device while the computer system controls the virtual character to execute the corresponding action command.
Further, the voice command is recorded by the voice receiver, converted by the preset sound changing device and the preset sound tuning device and then output to the voice playing device.
Furthermore, the control system also comprises a monitoring device which is used for monitoring the sound or the image on site and sending a corresponding feedback instruction through the console, and the monitoring device is used for controlling the virtual character to complete a corresponding performance action. Further, the control system further comprises a monitoring device for monitoring the sound or facial expression images or motion characteristics of the scene and generating a feedback instruction through a preset algorithm of the computer system for controlling the virtual character to complete the corresponding performance action.
The embodiment of the invention is particularly suitable for being used in places such as theaters, museums, exhibition halls and the like, the virtual host image is formed through AR technology imaging, then the control method and the control system are used, the corresponding action instruction is input into the console behind the screen, and the imaging system is controlled by the computer system to control the virtual host, so that the real-time interaction with the field audience is realized, and the interaction effect and the interestingness of the stage are increased.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A virtual character performance control method applied to a composite theater comprises a performance stage, an imaging system, a computer system and a console which are arranged on the performance stage, and auditoriums in the theater, and is characterized in that the control method comprises the following steps:
(a) generating virtual figures, and projecting the virtual figures in a specific space of a theater through an imaging system to form a preset virtual figure model;
(b) acquiring an action instruction, detecting whether an input signal of the action instruction exists in the console by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait;
(c) calling an action instruction, calling a corresponding action instruction in an instruction library by the computer system according to an input signal of the console, outputting the corresponding action instruction to the imaging system, and controlling the virtual character to complete a corresponding performance action according to the action instruction;
(d) outputting a feedback instruction, wherein the computer system controls the monitoring equipment to collect the sound or the image of the auditorium and outputs the feedback instruction to control the virtual character to complete the corresponding performance action;
(e) and (d) after the virtual character completes one action command, circularly executing the steps (b) to (d) until all the action commands are completed.
2. The virtual character performance control method applied to a composite theater as claimed in claim 1, wherein the feedback instruction is obtained by:
(1) collecting scene images of on-site auditoriums, collecting facial expressions of each audience by taking a single face as a unit, extracting expression characteristics A of audience characters through a facial feature recognition algorithm, and simultaneously recording time information T1 for collecting the expression characteristics A;
(2) collecting sound signals of the on-site auditorium, filtering environmental noise, extracting a live effective sound signal B according to a preset threshold value, and simultaneously recording time information T2 for collecting the sound signal B;
(3) the computer system collects the expression characteristics A, the sound signals B and the time information T1 and T2, combines the expression characteristics A and the sound signals B in the same time interval and generates feedback information C of the on-site auditorium;
(4) the computer system outputs a feedback instruction D according to the feedback information C of the on-site auditorium;
(5) and after receiving the feedback instruction D, the computer system calls a corresponding action instruction in the instruction library, outputs the action instruction to the imaging system and controls the virtual character to complete a corresponding performance action according to the action instruction.
3. The virtual character performance control method applied to a composite theater as set forth in claim 2, wherein the feedback instruction D is calculated by the following formula:
D=G(C)=G[F(A,B,T1,T2)],
A=E(x1,x2,...,xn)
B=V(y,α,θ)
C=F(A,B,T1,T2)
wherein: e () represents a function for extracting the facial expressive features of the audience character, xiA face image representing the ith audience, i 1.. and n, n being the total number of live audience persons; v () represents a function for extracting a live effective sound, y represents a sound signal of a live viewer, α represents an ambient noise, and θ represents a preset threshold; f () represents a function for obtaining feedback information of the on-site auditorium according to the expression characteristics, the sound signal and the time information; g () represents a function that generates a feedback instruction from the feedback information.
4. The virtual character performance control method applied to a composite theater as claimed in claim 1, wherein the feedback instruction is further obtained by:
(1) collecting scene images of the live auditorium, collecting the limb actions of each audience member by taking a single person as a unit, extracting action features M of the audience members through an action feature recognition algorithm, and simultaneously recording time information T3 for collecting the action features M;
(2) the computer system collects the action characteristics M and the time information T3 to generate feedback information N of the on-site auditorium;
(3) the computer system outputs a feedback instruction DN according to the feedback information N of the on-site auditorium;
(4) after receiving the feedback instruction DN, the computer system calls the corresponding action instruction in the instruction library, outputs the action instruction to the imaging system, and controls the virtual character to complete the corresponding performance action according to the action instruction;
preferably, the feedback instruction DN is obtained by the following formula:
DN = GN(N) = GN[FN(M, T3)]
M = EN(z1, z2, ..., zn)
wherein: EN() represents the function for extracting the action features of the audience members, zi represents the action of the i-th audience member, i = 1, ..., n, where n is the total number of live audience members; FN() represents the function that obtains the feedback information of the live auditorium from the action features and the time information; GN() represents the function that generates the feedback instruction from the feedback information N.
5. A virtual character performance control system applied to a composite theater comprises a performance stage, an imaging system, a computer system and a console which are arranged on the performance stage, and auditorium in the theater, and is characterized in that the control system comprises:
the virtual character generating unit is used for forming a preset virtual character model through projection in a specific space of a theater through an imaging system;
the action instruction acquisition unit is used for detecting whether the console has an input signal of an action instruction or not by the computer system, and if so, continuing to execute the next step of calling the action instruction; if not, continuing to wait;
the action instruction calling unit is used for calling corresponding action instructions in the instruction library by the computer system according to input signals of the console, outputting the corresponding action instructions to the imaging system and controlling the virtual character to complete corresponding performance actions according to the action instructions;
and the feedback instruction output unit is used for controlling the monitoring equipment to collect the sound or the image of the auditorium by the computer system and outputting a feedback instruction to control the virtual character to complete the corresponding performance action.
6. The virtual character performance control system applied to the composite theater as claimed in claim 5, wherein the console is selected from any one or more of a computer keyboard, an operation handle, an operation rod, a wearable device, a voice recognition device and a mouse.
7. The virtual character performance control system as applied to a composite theater as set forth in claim 5, wherein the computer system comprises a memory for storing a library of instructions.
8. The virtual character performance control system as claimed in claim 5, wherein each action command in the command library corresponds to an input command on the console.
9. The virtual character performance control system as applied to a composite theater as set forth in claim 5, wherein the input command of the console is manually input by a back-office operator.
10. The virtual character performance control system as claimed in claim 5, wherein the control system further comprises a voice playing device for receiving and outputting voice commands, for synchronously outputting sounds corresponding to the motion commands through the voice playing device while the computer system controls the virtual character to execute the corresponding motion commands;
preferably, after being recorded by a voice receiver, the voice command is converted by preset sound changing equipment and preset sound adjusting equipment and then output to voice playing equipment;
preferably, the control system further comprises a monitoring device for monitoring the sound or image on site and sending a corresponding feedback instruction through the console for controlling the virtual character to complete a corresponding performance action;
preferably, the control system further comprises a monitoring device for monitoring the sound or facial expression image or motion characteristics of the scene, and generating a feedback instruction through a preset algorithm of the computer system, so as to control the virtual character to complete the corresponding performance action.
CN202111398567.5A 2020-11-23 2021-11-23 Virtual character performance control method and system applied to composite theater Pending CN114419285A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011326165 2020-11-23
CN202011326165X 2020-11-23

Publications (1)

Publication Number Publication Date
CN114419285A true CN114419285A (en) 2022-04-29

Family

ID=81266487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111398567.5A Pending CN114419285A (en) 2020-11-23 2021-11-23 Virtual character performance control method and system applied to composite theater

Country Status (1)

Country Link
CN (1) CN114419285A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619912A (en) * 2022-10-27 2023-01-17 深圳市诸葛瓜科技有限公司 Cartoon character display system and method based on virtual reality technology


Similar Documents

Publication Publication Date Title
CN106648083B (en) Enhanced playing scene synthesis control method and device
US5790124A (en) System and method for allowing a performer to control and interact with an on-stage display device
US9143721B2 (en) Content preparation systems and methods for interactive video systems
US7015934B2 (en) Image displaying apparatus
US8958686B2 (en) Information processing device, synchronization method, and program
CN103686450A (en) Video processing method and system
CN109564760A (en) It is positioned by 3D audio to generate the method and apparatus that virtual or augmented reality is presented
KR100748060B1 (en) Internet broadcasting system of Real-time multilayer multimedia image integrated system and Method thereof
KR102186607B1 (en) System and method for ballet performance via augumented reality
US7554542B1 (en) Image manipulation method and system
US6072478A (en) System for and method for producing and displaying images which are viewed from various viewpoints in local spaces
CN110750161A (en) Interactive system, method, mobile device and computer readable medium
JP2005527158A (en) Presentation synthesizer
CN103533445A (en) Flying theater playing system based on active interaction
JP2005094713A (en) Data display system, data display method, program and recording medium
CN114419285A (en) Virtual character performance control method and system applied to composite theater
WO2022221902A1 (en) System and method for performance in a virtual reality environment
WO2020234939A1 (en) Information processing device, information processing method, and program
CN114356090B (en) Control method, control device, computer equipment and storage medium
CN116962746A (en) Online chorus method and device based on continuous wheat live broadcast and online chorus system
CN218103295U (en) Performance control system for composite theater
Lombardo et al. Archeology of multimedia
JP3179318B2 (en) Information presentation device
JP3523784B2 (en) Interactive image operation display apparatus and method, and program storage medium
WO2021059770A1 (en) Information processing device, information processing system, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination