CN113253836A - Teaching method and system based on artificial intelligence and virtual reality - Google Patents

Teaching method and system based on artificial intelligence and virtual reality

Info

Publication number
CN113253836A
CN113253836A
Authority
CN
China
Prior art keywords
virtual
voice
action
library
target character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110300516.8A
Other languages
Chinese (zh)
Inventor
段先知
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unicom Woyuedu Technology Culture Co Ltd
Original Assignee
Unicom Woyuedu Technology Culture Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unicom Woyuedu Technology Culture Co Ltd
Priority to CN202110300516.8A
Publication of CN113253836A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 — Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 — Services
    • G06Q 50/20 — Education
    • G06Q 50/205 — Education administration or guidance
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 — Simulators for teaching or training purposes

Abstract

The invention discloses a teaching method and system based on artificial intelligence and virtual reality. The method comprises the following steps: synthesizing a virtual voice library and a virtual action library of a target character from the voice characteristic data and the action characteristic data of the target character, respectively; acquiring a mapping relation between the virtual voice library and the virtual action library; constructing a virtual character model of the target character based on the mapping relation; acquiring corresponding scene elements from preset teaching content; and synthesizing a virtual teaching scene based on the virtual character model and the scene elements, and sending it to a virtual reality device for display. The method realizes scenario-based education, meets the user's need to acquire learning information quickly and conveniently, enhances the user's interest and enthusiasm for learning, and improves the learning experience.

Description

Teaching method and system based on artificial intelligence and virtual reality
Technical Field
The invention relates to the field of education and teaching, and more particularly to a teaching method and system based on artificial intelligence and virtual reality.
Background
In real teaching scenarios, students' choice of teacher is limited, and a teacher's appearance and voice are fixed. Research shows that lessons taught by a teacher the students love are more likely to stimulate their interest in learning.
Therefore, there is a need for a virtual teaching scene in which students can independently choose the appearance and voice of the teacher who gives their lessons, thereby realizing scenario-based education, meeting users' need to acquire learning information quickly and conveniently, enhancing their interest and enthusiasm for learning, and improving the learning experience.
Disclosure of Invention
The application aims to provide a teaching method and system based on artificial intelligence and virtual reality, so as to solve at least one of the problems in the prior art.
To achieve this purpose, the following technical solutions are adopted in the application:
the application provides a teaching method based on artificial intelligence and virtual reality in a first aspect, and the teaching method comprises the following steps:
respectively synthesizing a virtual voice library and a virtual action library of the target character according to the voice characteristic data and the action characteristic data of the target character;
acquiring a mapping relation between the virtual voice library and the virtual action library;
constructing a virtual character model of the target character based on the mapping relation;
acquiring corresponding scene elements through preset teaching contents;
and synthesizing a virtual teaching scene based on the virtual character model and the scene elements to be sent to virtual reality equipment for displaying.
In one possible implementation manner, before the synthesizing the virtual speech library and the virtual motion library of the target character according to the speech characteristic data and the motion characteristic data of the target character respectively, the method includes:
acquiring voice characteristic data through voice extraction according to training voice data which is sent by a target person and is based on preset voice content;
and acquiring the action characteristic data through action decomposition according to the training action data based on the preset action content displayed by the target person.
In one possible implementation manner, the synthesizing the virtual voice library and the virtual motion library of the target person according to the voice characteristic data and the motion characteristic data of the target person respectively includes:
synthesizing a virtual voice library of the target character through artificial intelligence based on the voice feature data;
and synthesizing the virtual action library of the target character through artificial intelligence based on the action characteristic data.
In one possible implementation manner, the obtaining the mapping relationship between the virtual voice library and the virtual action library includes:
generating a corresponding relation table between the virtual voice library and the virtual action library based on the matching relation between the preset voice content and the preset action content;
and acquiring the mapping relation between the virtual voice library and the virtual action library by calling the corresponding relation table.
In one possible implementation manner, the obtaining of the corresponding scene element through the preset teaching content includes:
analyzing preset teaching contents to obtain scene elements, wherein the scene elements comprise:
theme background, music background and scene animation.
In one possible implementation, the constructing the virtual character model of the target character based on the mapping relationship includes:
and constructing a virtual character model of the target character based on a 3D imaging technology and the mapping relation between the virtual voice library and the virtual action library.
In one possible implementation, the voice feature data includes:
the pitch, speed, timbre, stress, length, and rise and fall of the target character's voice.
In one possible implementation, the action characteristic data includes:
facial expressions and limb movements of the target character.
Another aspect of the present invention provides a teaching system based on artificial intelligence and virtual reality, the system comprising:
the system comprises an acquisition unit, a processing unit and virtual reality equipment;
the acquisition unit is used for acquiring training voice data which are sent by a target person and are based on preset voice content and acquiring training action data which are sent by the target person and are based on preset action content;
the processing unit is used for extracting the training voice data through voice to obtain voice feature data of the target person;
obtaining action characteristic data of the target figure by carrying out action decomposition on the training action data;
respectively synthesizing a virtual voice library and a virtual action library of the target character according to the voice characteristic data and the action characteristic data of the target character;
acquiring a mapping relation between the virtual voice library and the virtual action library;
constructing a virtual character model of the target character based on the mapping relation;
acquiring corresponding scene elements through preset teaching contents;
synthesizing a virtual teaching scene based on the virtual character model and the scene elements to be sent to virtual reality equipment for displaying;
the virtual reality equipment is used for displaying the virtual teaching scene loaded by the processing unit.
In one possible implementation manner, the system further includes a virtual reality interaction unit, configured to perform context interaction between the user and the virtual character model of the target character.
The invention has the following beneficial effects:
the technical scheme of this application provides the virtual teacher who can simulate the appearance of real personage, image, action, expression and sound etc, can break away from the solitary sense that the user utilized network learning in the past, and can load the teaching scene of reality, make the user also have the sensation of accepting regular education when can selecting virtual teacher, thereby realize the scene education, satisfy the user fast, the convenient demand that obtains the learning information, user's interest in learning and study enthusiasm have been strengthened, the experience of study has been promoted.
Drawings
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 shows a schematic structural diagram of an artificial intelligence and virtual reality based teaching system according to an embodiment of the present application.
Fig. 2 shows a schematic structural diagram of a processing unit proposed by an embodiment of the application.
FIG. 3 is a flow chart of a teaching method based on artificial intelligence and virtual reality proposed by an embodiment of the application.
Detailed Description
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
At present there are two modes of education: offline face-to-face education and online education. Traditional offline face-to-face education requires the teacher and students to be in the same place, while online education was proposed to solve the problem of the teacher and students being in different places.
To improve students' enthusiasm for learning, teaching systems with a virtual teacher have appeared on the market; such systems generally provide functions such as teaching, answering questions, assigning homework, and testing. A virtual teacher with voice and a 3D image is still at the research stage.
To this end, an embodiment of the present application proposes an artificial intelligence and virtual reality based teaching system, as shown in fig. 1, the system includes:
the system comprises an acquisition unit, a processing unit and virtual reality equipment;
the acquisition unit comprises an audio acquisition device and a video acquisition device, the audio acquisition device is used for acquiring training voice data which are sent by a target person and are based on preset voice content, and the video acquisition device is used for acquiring training action data which are sent by the target person and are based on preset action content.
By way of example, the audio acquisition device collects, as the training voice data, the sound of the target character reciting prescribed language text (for example, reciting a lesson or reading a newspaper aloud); the video acquisition device collects, as the training action data, the facial, movement, and limb characteristics captured while the target character demonstrates prescribed actions (for example, broadcast gymnastics).
It should be noted that the audio acquisition device in this embodiment may be an existing recording pen, recorder, sound collector, or other device, and the video acquisition device may be an existing video recorder, camera, or other device, and the application is not specifically limited to the specific selection of the audio acquisition device and the video acquisition device.
The processing unit performs voice extraction on the training voice data acquired by the acquisition unit to obtain the voice characteristic data of the target person. For example, a voice extraction algorithm acquires voice characteristic data such as the pitch, speed, timbre, stress, length, and rise and fall of the voice.
It also performs action decomposition on the training action data to obtain the action characteristic data of the target person. For example, action characteristic data such as the facial image, limb features, facial expressions, and limb habits of the target person are acquired through techniques such as video frame processing and parsing.
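As a minimal sketch of what such extraction might compute, the following uses plain NumPy to derive crude stand-ins for these features (a zero-crossing rate as a pitch proxy, RMS energy as a stress proxy, and frame differencing for motion). The function names and the choice of proxies are illustrative assumptions, not the patent's actual algorithms:

```python
import numpy as np

def extract_voice_features(samples: np.ndarray, sample_rate: int) -> dict:
    """Crude utterance-level voice features; illustrative stand-ins only.

    A production system would use a real pitch tracker and prosody model
    instead of these proxies.
    """
    rms = float(np.sqrt(np.mean(samples ** 2)))  # loudness/stress proxy
    signs = np.signbit(samples)
    crossings = int(np.count_nonzero(signs[1:] != signs[:-1]))
    pitch_proxy = crossings * sample_rate / (2 * len(samples))  # ~fundamental in Hz
    return {
        "length_s": len(samples) / sample_rate,
        "loudness_rms": rms,
        "pitch_hz_proxy": pitch_proxy,
    }

def extract_action_features(frames: np.ndarray) -> dict:
    """Crude motion features from a (T, H, W) grayscale frame stack."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    per_step = diffs.mean(axis=(1, 2))            # movement per frame transition
    return {
        "motion_energy": float(per_step.mean()),  # overall amount of movement
        "peak_step": int(per_step.argmax()),      # most active transition
    }
```

For a pure 220 Hz sine tone, the zero-crossing proxy lands close to 220 Hz; real speech would of course need far more robust processing.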
The processing unit respectively synthesizes a virtual voice library and a virtual action library of the target character according to the voice characteristic data and the action characteristic data of the target character; that is, the voice feature and the motion feature data of the target character are respectively generated into a virtual voice library and a virtual motion library belonging to the target character by adopting an artificial intelligence technology and are stored.
The processing unit acquires a mapping relation between the virtual voice library and the virtual action library;
specifically, a corresponding relation table between a virtual voice library and a virtual action library is generated based on a pairing relation between preset voice content and preset action content;
and acquiring the mapping relation between the virtual voice library and the virtual action library by calling the corresponding relation table.
For example, when the prescribed text recited by the target character contains "students, look at the blackboard", the prescribed actions demonstrated by the target character correspondingly contain a "striking the blackboard" action; the voice content "students, look at the blackboard" and the action content "striking the blackboard" then form a corresponding pair in the correspondence table.
In this way, the preset voice content and the preset action content are paired one by one, and the pairing relation is used to generate and store the correspondence table between the virtual voice library and the virtual action library.
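A minimal sketch of this pairing step, assuming the preset voice lines and preset actions are recorded in matching order. Only the "students, look at the blackboard" / "striking the blackboard" pair comes from the text; the other strings and the "idle" fallback are invented for illustration:

```python
# Hypothetical preset contents; entries pair up by position.
PRESET_VOICE_CONTENT = [
    "students, look at the blackboard",
    "let us review yesterday's lesson",
]
PRESET_ACTION_CONTENT = [
    "strike the blackboard",
    "open the textbook",
]

def build_correspondence_table(voice_lines, action_lines):
    """Pair the preset voice content with the preset action content one by one."""
    if len(voice_lines) != len(action_lines):
        raise ValueError("voice and action content must pair up one by one")
    return dict(zip(voice_lines, action_lines))

def lookup_action(table, voice_line):
    """Resolve a voice-library entry to its action-library entry via the table."""
    return table.get(voice_line, "idle")  # hypothetical neutral fallback

table = build_correspondence_table(PRESET_VOICE_CONTENT, PRESET_ACTION_CONTENT)
```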
Constructing a virtual character model of the target character based on the mapping relation;
and constructing a virtual character model belonging to the target character by using the 3D imaging technology, the contents in the virtual voice library, the contents in the virtual action library and the mapping relation between the virtual voice library and the virtual action library.
Acquiring corresponding scene elements through preset teaching contents;
the scene elements comprise information such as people, places, backgrounds, weather, themes, time, change trends and pictures; integrating keywords in the teaching content to generate a local scene, and forming an integral scene according to corresponding texts, pictures and weather elements; combining theme backgrounds according to time, weather and pictures; combining background music according to the audio files; popping up letter prompts according to task states; and displaying the current scene animation according to the motion trend.
For example, if the preset teaching content is a physics course, a physics laboratory scene is generated.
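A minimal keyword-based sketch of this scene-element lookup. Only the physics-course / physics-laboratory pairing comes from the text; the other presets and the default scene are invented for illustration:

```python
# Hypothetical keyword -> scene-element presets.
SCENE_PRESETS = {
    "physics": {"theme": "physics laboratory", "music": "calm", "animation": "pendulum"},
    "history": {"theme": "museum hall", "music": "orchestral", "animation": "timeline"},
}

DEFAULT_SCENE = {"theme": "classroom", "music": "none", "animation": "none"}

def scene_elements_for(teaching_content: str) -> dict:
    """Pick scene elements by scanning the preset teaching content for keywords."""
    text = teaching_content.lower()
    for keyword, preset in SCENE_PRESETS.items():
        if keyword in text:
            return preset
    return DEFAULT_SCENE  # fall back to a plain classroom
```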
Synthesizing a virtual teaching scene based on the virtual character model and the scene elements to send to the virtual reality equipment for displaying; namely, the processing unit fuses the virtual model of the target character and the scene elements together to generate a final virtual teaching scene and displays the virtual teaching scene through the virtual reality equipment.
The virtual reality device proposed in this embodiment may be a VR, AR, or 3D (or 4D/5D) projection device, and the like, which is not specifically limited in this application.
It should be noted that the processing unit proposed in the present application may be an existing computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, configured to perform the operations attributed to the processing unit. As shown in fig. 2, a computer system suitable for implementing the server provided in this embodiment includes a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage section into a random access memory (RAM). The RAM also stores the various programs and data necessary for the operation of the computer system. The CPU, ROM, and RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
The following are connected to the I/O interface: an input section including a keyboard, a mouse, and the like; an output section including a liquid crystal display (LCD), a speaker, and the like; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed, and a removable medium such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is mounted on the drive as necessary, so that a computer program read from it can be installed into the storage section as needed.
In particular, according to the present embodiment, the processes described in the above flowcharts can be implemented as computer software programs. For example, the present embodiment includes a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowchart and schematic diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to the present embodiments. In this regard, each block in the flowchart or schematic diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the schematic and/or flowchart illustration, and combinations of blocks in the schematic and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The teaching system provided by the application can also realize interaction between the user and the virtual character model, to increase students' interest in learning.
The virtual reality interaction unit parses the content through a built-in content optimization algorithm, extracts keywords describing the user's state, and intelligently analyzes the user's current behavior and actions according to the current scene, for example, raising a hand to ask a question.
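A toy sketch of such keyword-driven behavior analysis; the gesture keywords and event names are assumptions, with only "raising a hand to ask a question" taken from the text:

```python
# Hypothetical gesture keyword -> interaction event table.
BEHAVIOR_KEYWORDS = {
    "raise hand": "student_question",
    "nod": "acknowledge",
    "shake head": "confusion",
}

def analyze_user_behavior(observed_gestures):
    """Map detected gesture descriptions to interaction events for the scene."""
    events = []
    for gesture in observed_gestures:
        for keyword, event in BEHAVIOR_KEYWORDS.items():
            if keyword in gesture:
                events.append(event)
    return events
```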
In order to facilitate understanding of the technical solution of the present application, in another embodiment, the present application provides a teaching method based on artificial intelligence and virtual reality, and as shown in fig. 3, the teaching method includes the following steps:
s100, a collecting unit collects training voice data which are sent by a target person and based on preset voice content and training action data which are displayed by the target person and based on preset action content;
s200, the processing unit respectively obtains voice characteristic data and action characteristic data of the target person through voice extraction and action decomposition based on the training voice data and the training action data.
S300, respectively synthesizing a virtual voice library and a virtual action library of the target character by the processing unit according to the voice characteristic data and the action characteristic data of the target character;
Specifically:
s310, synthesizing a virtual voice library of the target character through artificial intelligence based on the voice characteristic data of the target character;
and S320, synthesizing a virtual action library of the target character through artificial intelligence based on the action characteristic data of the target character.
S400, the processor acquires a mapping relation between the virtual voice library and the virtual action library;
specifically, a corresponding relation table between a virtual voice library and a virtual action library is generated based on a pairing relation between preset voice content and preset action content;
and acquiring the mapping relation between the virtual voice library and the virtual action library by calling the corresponding relation table.
S500, constructing a virtual character model of the target character based on the mapping relation;
specifically, a virtual character model of the target character is constructed based on the 3D imaging technology and the mapping relation between the virtual voice library and the virtual motion library.
S600, acquiring corresponding scene elements through preset teaching contents;
specifically, analyzing a preset teaching content to obtain a scene element, wherein the scene element comprises:
theme backgrounds, music backgrounds, scene animations, etc.
And S700, synthesizing a virtual teaching scene based on the virtual character model and the scene elements to send the virtual teaching scene to the virtual reality equipment for displaying.
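Putting steps S100–S700 together, a heavily stubbed end-to-end sketch might look like the following; every literal value is a placeholder standing in for the real synthesis stage it names:

```python
def teach(training_voice_data, training_action_data, teaching_content):
    """Stubbed walk-through of S100-S700; each literal is a placeholder."""
    # S200: voice extraction and action decomposition (stubbed features)
    voice_features = {"pitch": "mid", "speed": "normal"}
    action_features = {"gesture": "point"}
    # S300: synthesize the virtual voice library and virtual action library
    voice_lib = {"hello class": ("audio_clip", voice_features)}
    action_lib = {"wave": ("anim_clip", action_features)}
    # S400: mapping relation between the two libraries
    mapping = {"hello class": "wave"}
    # S500: the virtual character model couples the libraries and the mapping
    model = {"voice": voice_lib, "action": action_lib, "map": mapping}
    # S600: scene elements parsed from the preset teaching content
    theme = "physics laboratory" if "physics" in teaching_content.lower() else "classroom"
    scene = {"theme": theme}
    # S700: fuse the character model and scene elements into the teaching scene
    return {"model": model, "scene": scene}
```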
In a specific example, the target character is the user's father; that is, a virtual character model of the user's father is constructed. By recording the prescribed voice-training and action-training videos, a virtual teacher image with the same voice and face as the father is generated. The user selects a prescribed course by communicating with the AI, the prescribed educational content is then intelligently parsed into a 3D educational video, and the virtual father image teaches the course.
It should be noted that the principle and workflow of the teaching method provided by this embodiment are based on the units in the teaching system; for the relevant parts, reference may be made to the description above, which is not repeated here.
It is further noted that, in the description of the present application, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention, and it will be obvious to those skilled in the art that other variations or modifications may be made on the basis of the above description, and all embodiments may not be exhaustive, and all obvious variations or modifications may be included within the scope of the present invention.

Claims (10)

1. A teaching method based on artificial intelligence and virtual reality is characterized by comprising the following steps:
respectively synthesizing a virtual voice library and a virtual action library of the target character according to the voice characteristic data and the action characteristic data of the target character;
acquiring a mapping relation between the virtual voice library and the virtual action library;
constructing a virtual character model of the target character based on the mapping relation;
acquiring corresponding scene elements through preset teaching contents;
and synthesizing a virtual teaching scene based on the virtual character model and the scene elements to be sent to virtual reality equipment for displaying.
2. The method of claim 1, wherein before the synthesizing of the virtual speech and motion libraries of the target character from the speech and motion characteristic data of the target character, respectively, the method comprises:
acquiring voice characteristic data through voice extraction according to training voice data which is sent by a target person and is based on preset voice content;
and acquiring the action characteristic data through action decomposition according to the training action data based on the preset action content displayed by the target person.
3. The method of claim 2, wherein the synthesizing the virtual voice library and the virtual motion library of the target character according to the voice characteristic data and the motion characteristic data of the target character comprises:
synthesizing a virtual voice library of the target character through artificial intelligence based on the voice feature data;
and synthesizing the virtual action library of the target character through artificial intelligence based on the action characteristic data.
4. The method according to claim 3, wherein said obtaining a mapping relationship between the virtual voice library and the virtual action library comprises:
generating a corresponding relation table between the virtual voice library and the virtual action library based on the matching relation between the preset voice content and the preset action content;
and acquiring the mapping relation between the virtual voice library and the virtual action library by calling the corresponding relation table.
5. The method of claim 1, wherein the obtaining the corresponding scene element through the preset teaching content comprises:
analyzing preset teaching contents to obtain scene elements, wherein the scene elements comprise:
theme background, music background and scene animation.
6. The method of claim 1, wherein constructing the virtual character model of the target character based on the mapping comprises:
and constructing a virtual character model of the target character based on a 3D imaging technology and the mapping relation between the virtual voice library and the virtual action library.
7. The method of claim 2, wherein the voice characteristic data comprises:
the pitch, speed, timbre, stress, length, and rise and fall of the target character's voice.
8. The method of claim 3, wherein the action profile data comprises:
facial expressions and limb movements of the target character.
9. A teaching system based on artificial intelligence and virtual reality is characterized by comprising:
the system comprises an acquisition unit, a processing unit and virtual reality equipment;
the acquisition unit is used for acquiring training voice data which are sent by a target person and are based on preset voice content and acquiring training action data which are sent by the target person and are based on preset action content;
the processing unit is used for extracting the training voice data through voice to obtain voice feature data of the target person;
obtaining action characteristic data of the target figure by carrying out action decomposition on the training action data;
respectively synthesizing a virtual voice library and a virtual action library of the target character according to the voice characteristic data and the action characteristic data of the target character;
acquiring a mapping relation between the virtual voice library and the virtual action library;
constructing a virtual character model of the target character based on the mapping relation;
acquiring corresponding scene elements through preset teaching contents;
synthesizing a virtual teaching scene based on the virtual character model and the scene elements to be sent to virtual reality equipment for displaying;
the virtual reality equipment is used for displaying the virtual teaching scene loaded by the processing unit.
10. The teaching system of claim 9, further comprising a virtual reality interaction unit configured to enable the user to engage in contextual interaction with the virtual character model of the target character.
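The system of claims 9 and 10 ties a voice library and an action library together through a mapping relation, then packages the resulting character model with scene elements into a teaching scene. The sketch below shows one plausible data layout for that pipeline; all class names, field names, and the dictionary-based mapping are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class VirtualCharacterModel:
    """Hypothetical character model built from the claimed libraries."""
    voice_library: dict   # utterance id -> synthesized voice clip
    action_library: dict  # action id -> synthesized motion clip
    mapping: dict         # utterance id -> action id (the mapping relation)

    def perform(self, utterance_id):
        """Return the paired voice clip and action clip for one line."""
        action_id = self.mapping[utterance_id]
        return self.voice_library[utterance_id], self.action_library[action_id]

def build_teaching_scene(model, teaching_content, scene_elements):
    """Combine the character model with scene elements, as the processing
    unit would before sending the scene to the virtual reality device."""
    return {
        "model": model,
        "content": teaching_content,
        "elements": scene_elements,
    }

# Usage: pair a greeting utterance with a waving gesture, then assemble
# a scene for display.
model = VirtualCharacterModel(
    voice_library={"greet": "greet.wav"},
    action_library={"wave": "wave.anim"},
    mapping={"greet": "wave"},
)
scene = build_teaching_scene(model, "Lesson 1", ["classroom", "blackboard"])
```

In this sketch the mapping relation is a plain dictionary from utterances to actions; the patent leaves open how the mapping is acquired, so any learned or rule-based association could substitute here.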
CN202110300516.8A 2021-03-22 2021-03-22 Teaching method and system based on artificial intelligence and virtual reality Pending CN113253836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300516.8A CN113253836A (en) 2021-03-22 2021-03-22 Teaching method and system based on artificial intelligence and virtual reality


Publications (1)

Publication Number Publication Date
CN113253836A true CN113253836A (en) 2021-08-13

Family

ID=77181075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110300516.8A Pending CN113253836A (en) 2021-03-22 2021-03-22 Teaching method and system based on artificial intelligence and virtual reality

Country Status (1)

Country Link
CN (1) CN113253836A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821104A * 2021-09-17 2021-12-21 武汉虹信技术服务有限责任公司 Visual interactive system based on holographic projection
CN114302153A * 2021-11-25 2022-04-08 阿里巴巴达摩院(杭州)科技有限公司 Video playing method and device
CN114302153B * 2021-11-25 2023-12-08 阿里巴巴达摩院(杭州)科技有限公司 Video playing method and device
CN114470758A * 2022-01-17 2022-05-13 上海光追网络科技有限公司 Character action data processing method and system based on VR

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106710590A (en) * 2017-02-24 2017-05-24 广州幻境科技有限公司 Voice interaction system with emotional function based on virtual reality environment and method
CN108153415A (en) * 2017-12-22 2018-06-12 歌尔科技有限公司 Virtual reality language teaching interaction method and virtual reality device
CN109377797A (en) * 2018-11-08 2019-02-22 北京葡萄智学科技有限公司 Virtual portrait teaching method and device
CN111986297A (en) * 2020-08-10 2020-11-24 山东金东数字创意股份有限公司 Virtual character facial expression real-time driving system and method based on voice control
CN112331001A (en) * 2020-10-23 2021-02-05 螺旋平衡(东莞)体育文化传播有限公司 Teaching system based on virtual reality technology



Similar Documents

Publication Publication Date Title
CN105632251B (en) 3D virtual teacher system and method with phonetic function
CN113253836A (en) Teaching method and system based on artificial intelligence and virtual reality
Kennaway et al. Providing signed content on the Internet by synthesized animation
CN110488975B (en) Data processing method based on artificial intelligence and related device
US6526395B1 (en) Application of personality models and interaction with synthetic characters in a computing system
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
Mukherjee Role of multimedia in education
CN115731751A (en) Online teaching system integrating artificial intelligence and virtual reality technology
CN117055724A (en) Generating type teaching resource system in virtual teaching scene and working method thereof
Solina et al. Multimedia dictionary and synthesis of sign language
Bruhn et al. What is mediality, and (how) does it matter? Theoretical terms and methodology
JP3930402B2 (en) ONLINE EDUCATION SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROVIDING METHOD, AND PROGRAM
Wolfe et al. A survey of facial nonmanual signals portrayed by avatar
CN114666307B (en) Conference interaction method, conference interaction device, equipment and storage medium
Steeples et al. Enabling professional learning in distributed communities of practice: descriptors for multimedia objects
Smuseva et al. Research and software development using AR technology
Hersh et al. Representing contextual features of subtitles in an educational context
Frantiska Jr Interface development for learning environments: Establishing connections between users and learning
Xiao et al. Computer Animation for EFL Learning Environments.
Montgomery et al. Enabling real-time 3D display of lifelike fingerspelling in a web app
Mirri Rich media content adaptation in e-learning systems
KR102575820B1 (en) Digital actor management system for exercise trainer
US20220301250A1 (en) Avatar-based interaction service method and apparatus
CN116980643A (en) Data processing method, device, equipment and readable storage medium
Bonamico et al. Virtual talking heads for tele-education applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210813