CN106683501A - AR children scene play projection teaching method and system - Google Patents
- Publication number
- CN106683501A (application CN201611213221.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- projection
- image
- face
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/067—Combinations of audio and projected visual presentation, e.g. film, slides
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an AR children scene play projection teaching method and system. The method includes the steps of: acquiring an AR interactive card image, a user face image, real-time user body-movement data, and user voice; identifying the information of the AR interactive card image and calling a corresponding 3D scene play template, which includes a 3D role model and a background model, the 3D role model being formed by a face model and a body model; cropping the user face image and compositing it with the face model, performing data interaction between the user's real-time body movement and the body model, and controlling the body movement of the 3D role model; and converting the called 3D scene play template into a projection and projecting it onto a screen. By compositing the face image with the face model and through interaction between the user's body-movement data and the body model, the role in the scene play takes on part of the user's facial characteristics and makes corresponding movements according to the user's movement, achieving human-machine interaction and a strong sense of immersion in the story.
Description
Technical field
The present invention relates to the field of AR projection, and more particularly to an AR children scene play projection teaching method and system.
Background technology
With the rapid development of information technology, video teaching has entered early childhood education and greatly assists educational activities for children. Children have broad interests and their thinking is at the concrete-image stage; they find relatively abstract content difficult to accept, so promoting the development of concrete-image thinking is the key approach to developing children's learning potential early on. With its vivid and intuitive form of presentation, video teaching can better satisfy this requirement of child education.
At present, most video teaching for children simply plays videos through a player. The children's sense of immersion in the story is insufficient, human-machine interaction is lacking, the experience is not engaging enough, and the teaching process is rather dull.
Summary of the invention
It is an object of the present invention to overcome the above technical deficiencies by proposing an AR children scene play projection teaching method and system, thereby solving the prior-art problems that children's video teaching offers an insufficient sense of immersion and lacks human-machine interaction.
To achieve the above technical purpose, the technical solution of the present invention provides an AR children scene play projection teaching method, comprising:
S1: acquiring an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, wherein the real-time body-movement data of the user are acquired with a depth sensing device;
S2: recognizing the information of the AR interactive card image and calling the 3D scene play template corresponding to the AR interactive card, wherein the 3D scene play template comprises a 3D role model and a background model, the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
S3: cropping the user face image and compositing the cropped face image onto the face model of the 3D role model;
S4: performing data interaction between the real-time body-movement data of the user and the body model of the 3D role model, thereby controlling the body movement of the 3D role model;
S5: performing voice-changing processing on the user speech;
S6: converting the 3D scene play template called in S2 into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
The technical solution of the present invention also provides an AR children scene play projection teaching system, comprising:
An acquisition module: acquiring an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, wherein the real-time body-movement data of the user are acquired with a depth sensing device;
A scene play selection module: recognizing the information of the AR interactive card image and calling the 3D scene play template corresponding to the AR interactive card, wherein the 3D scene play template comprises a 3D role model and a background model, the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
A face-image synthesis module: cropping the user face image and compositing the cropped face image onto the face model of the 3D role model;
A body-movement synthesis module: performing data interaction between the real-time body-movement data of the user and the body model of the 3D role model, controlling the body movement of the 3D role model;
A sound processing module: performing voice-changing processing on the user speech;
A scene play projection module: converting the 3D scene play template called in the scene play selection module into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
Compared with the prior art, the beneficial effects of the present invention include: different scene plays can be projected onto the projection screen by switching AR interactive cards; by compositing the face image onto the 3D role model and through interaction between the user's real-time body-movement data and the 3D role model, the role in the scene play possesses facial characteristics of the user and can make corresponding movements according to the user's actions, achieving strong human-machine interaction, giving children a strong sense of immersion in the story, and making the experience highly engaging.
Description of the drawings
Fig. 1 is a flow chart of the AR children scene play projection teaching method provided by the present invention;
Fig. 2 is an architecture diagram of the AR children scene play projection teaching system provided by the present invention.
In the drawings: 1, AR children scene play projection teaching system; 11, acquisition module; 12, scene play selection module; 13, face-image synthesis module; 14, body-movement synthesis module; 15, sound processing module; 16, scene play projection module.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein only serve to explain the present invention and are not intended to limit it.
As shown in Fig. 1, the present invention provides an AR children scene play projection teaching method, comprising:
S1: acquiring an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, wherein the real-time body-movement data of the user are acquired with a depth sensing device;
S2: recognizing the information of the AR interactive card image and calling the corresponding 3D scene play template, which comprises a 3D role model and a background model; the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
S3: cropping the user face image and compositing the cropped face image onto the face model of the 3D role model;
S4: performing data interaction between the real-time body-movement data of the user and the body model of the 3D role model, thereby controlling the body movement of the 3D role model (a minimal mapping sketch follows this list);
S5: performing voice-changing processing on the user speech;
S6: converting the 3D scene play template called in S2 into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
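The patent does not specify how the depth-sensor data drive the body model in step S4. The sketch below is a minimal illustration assuming a Kinect-style skeleton (a dictionary of named joint positions) and a hypothetical RoleModel whose bone directions are set from joint pairs; the class, method, and joint names are assumptions, not taken from the source.

```python
import numpy as np

# Hypothetical stand-in for the body model of the 3D role model.
class RoleModel:
    def __init__(self):
        self.bone_directions = {}  # bone name -> unit direction vector

    def set_bone_direction(self, bone, direction):
        self.bone_directions[bone] = direction

# Bones expressed as (parent joint, child joint) pairs of an assumed
# Kinect-style skeleton; purely illustrative.
BONES = {
    "upper_arm_left": ("shoulder_left", "elbow_left"),
    "forearm_left": ("elbow_left", "wrist_left"),
    "upper_arm_right": ("shoulder_right", "elbow_right"),
    "forearm_right": ("elbow_right", "wrist_right"),
}

def drive_body_model(role: RoleModel, joints: dict) -> None:
    """Map one frame of joint positions (sensor space) onto the body model."""
    for bone, (parent, child) in BONES.items():
        if parent in joints and child in joints:
            v = np.asarray(joints[child]) - np.asarray(joints[parent])
            n = np.linalg.norm(v)
            if n > 1e-6:
                role.set_bone_direction(bone, v / n)

# Example: one frame of depth-sensor joint data drives the model.
role = RoleModel()
frame_joints = {
    "shoulder_left": (-0.2, 1.4, 2.0),
    "elbow_left": (-0.4, 1.2, 2.0),
    "wrist_left": (-0.5, 1.0, 2.0),
}
drive_body_model(role, frame_joints)
```

In a running system this mapping would be called once per depth-sensor frame so the projected role follows the user's movements in real time.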
In the AR children scene play projection teaching method of the present invention, step S1 includes:
The image-bearing side of the AR interactive card is placed 10 cm to 15 cm in front of the camera, and the camera acquires the AR interactive card image. After the information of the AR interactive card image has been recognized, the interactive card in front of the camera is removed, the camera then acquires the user face image, and a voice acquisition device acquires the user speech.
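As an illustration of this acquisition step, the minimal sketch below captures one frame of the card image and, after the card is removed, one frame of the user's face with OpenCV. The camera index, window prompts, and key-press-driven timing are assumptions added for the example, not part of the patent; voice would be recorded separately with an audio capture device.

```python
import cv2

def grab_frame(cap, prompt):
    """Show the live feed and return the frame captured when SPACE is pressed."""
    while True:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera read failed")
        cv2.imshow(prompt, frame)
        if cv2.waitKey(1) & 0xFF == ord(' '):
            cv2.destroyWindow(prompt)
            return frame

cap = cv2.VideoCapture(0)  # assumed camera index

# Card held roughly 10-15 cm in front of the camera, image side facing it.
card_image = grab_frame(cap, "show AR card, press SPACE")

# Card removed, then the user's face is captured with the same camera.
face_image = grab_frame(cap, "show your face, press SPACE")

cap.release()
```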
In the AR children scene play projection teaching method of the present invention, step S2 includes:
Each AR interactive card carries a specific image; recognizing the information of that image calls a specific 3D scene play template, which is then projected as a specific background and 3D roles;
Each 3D scene play template may contain multiple 3D role models;
Switching the AR interactive card switches the 3D scene play template and thereby the projected scene play, which is composed of the roles and the background.
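The patent does not state how the specific image on each card is recognized. One plausible sketch, shown below, matches the captured card image against a small library of reference card images with ORB features and returns the template keyed by the best match; the file paths, template names, and distance threshold are all hypothetical.

```python
import cv2

# Hypothetical library: reference card image file -> scene play template name.
CARD_LIBRARY = {
    "cards/little_red_riding_hood.png": "template_little_red_riding_hood",
    "cards/three_little_pigs.png": "template_three_little_pigs",
}

def recognize_card(card_image) -> str:
    """Return the template whose reference card best matches the captured image."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    gray = cv2.cvtColor(card_image, cv2.COLOR_BGR2GRAY)
    _, des_query = orb.detectAndCompute(gray, None)

    best_template, best_score = None, 0
    for path, template in CARD_LIBRARY.items():
        ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des_ref = orb.detectAndCompute(ref, None)
        if des_query is None or des_ref is None:
            continue
        matches = matcher.match(des_query, des_ref)
        # Count reasonably close descriptor matches as the similarity score.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_template, best_score = template, score
    return best_template
```

The returned template name would then be used to load the corresponding background model and 3D role models.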
In the AR children scene play projection teaching method of the present invention, step S3 includes:
The user face image is processed with the Molioopencv (OpenCV-based) technique: the face contour is identified, the eyes, mouth, and nose are marked and cropped, and before the cropped face image is composited onto the face model of the 3D role model, it is given Q-version (cartoon-style) post-processing.
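The description only names an OpenCV-based technique for this step. The sketch below uses a Haar-cascade face detector to locate and crop the face region, which is one common OpenCV approach; the cascade choice and crop margin are assumptions, and the eye/mouth/nose landmarking and Q-version stylization are left as subsequent steps.

```python
import cv2

def crop_face(face_image):
    """Detect the largest face and return the cropped region, or None if no face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(face_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection and add a small margin around it.
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    m = int(0.1 * w)
    return face_image[max(0, y - m):y + h + m, max(0, x - m):x + w + m]

# The cropped image would then be landmarked (eyes, mouth, nose), stylized
# in Q-version form, and composited as a texture onto the face model.
```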
In the AR children scene play projection teaching method of the present invention, step S5 includes:
According to the needs of the scene play, voice-changing processing is performed on the acquired user speech.
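The patent does not specify the voice-changing algorithm. A simple pitch shift, as sketched below with librosa, is one way to turn the recorded speech into a character voice; the file names and the amount of shift are assumptions for illustration only.

```python
import librosa
import soundfile as sf

def change_voice(in_path: str, out_path: str, n_steps: float = 4.0) -> None:
    """Pitch-shift the recorded user speech by n_steps semitones."""
    y, sr = librosa.load(in_path, sr=None)  # keep the original sample rate
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, y_shifted, sr)

# e.g. raise the user's voice by four semitones for a cartoon-like character
change_voice("user_speech.wav", "character_voice.wav", n_steps=4.0)
```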
In the AR children scene play projection teaching method of the present invention, step S6 includes:
The playback speed of the 3D scene play template projection can be adjusted, and the projection can be paused or stopped;
After steps S3, S4, and S5, the face model of the 3D role model has been combined with the cropped user face image and the body model interacts with the user's real-time body-movement data, so that when the 3D scene play template is converted into a projection, the role in the projection possesses facial characteristics of the user and can make corresponding movements according to the user's actions; the projection includes the background, and the voice-changed user speech is played while projecting.
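To illustrate the playback controls named here (speed adjustment, pause, stop), the sketch below keeps a small playback state object that a projection loop could consult on every frame. It is an assumed design, not taken from the patent; the class and function names are hypothetical.

```python
import time

class PlaybackControl:
    """Minimal playback state for the projected scene play."""
    def __init__(self, fps: float = 30.0):
        self.fps = fps
        self.speed = 1.0      # 1.0 = normal speed, 0.5 = half, 2.0 = double
        self.paused = False
        self.stopped = False

    def set_speed(self, speed: float):
        self.speed = max(0.1, speed)

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def stop(self):
        self.stopped = True

    def frame_delay(self) -> float:
        """Seconds to wait before rendering the next projected frame."""
        return 1.0 / (self.fps * self.speed)

def run_projection(control: PlaybackControl, render_frame):
    """Projection loop that honours speed, pause, and stop each frame."""
    while not control.stopped:
        if not control.paused:
            render_frame()
        time.sleep(control.frame_delay())
```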
As shown in Fig. 2, the present invention provides an AR children scene play projection teaching system 1, comprising:
Acquisition module 11: acquires an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech; the real-time body-movement data of the user are acquired with a depth sensing device;
Scene play selection module 12: recognizes the information of the AR interactive card image and calls the corresponding 3D scene play template, which comprises a 3D role model and a background model; the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
Face-image synthesis module 13: crops the user face image and composites the cropped face image onto the face model of the 3D role model;
Body-movement synthesis module 14: performs data interaction between the real-time body-movement data of the user and the body model of the 3D role model, controlling the body movement of the 3D role model;
Sound processing module 15: performs voice-changing processing on the user speech;
Scene play projection module 16: converts the 3D scene play template called in the scene play selection module 12 into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
In the AR children scene play projection teaching system 1 of the present invention, in the scene play selection module 12:
Each AR interactive card carries a specific image; recognizing the information of that image calls a specific 3D scene play template, which is then projected as a specific background and 3D roles;
Each 3D scene play template may contain multiple 3D role models.
In the AR children scene play projection teaching system 1 of the present invention, the face-image synthesis module 13:
processes the user face image with the Molioopencv (OpenCV-based) technique, identifies the face contour, marks and crops the eyes, mouth, and nose, and performs Q-version (cartoon-style) post-processing on the cropped face image before compositing it onto the face model of the 3D role model.
In the AR children scene play projection teaching system 1 of the present invention, the scene play projection module 16:
can adjust the playback speed of the 3D scene play template projection and can pause or stop the projection.
In use, the present invention acquires an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, the body-movement data being acquired with a depth sensing device; recognizes the information of the AR interactive card image and calls the corresponding 3D scene play template, which comprises a 3D role model (composed of a face model and a body model) and a dynamic or static background model; crops the user face image and composites the cropped face image onto the face model of the 3D role model; performs data interaction between the real-time body-movement data of the user and the body model of the 3D role model, controlling the body movement of the 3D role model; performs voice-changing processing on the user speech; and converts the called 3D scene play template into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
The beneficial effects of the present invention include: different scene plays can be projected onto the projection screen by switching AR interactive cards; by compositing the face image onto the 3D role model and through interaction between the user's real-time body-movement data and the 3D role model, the role in the scene play possesses facial characteristics of the user and can make corresponding movements according to the user's actions, achieving strong human-machine interaction, giving children a strong sense of immersion in the story, and making the experience highly engaging.
The specific embodiments of the present invention described above do not limit the scope of protection of the present invention. Any other corresponding changes and modifications made according to the technical concept of the present invention shall be included within the scope of protection of the claims of the present invention.
Claims (8)
1. An AR children scene play projection teaching method, characterized by comprising:
S1: acquiring an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, wherein the real-time body-movement data of the user are acquired with a depth sensing device;
S2: recognizing the information of the AR interactive card image and calling the 3D scene play template corresponding to the AR interactive card, wherein the 3D scene play template comprises a 3D role model and a background model, the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
S3: cropping the user face image and compositing the cropped face image onto the face model of the 3D role model;
S4: performing data interaction between the real-time body-movement data of the user and the body model of the 3D role model, thereby controlling the body movement of the 3D role model;
S5: performing voice-changing processing on the user speech;
S6: converting the 3D scene play template called in S2 into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
2. The AR children scene play projection teaching method according to claim 1, characterized in that step S2 includes: each AR interactive card carries a specific image; recognizing the information of the specific image calls the specific 3D scene play template, which is then projected as a specific background and 3D roles; each 3D scene play template may contain multiple 3D role models.
3. The AR children scene play projection teaching method according to claim 1, characterized in that step S3 includes: processing the user face image with the Molioopencv (OpenCV-based) technique, identifying the face contour, marking and cropping the eyes, mouth, and nose, and performing Q-version (cartoon-style) post-processing on the cropped face image before compositing it onto the face model of the 3D role model.
4. The AR children scene play projection teaching method according to claim 1, characterized in that step S6 includes: the playback speed of the projection of the 3D scene play template can be adjusted, and the projection of the 3D scene play template can be paused or stopped.
5. An AR children scene play projection teaching system, characterized by comprising:
an acquisition module: acquiring an AR interactive card image, a user face image, real-time body-movement data of the user, and user speech, wherein the real-time body-movement data of the user are acquired with a depth sensing device;
a scene play selection module: recognizing the information of the AR interactive card image and calling the 3D scene play template corresponding to the AR interactive card, wherein the 3D scene play template comprises a 3D role model and a background model, the 3D role model is composed of a face model and a body model, and the background model is dynamic or static;
a face-image synthesis module: cropping the user face image and compositing the cropped face image onto the face model of the 3D role model;
a body-movement synthesis module: performing data interaction between the real-time body-movement data of the user and the body model of the 3D role model, controlling the body movement of the 3D role model;
a sound processing module: performing voice-changing processing on the user speech;
a scene play projection module: converting the 3D scene play template called in the scene play selection module into a projection on a projection screen, wherein the background model is converted into a dynamic or static background plane, the 3D role model is correspondingly converted into a dynamic 3D role projection according to the user's real-time body movements, and the voice-changed user speech is played while projecting.
6. The AR children scene play projection teaching system according to claim 5, characterized in that, in the scene play selection module: each AR interactive card carries a specific image; recognizing the information of the specific image calls the specific 3D scene play template, which is then projected as a specific background and 3D roles; each 3D scene play template may contain multiple 3D role models.
7. The AR children scene play projection teaching system according to claim 5, characterized in that the face-image synthesis module: processes the user face image with the Molioopencv (OpenCV-based) technique, identifies the face contour, marks and crops the eyes, mouth, and nose, and performs Q-version (cartoon-style) post-processing on the cropped face image before compositing it onto the face model of the 3D role model.
8. The AR children scene play projection teaching system according to claim 5, characterized in that, in the scene play projection module: the playback speed of the projection of the 3D scene play template can be adjusted, and the projection of the 3D scene play template can be paused or stopped.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611213221.2A CN106683501B (en) | 2016-12-23 | 2016-12-23 | AR children scene play projection teaching method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611213221.2A CN106683501B (en) | 2016-12-23 | 2016-12-23 | AR children scene play projection teaching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106683501A true CN106683501A (en) | 2017-05-17 |
CN106683501B CN106683501B (en) | 2019-05-14 |
Family
ID=58870494
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611213221.2A Active CN106683501B (en) | AR children scene play projection teaching method and system | 2016-12-23 | 2016-12-23 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106683501B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107240319A (en) * | 2017-07-25 | 2017-10-10 | 深圳市鹰硕技术有限公司 | A kind of interactive Scene Teaching system for the K12 stages |
CN107396001A (en) * | 2017-08-30 | 2017-11-24 | 郝翻翻 | A kind of method of record personal |
CN108245881A (en) * | 2017-12-29 | 2018-07-06 | 武汉市马里欧网络有限公司 | Three-dimensional jointed plate model buildings system based on AR |
CN108288419A (en) * | 2017-12-31 | 2018-07-17 | 广州市坤腾软件技术有限公司 | A kind of vocational education craftsman's platform based on AR/VR technologies |
CN108509473A (en) * | 2017-08-28 | 2018-09-07 | 胜典科技股份有限公司 | Audio-visual works system and recording medium combining self-created elements with augmented reality technology |
CN109039851A (en) * | 2017-06-12 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Interaction data processing method, device, computer equipment and storage medium |
CN109255990A (en) * | 2018-09-30 | 2019-01-22 | 杭州乔智科技有限公司 | A kind of tutoring system based on AR augmented reality |
CN109326154A (en) * | 2018-12-05 | 2019-02-12 | 北京汉谷教育科技有限公司 | A method of human-computer interaction teaching is carried out by speech recognition engine |
CN109917907A (en) * | 2019-01-29 | 2019-06-21 | 长安大学 | A card-based dynamic storyboard interaction method |
CN112068709A (en) * | 2020-11-12 | 2020-12-11 | 广州志胜游艺设备有限公司 | AR display interactive learning method based on books for children |
CN114650365A (en) * | 2020-12-18 | 2022-06-21 | 丰田自动车株式会社 | Image display system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130162532A1 (en) * | 2010-09-17 | 2013-06-27 | Tencent Technology (Shenzhen) Company Limited | Method and system for gesture-based human-machine interaction and computer-readable medium thereof |
CN104102412A (en) * | 2014-07-24 | 2014-10-15 | 央数文化(上海)股份有限公司 | Augmented reality technology-based handheld reading equipment and reading method thereof |
CN104346451A (en) * | 2014-10-29 | 2015-02-11 | 山东大学 | Situation awareness system based on user feedback, as well as operating method and application thereof |
CN105139701A (en) * | 2015-09-16 | 2015-12-09 | 华中师范大学 | Interactive children teaching system |
CN105306862A (en) * | 2015-11-17 | 2016-02-03 | 广州市英途信息技术有限公司 | Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method |
CN205622745U (en) * | 2016-05-10 | 2016-10-05 | 倪宏伟 | Real -time synthesis system of virtual reality true man |
CN106131530A (en) * | 2016-08-26 | 2016-11-16 | 万象三维视觉科技(北京)有限公司 | A naked-eye 3D virtual reality display system and display method thereof |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130162532A1 (en) * | 2010-09-17 | 2013-06-27 | Tencent Technology (Shenzhen) Company Limited | Method and system for gesture-based human-machine interaction and computer-readable medium thereof |
CN104102412A (en) * | 2014-07-24 | 2014-10-15 | 央数文化(上海)股份有限公司 | Augmented reality technology-based handheld reading equipment and reading method thereof |
CN104346451A (en) * | 2014-10-29 | 2015-02-11 | 山东大学 | Situation awareness system based on user feedback, as well as operating method and application thereof |
CN105139701A (en) * | 2015-09-16 | 2015-12-09 | 华中师范大学 | Interactive children teaching system |
CN105306862A (en) * | 2015-11-17 | 2016-02-03 | 广州市英途信息技术有限公司 | Scenario video recording system and method based on 3D virtual synthesis technology and scenario training learning method |
CN205622745U (en) * | 2016-05-10 | 2016-10-05 | 倪宏伟 | Real -time synthesis system of virtual reality true man |
CN106131530A (en) * | 2016-08-26 | 2016-11-16 | 万象三维视觉科技(北京)有限公司 | A naked-eye 3D virtual reality display system and display method thereof |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109039851A (en) * | 2017-06-12 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Interaction data processing method, device, computer equipment and storage medium |
CN107240319A (en) * | 2017-07-25 | 2017-10-10 | 深圳市鹰硕技术有限公司 | A kind of interactive Scene Teaching system for the K12 stages |
CN108509473B (en) * | 2017-08-28 | 2022-07-12 | 胜典科技股份有限公司 | Video and audio works system and recording medium combining self-creation elements by augmented reality technology |
CN108509473A (en) * | 2017-08-28 | 2018-09-07 | 胜典科技股份有限公司 | Audio-visual works system and recording medium combining self-created elements with augmented reality technology |
CN107396001A (en) * | 2017-08-30 | 2017-11-24 | 郝翻翻 | A kind of method of record personal |
CN108245881A (en) * | 2017-12-29 | 2018-07-06 | 武汉市马里欧网络有限公司 | Three-dimensional jointed plate model buildings system based on AR |
CN108288419A (en) * | 2017-12-31 | 2018-07-17 | 广州市坤腾软件技术有限公司 | A kind of vocational education craftsman's platform based on AR/VR technologies |
CN109255990A (en) * | 2018-09-30 | 2019-01-22 | 杭州乔智科技有限公司 | A kind of tutoring system based on AR augmented reality |
CN109326154A (en) * | 2018-12-05 | 2019-02-12 | 北京汉谷教育科技有限公司 | A method of human-computer interaction teaching is carried out by speech recognition engine |
CN109917907A (en) * | 2019-01-29 | 2019-06-21 | 长安大学 | A card-based dynamic storyboard interaction method |
CN109917907B (en) * | 2019-01-29 | 2022-05-03 | 长安大学 | Card-based dynamic storyboard interaction method |
CN112068709A (en) * | 2020-11-12 | 2020-12-11 | 广州志胜游艺设备有限公司 | AR display interactive learning method based on books for children |
CN114650365A (en) * | 2020-12-18 | 2022-06-21 | 丰田自动车株式会社 | Image display system |
CN114650365B (en) * | 2020-12-18 | 2023-12-01 | 丰田自动车株式会社 | Image display system |
Also Published As
Publication number | Publication date |
---|---|
CN106683501B (en) | 2019-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106683501B (en) | AR children scene play projection teaching method and system | |
US12002236B2 (en) | Automated gesture identification using neural networks | |
JP6902683B2 (en) | Virtual robot interaction methods, devices, storage media and electronic devices | |
JP2024019736A (en) | Massive simultaneous remote digital presence world | |
US11151796B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
CN110418095B (en) | Virtual scene processing method and device, electronic equipment and storage medium | |
US11017575B2 (en) | Method and system for generating data to provide an animated visual representation | |
CN107274464A (en) | A kind of methods, devices and systems of real-time, interactive 3D animations | |
JP7143847B2 (en) | Information processing system, information processing method, and program | |
US20100146052A1 (en) | method and a system for setting up encounters between persons in a telecommunications system | |
CN109333544B (en) | Doll interaction method for marionette performance participated by audience | |
CN109978975A (en) | A kind of moving method and device, computer equipment of movement | |
CN106570473A (en) | Deaf-mute sign language recognition interactive system based on robot | |
CN106582005A (en) | Data synchronous interaction method and device in virtual games | |
CN115049016B (en) | Model driving method and device based on emotion recognition | |
WO2018139203A1 (en) | Information processing device, information processing method, and program | |
CN111383642B (en) | Voice response method based on neural network, storage medium and terminal equipment | |
Yargıç et al. | A lip reading application on MS Kinect camera | |
US7257538B2 (en) | Generating animation from visual and audio input | |
CN107918482A (en) | The method and system of overstimulation is avoided in immersion VR systems | |
CN109739353A (en) | A kind of virtual reality interactive system identified based on gesture, voice, Eye-controlling focus | |
US12020389B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
CN109343695A (en) | Exchange method and system based on visual human's behavioral standard | |
Malleson et al. | Rapid one-shot acquisition of dynamic VR avatars | |
CN106502382A (en) | Active exchange method and system for intelligent robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||