CN109213304A - Gesture interaction method and system for live broadcast teaching - Google Patents
- Publication number
- CN109213304A (application CN201710517913.4A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- scene
- teaching
- live broadcast
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a gesture interaction method and system for live broadcast teaching, relating to the technical field of live streaming. The method comprises: S1, acquiring gesture information of a user; S2, recognizing a preset gesture in the gesture information based on the gesture information and a gesture database; S3, acquiring an operation instruction corresponding to each client from a scene database based on the preset gesture and the operation scene of each client; and S4, respectively sending the operation instruction corresponding to each client to the corresponding client, so as to realize live broadcast teaching interaction. By recognizing the preset gesture in the user's gesture information and combining it with the operation scene of each client participating in the live broadcast, operation instructions are sent to each user's client, realizing interaction in live broadcast teaching without text or voice input. Users are not distracted from teaching or listening, and the sense of participation in live broadcast teaching is enhanced without harming the teaching effect.
Description
Technical Field
Embodiments of the present invention relate to the technical field of live broadcasting, and in particular to a gesture interaction method and system for live broadcast teaching.
Background
China's existing education system is vast, but high-quality educational resources are scarce. Owing to the lack of good teachers and resources, the quality of education in many areas, especially remote areas, cannot compare with that of developed areas, which to a great extent makes the distribution of educational resources unfair. With the continuous development of technology, Internet applications have gradually merged into the field of teaching, and live webcast teaching has considerably alleviated the above problems.
An existing webcast teaching system is a server system that shares the live video stream sent by an anchor client with a plurality of viewer clients for watching. The live broadcast system provides a plurality of live broadcast rooms; after a viewer client enters a room, it can watch the live video stream sent by the anchor client of that room. Within a live broadcast room, interaction between the viewer clients and the anchor client mainly takes the form of comment messages posted as text or voice. A viewer client sends comment information to the server, the server forwards the comment to the anchor client and the other viewer clients in the same room, and these clients receive and display the comment and reply to it when needed.
However, in the prior art this interaction is carried out through text or voice, and text or voice input usually requires external equipment. A user who sends a text or voice message is therefore distracted during live broadcast teaching, which affects the teaching effect to a certain extent.
Disclosure of Invention
Embodiments of the present invention provide a gesture interaction method and system for live teaching, which overcome the above problems or at least partially solve the above problems.
In one aspect, an embodiment of the present invention provides a gesture interaction method for live teaching, where the method includes:
S1, acquiring gesture information of a user;
S2, recognizing a preset gesture in the gesture information based on the gesture information and a gesture database;
S3, acquiring operation instructions corresponding to the clients from a scene database based on the preset gesture and the operation scenes of the clients;
and S4, respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
Wherein the gesture information at least comprises a hand type and a hand motion state.
The gesture database stores a first corresponding relation between the gesture information and the preset gesture.
The operation scenes at least comprise a scene in which teaching has not started, a scene in which teaching is in progress and must not be interrupted, a scene in which interaction takes place while teaching is in progress, and a scene in which questions are asked after teaching has finished.
The scene database stores a second corresponding relation between the preset gesture and the operation instruction in each operation scene.
Wherein, the step S1 specifically includes:
acquiring the gesture information based on a gesture image shot by a camera; or
acquiring the gesture information based on hand state information sent by a wearable device.
Wherein, after the step S2 and before the step S3, the method further comprises:
performing stack sorting on the plurality of recognized preset gestures according to the time sequence in which they were received.
Wherein, step S3 specifically includes:
identifying the operation scene of each client and acquiring a scene database corresponding to each client;
and respectively searching the corresponding operation instruction of each client in the scene database based on the preset gesture.
In another aspect, an embodiment of the present invention provides a gesture interaction system for live teaching, where the system includes:
the first acquisition module is used for acquiring gesture information of a user;
the gesture recognition module is used for recognizing a preset gesture in the gesture information based on the gesture information and a gesture database;
the second acquisition module is used for acquiring operation instructions corresponding to the clients from the scene database based on the preset gestures and the operation scenes of the clients;
and the instruction execution module is used for respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
Wherein the first obtaining module further comprises:
the active acquisition module is used for acquiring the gesture information based on a gesture image shot by a camera; or
the passive acquisition module is used for acquiring the gesture information based on hand state information sent by a wearable device.
According to the gesture interaction method and system for live broadcast teaching provided by the embodiments of the present invention, the preset gesture in a user's gesture information is recognized, and operation instructions are sent to each user's client in combination with the operation scene of each client participating in the live broadcast, thereby realizing interaction in live broadcast teaching. No text or voice input is needed, so users are not distracted from teaching or listening, and the sense of participation in live broadcast teaching is enhanced without harming the teaching effect.
Drawings
Fig. 1 is a flowchart of a gesture interaction method for live teaching according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a hand-lifting gesture recognition according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the specific process of obtaining the gesture information based on the gesture image according to the embodiment of the present invention;
FIG. 4 is a detailed flowchart of step S3 in the embodiment of FIG. 1;
fig. 5 is a block diagram of a gesture interaction system for live teaching according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a gesture interaction method for live teaching according to an embodiment of the present invention, and as shown in fig. 1, the method includes: s1, acquiring gesture information of the user; s2, recognizing a preset gesture in the gesture information based on the gesture information and the gesture database; s3, acquiring operation instructions corresponding to the clients from a scene database based on the preset gestures and the operation scenes of the clients; and S4, respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
In live broadcast teaching, the users of the live teaching system log in through their respective clients, after which they can interact with one another to improve the effect of the teaching. The whole live broadcast teaching system generally comprises a lecturer client, a plurality of listener clients, and a master control room. The master control room coordinates and controls the whole live broadcast system. During interaction, a user can complete a specific interactive operation at each client by making a preset gesture.
Specifically, after receiving a user's gesture information, the master control room first compares the gesture information with the gesture database to recognize the preset gesture in it. The master control room then identifies the user type and operation scene of each client participating in the live broadcast, and acquires from the scene database the operation instruction corresponding to each such client based on the preset gesture, the user types, and the operation scenes. Finally, the corresponding operation instructions are sent to the clients participating in the live broadcast. Each client executes the corresponding operation according to its instruction; in a specific implementation, the executed operation may be displaying a corresponding animation or text on the client, playing a voice prompt, or the like.
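To make this flow concrete, the following minimal Python sketch mimics the two database lookups and the dispatch step. All names here (GESTURE_DB, SCENE_DB, Client, dispatch) and the example gestures, scenes, and instructions are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

# First correspondence: gesture information -> preset gesture (hypothetical entries)
GESTURE_DB = {
    ("open_palm", "rising"): "raise_hand",
    ("flat_palms", "striking"): "applaud",
}

# Second correspondence: (user type, operation scene, preset gesture) -> instruction
SCENE_DB = {
    ("listener", "interactive", "raise_hand"): "show_question_queue",
    ("lecturer", "interactive", "applaud"): "play_applause_animation",
}

@dataclass
class Client:
    user_type: str  # "lecturer" or "listener"
    scene: str      # current operation scene of this client

    def send(self, instruction):
        print(f"{self.user_type} client executes: {instruction}")

def dispatch(gesture_info, clients):
    """Recognize the preset gesture, then send each participating client the
    instruction matching its own user type and operation scene."""
    preset = GESTURE_DB.get(gesture_info)
    if preset is None:
        return  # not a preset gesture: ignore
    for client in clients:
        instruction = SCENE_DB.get((client.user_type, client.scene, preset))
        if instruction is not None:
            client.send(instruction)

# Example: a clapping gesture triggers an applause animation on the lecturer side.
dispatch(("flat_palms", "striking"),
         [Client("lecturer", "interactive"), Client("listener", "interactive")])
```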
According to the gesture interaction method for live broadcast teaching provided by the embodiment of the present invention, the preset gesture in a user's gesture information is recognized, and operation instructions are sent to each user's client in combination with the operation scene of each client participating in the live broadcast, thereby realizing interaction in live broadcast teaching. No text or voice input is needed, so users are not distracted from teaching or listening, and the sense of participation in live broadcast teaching is enhanced without harming the teaching effect.
In the above embodiments, the gesture information includes at least a hand shape and a hand motion state.
Specifically, the gesture information at least includes a hand type and a hand motion state, covering gestures such as raising a hand, resting the chin on a hand, clapping, snapping the fingers, a "hush" sign, a thumbs-up, a whisper sign, sign language, and the like.
In the above embodiment, the gesture database stores a first corresponding relationship between the gesture information and the preset gesture.
Specifically, after the gesture information is obtained, whether a preset gesture is included in the gesture information is identified through the first corresponding relationship.
For example, as shown in fig. 2, when identifying whether the gesture information is a hand-lifting gesture, the hand-lifting action may be defined as: the arm bends at the elbow so that the forearm is perpendicular both to the upper arm and to the ground.
Recognition works by counting the number of times the hand leaves a middle gesture zone, a region centered on the elbow and extended by a certain threshold. If the user does not complete the gesture within a certain period of time, recognition fails. The algorithm maintains its own state and notifies the user of the recognition result in the form of an event when recognition completes. Hand-lifting recognition monitors the gestures of multiple users and of both hands. The recognition algorithm computes on each newly generated frame of skeletal data and must therefore record the recognition state.
When the user's hand fails to meet the basic motion conditions of the hand-lifting gesture, for example when the hand is below the elbow, the Reset method may be invoked to reset the data used in gesture recognition.
The most basic structure of the gesture recognition class defines five constants: a middle-zone threshold, a gesture motion duration, the number of left and right movements of the gesture out of the middle zone, and left- and right-hand identification constants. These constants would normally be stored as configuration items in a configuration file; they are declared as constants here for simplicity. A WaveGestureTracker array holds the recognition results for both hands of each possible participant. When the hand-lifting gesture is detected, a GestureDetected event is triggered.
When the main program receives a new data frame, the Update method of WaveGesture is called. The method iterates over the skeletal data frame of each user and calls a TrackWave method to perform hand-lifting gesture recognition on the left and right hands. When the skeletal data is not in a tracking state, the gesture recognition state is reset.
The main logic of hand-lifting gesture recognition is the body of TrackWave. It verifies the conditions previously defined as making up the hand-lifting gesture and updates the state of the gesture recognition. The method recognizes a left- or right-hand gesture; the first condition is verifying whether the hand and elbow joints are in a tracked state. If the information for these two joints is unavailable, the tracking state is reset; otherwise, the next verification is performed.
If the gesture duration exceeds the threshold before the next step has been entered, the tracking data is reset on gesture-tracking timeout. The method next verifies whether the hand joint is above the elbow joint. If it is not, the hand-lifting gesture recognition fails or the recognition conditions are reset, depending on the current tracking state. If the hand joint is above the elbow joint on the Y-axis, the method goes on to determine the position of the hand relative to the elbow joint; the UpdatePosition method is invoked and the current position of the hand joint is passed in. After the hand joint positions have been updated, the method finally judges whether the defined number of repetitions has been met; if so, the hand-lifting gesture is recognized successfully and a GestureDetected event is triggered.
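The recognition logic described in the preceding paragraphs can be condensed into a small per-hand state machine. The Python sketch below is an illustration only: the joint format, the constant values, and the names HandLiftTracker, track, and reset stand in for the WaveGesture, TrackWave, and Reset members mentioned above and are assumptions, not the patent's actual code.

```python
import time

MIDDLE_ZONE = 0.05      # assumed half-width of the middle zone around the elbow (meters)
MAX_DURATION = 1.5      # assumed seconds allowed to complete the gesture
REQUIRED_CROSSINGS = 4  # assumed times the hand must leave the middle zone

class HandLiftTracker:
    def __init__(self, on_detected):
        self.on_detected = on_detected  # GestureDetected-style event callback
        self.reset()

    def reset(self):
        """Clear tracking data, e.g. when the hand drops below the elbow."""
        self.start_time = None
        self.crossings = 0
        self.last_side = 0  # -1 left of zone, 0 inside zone, +1 right of zone

    def track(self, hand, elbow, tracked):
        """Process one frame of skeletal data for one hand.
        `hand` and `elbow` are (x, y) joint positions; `tracked` says
        whether both joints are in a tracking state."""
        if not tracked:              # joint information unavailable: reset
            self.reset()
            return
        if hand[1] <= elbow[1]:      # hand not above elbow on the Y-axis
            self.reset()
            return
        now = time.monotonic()
        if self.start_time is None:
            self.start_time = now
        elif now - self.start_time > MAX_DURATION:
            self.reset()             # gesture-tracking timeout
            return
        dx = hand[0] - elbow[0]
        side = 0 if abs(dx) <= MIDDLE_ZONE else (1 if dx > 0 else -1)
        if side != 0 and side != self.last_side:
            self.crossings += 1      # hand left the middle zone again
        self.last_side = side
        if self.crossings >= REQUIRED_CROSSINGS:
            self.on_detected()       # recognition succeeded: fire the event
            self.reset()
```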
In the above embodiments, the operation scenes at least include a scene where a lecture is not started, a scene where a lecture is not interruptible, a scene where an interactive lecture is ongoing, and a scene where a question is asked after a lecture is finished.
Specifically, the operation scenes are determined by the content of the live teaching session. The duration of each scene can be preset before the live teaching begins, and the lecturer can also switch the operation scene on the fly during the session.
Furthermore, each operation scene corresponds to a scene container. The scene container is a stack structure holding the running tasks of a number of related application programs: within the same operation scene, the application that was operated first is gradually pushed toward the bottom of the stack by later applications of the same kind, and the top of the stack is the most recently operated application, i.e., the one currently visible and operable by the user in this scene. The scene container is responsible for interpreting the gesture information fed into it, translating it into the unique operation command for this operation scene, and handing the command to the relevant application in the scene, which is generally the running task at the top of the stack. When an application is closed, it is also deleted from its scene container, so the applications held in a scene container are always the currently running ones.
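A minimal sketch of such a scene container follows. The class and method names (SceneContainer, handle_gesture) and the idea of representing applications as objects with an execute method are assumptions for illustration, not the patent's implementation.

```python
class SceneContainer:
    """Holds the running tasks of one operation scene as a stack; the top of
    the stack is the application currently visible and operable by the user."""
    def __init__(self, scene_name, command_table):
        self.scene_name = scene_name
        self.command_table = command_table  # preset gesture -> operation command
        self.app_stack = []

    def launch(self, app):
        self.app_stack.append(app)  # earlier apps are pushed toward the bottom

    def close(self, app):
        if app in self.app_stack:
            self.app_stack.remove(app)  # closed apps leave the container

    def handle_gesture(self, preset_gesture):
        """Translate the gesture into this scene's unique operation command
        and hand it to the running task at the top of the stack."""
        command = self.command_table.get(preset_gesture)
        if command is not None and self.app_stack:
            self.app_stack[-1].execute(command)
```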
In the above embodiment, the scene database stores a second corresponding relationship between the preset gesture and the operation instruction in each operation scene.
Specifically, each operation scene corresponds to one scene database, so once the operation scene of a client is determined, the scene database corresponding to that client in that scene can be determined. The corresponding operation instruction is then queried in that scene database using the recognized preset gesture.
In the foregoing embodiment, the step S1 specifically includes:
acquiring the gesture information based on a gesture image shot by a camera; or
acquiring the gesture information based on hand state information sent by a wearable device.
Specifically, as shown in fig. 3, the motion of the palm in real-world three-dimensional space is reconstructed from pictures captured at different angles by a 3D camera, or by a video camera equipped with a gesture-tracking sensor. The camera recognizes and tracks real-time images of the hand motions of the lecturer or listeners; after simple preprocessing of the gesture images, the control system in the master control room segments the hand image from each picture and obtains the gesture information from the hand image.
Alternatively, hand state information sent by a wearable device (a smart watch, smart bracelet, or smart ring) is received through a wireless receiving device, and the gesture information is obtained from the hand state information.
Furthermore, two or more cameras can capture images simultaneously, and depth information is calculated by comparing the differences between the images obtained by the different cameras at the same moment, thereby realizing multi-angle three-dimensional imaging.
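The patent does not give the depth formula, but the standard relation for a rectified two-camera rig is Z = f·B/d: depth equals focal length times baseline divided by disparity. A small sketch under that assumption, with focal length in pixels and baseline in meters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard rectified-stereo triangulation: depth Z = f * B / d.
    focal_px: camera focal length in pixels; baseline_m: distance between
    the two cameras; disparity_px: horizontal pixel shift of the same
    point between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("point not visible in both images")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline 10 cm, disparity 35 px -> depth of 2.0 m
print(depth_from_disparity(700, 0.10, 35))  # 2.0
```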
In the above embodiment, after step S2 and before step S3, further comprising:
performing stack sorting on the plurality of recognized preset gestures according to the time sequence in which they were received.
Specifically, the recognized preset gestures are drawn from a limited, predefined set of gesture information, and they may be issued by the same user or by multiple users. After the preset gestures in the gesture information have been recognized, the preset gestures are stack-sorted according to the time at which each user issued them.
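The description calls this step "stack sorting" without fixing a data structure; one plausible reading, sketched below, keeps recognized gestures in a time-ordered priority queue so that step S3 can take them in the order users issued them. The class name and tuple layout are assumptions.

```python
import heapq

class GestureQueue:
    """Orders recognized preset gestures by the time each user issued them."""
    def __init__(self):
        self._heap = []

    def push(self, timestamp, user_id, preset_gesture):
        # Earlier timestamps sort first, so the earliest gesture is always on top.
        heapq.heappush(self._heap, (timestamp, user_id, preset_gesture))

    def pop_earliest(self):
        """Return the earliest pending gesture for processing in step S3."""
        timestamp, user_id, preset_gesture = heapq.heappop(self._heap)
        return user_id, preset_gesture
```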
In the foregoing embodiment, as shown in fig. 4, step S3 specifically includes: and S31, identifying the operation scene of each client, and acquiring a scene database corresponding to each client. And S32, respectively searching the corresponding operation instruction of each client in the scene database based on the preset gesture.
Wherein, each operation scene of each client participating in the live broadcast corresponds to one scene database, so the number of scene databases equals the number of combinations of client types and operation scenes. The scene database corresponding to each operation scene stores the second corresponding relationship between the preset gestures and the operation instructions in that scene.
In step S31, the user type and the operation scene of each client participating in the live broadcast are identified. Since each operation scene under each user type corresponds to one scene database, the corresponding scene database can be obtained once the user types and operation scenes have been identified.
In step S32, because the scene database corresponding to each operation scene stores the second corresponding relationship between the preset gestures and the operation instructions in that scene, the corresponding operation instruction can be queried in the scene database once the preset gesture is known. The queried operation instruction is sent to each client, and each client executes the corresponding operation according to the instruction.
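Steps S31 and S32 can be read as a two-level lookup: first select the scene database keyed by (user type, operation scene), then look up the preset gesture in that database. A minimal sketch under that assumption — all table contents and names are illustrative:

```python
from collections import namedtuple

Client = namedtuple("Client", ["user_type", "scene"])

# One scene database per (user type, operation scene) combination (hypothetical).
SCENE_DATABASES = {
    ("listener", "interactive"): {"raise_hand": "show_question_queue"},
    ("listener", "after_class_qa"): {"raise_hand": "open_microphone"},
    ("lecturer", "interactive"): {"applaud": "play_applause_animation"},
}

def instruction_for(client, preset_gesture):
    db = SCENE_DATABASES.get((client.user_type, client.scene))  # step S31
    if db is None:
        return None
    return db.get(preset_gesture)                               # step S32

# Example lookup:
print(instruction_for(Client("listener", "interactive"), "raise_hand"))
# -> show_question_queue
```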
Fig. 5 is a gesture interaction system for live teaching according to an embodiment of the present invention, and as shown in fig. 5, the system includes a first obtaining module 1, a gesture recognition module 2, a second obtaining module 3, and an instruction execution module 4. Wherein,
the first obtaining module 1 is used for obtaining gesture information of a user. The gesture recognition module 2 is configured to recognize a preset gesture in the gesture information based on the gesture information and the gesture database. The second obtaining module 3 is configured to obtain an operation instruction corresponding to each client from the scene database based on the preset gesture and the operation scene of each client. And the instruction execution module 4 is used for respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
Specifically, after receiving a user's gesture information, the master control room first compares the gesture information with the gesture database to recognize the preset gesture in it. The master control room then identifies the user type and operation scene of each client participating in the live broadcast, and acquires from the scene database the operation instruction corresponding to each such client based on the preset gesture, the user types, and the operation scenes. Finally, the corresponding operation instructions are sent to the clients participating in the live broadcast. Each client executes the corresponding operation according to its instruction; in a specific implementation, the executed operation may be displaying a corresponding animation or text on the client, playing a voice prompt, or the like.
According to the gesture interaction system for live broadcast teaching provided by the embodiment of the present invention, the preset gesture in a user's gesture information is recognized, and operation instructions are sent to each user's client in combination with the operation scene of each client participating in the live broadcast, thereby realizing interaction in live broadcast teaching. No text or voice input is needed, so users are not distracted from teaching or listening, and the sense of participation in live broadcast teaching is enhanced without harming the teaching effect.
In the above embodiment, the first obtaining module further includes an active obtaining module and a passive obtaining module. Wherein:
The active acquisition module is used for acquiring the gesture information based on a gesture image shot by a camera; alternatively, the passive acquisition module is used for acquiring the gesture information based on hand state information sent by a wearable device.
Specifically, the active acquisition module reconstructs the motion of the palm in real-world three-dimensional space from pictures captured at different angles by a 3D camera, or by a video camera equipped with a gesture-tracking sensor. The camera recognizes and tracks real-time images of the hand motions of the lecturer or listeners; after simple preprocessing of the gesture images, the control system in the master control room segments the hand image from each picture and obtains the gesture information, which at least includes a hand type and a hand motion state. Alternatively, the passive acquisition module receives hand state information sent by the wearable device through a wireless receiving device and obtains the gesture information from that information.
Furthermore, two or more cameras can capture images simultaneously, and depth information is calculated by comparing the differences between the images obtained by the different cameras at the same moment, thereby realizing multi-angle three-dimensional imaging.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A gesture interaction method for live teaching, comprising the following steps:
S1, acquiring gesture information of a user;
S2, recognizing a preset gesture in the gesture information based on the gesture information and a gesture database;
S3, acquiring operation instructions corresponding to the clients from a scene database based on the preset gesture and the operation scenes of the clients;
and S4, respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
2. The method of claim 1, wherein the gesture information comprises at least a hand type and a hand motion state.
3. The method according to claim 1, wherein the gesture database stores a first corresponding relationship between the gesture information and the preset gesture.
4. The method according to claim 1, wherein the operation scenes at least comprise a scene in which teaching has not started, a scene in which teaching is in progress and must not be interrupted, a scene in which interaction takes place while teaching is in progress, and a scene in which questions are asked after teaching has finished.
5. The method according to claim 1, wherein a second corresponding relationship between the preset gesture and the operation instruction in each operation scene is stored in the scene database.
6. The method according to any one of claims 1 to 5, wherein the step S1 specifically includes:
acquiring the gesture information based on a gesture image shot by a camera; or
acquiring the gesture information based on hand state information sent by a wearable device.
7. The method according to any one of claims 1-5, further comprising, after step S2 and before step S3:
performing stack sorting on the plurality of recognized preset gestures according to the time sequence in which they were received.
8. The method according to any one of claims 1 to 5, wherein step S3 specifically comprises:
identifying the operation scene of each client and acquiring a scene database corresponding to each client;
and respectively searching the corresponding operation instruction of each client in the scene database based on the preset gesture.
9. A gesture interaction system for live teaching, the system comprising:
the first acquisition module is used for acquiring gesture information of a user;
the gesture recognition module is used for recognizing a preset gesture in the gesture information based on the gesture information and a gesture database;
the second acquisition module is used for acquiring operation instructions corresponding to the clients from the scene database based on the preset gestures and the operation scenes of the clients;
and the instruction execution module is used for respectively sending the operation instructions corresponding to the clients to the corresponding clients so as to realize live broadcast teaching interaction.
10. The system of claim 9, wherein the first obtaining module further comprises:
the active acquisition module is used for acquiring the gesture information based on a gesture image shot by a camera; or
the passive acquisition module is used for acquiring the gesture information based on hand state information sent by a wearable device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710517913.4A CN109213304A (en) | 2017-06-29 | 2017-06-29 | Gesture interaction method and system for live broadcast teaching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710517913.4A CN109213304A (en) | 2017-06-29 | 2017-06-29 | Gesture interaction method and system for live broadcast teaching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109213304A (en) | 2019-01-15 |
Family
ID=64960812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710517913.4A Pending CN109213304A (en) | 2017-06-29 | 2017-06-29 | Gesture interaction method and system for live broadcast teaching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109213304A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8898243B2 (en) * | 2013-04-08 | 2014-11-25 | Jani Turkia | Device relay control system and method |
CN104539436A (en) * | 2014-12-22 | 2015-04-22 | 杭州施强网络科技有限公司 | Lesson content real-time live broadcasting method and system |
CN105989753A (en) * | 2015-01-29 | 2016-10-05 | 宁波状元郎电子科技有限公司 | Cloud computing-based intelligent interaction teaching system |
CN105657024A (en) * | 2016-01-21 | 2016-06-08 | 北京师科阳光信息技术有限公司 | Online information interaction method |
CN106227350A (en) * | 2016-07-28 | 2016-12-14 | 青岛海信电器股份有限公司 | Method and the smart machine that operation controls is carried out based on gesture |
CN106774894A (en) * | 2016-12-16 | 2017-05-31 | 重庆大学 | Interactive teaching methods and interactive system based on gesture |
Non-Patent Citations (1)
Title |
---|
简琤峰; 陈嘉诚; 任炜斌; 张美玉: "Research and Implementation of a Cloud Blackboard Teaching Platform Supporting Gesture Recognition" (《支持手势识别的云黑板教学平台研究与实现》), Modern Educational Technology (《现代教育技术》) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI760635B (en) * | 2019-05-17 | 2022-04-11 | 麥奇數位股份有限公司 | Remote real-time multimedia teaching method, device and system, electronic equipment and computer-readable recording medium |
CN110390898A (en) * | 2019-06-27 | 2019-10-29 | 安徽国耀通信科技有限公司 | A kind of indoor and outdoor full-color screen display control program |
CN110442240A (en) * | 2019-08-09 | 2019-11-12 | 杭州学两手网络科技有限公司 | A kind of teaching interaction system based on gesture identification |
CN110602516A (en) * | 2019-09-16 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Information interaction method and device based on live video and electronic equipment |
CN113391703A (en) * | 2021-06-16 | 2021-09-14 | 咙咙信息技术(沈阳)有限公司 | System for operating air writing based on media application |
CN113453032A (en) * | 2021-06-28 | 2021-09-28 | 广州虎牙科技有限公司 | Gesture interaction method, device, system, server and storage medium |
CN113453032B (en) * | 2021-06-28 | 2022-09-30 | 广州虎牙科技有限公司 | Gesture interaction method, device, system, server and storage medium |
CN113784046A (en) * | 2021-08-31 | 2021-12-10 | 北京安博盛赢教育科技有限责任公司 | Follow-up shooting method, device, medium and electronic equipment |
CN116266872A (en) * | 2021-12-17 | 2023-06-20 | 成都拟合未来科技有限公司 | Body-building live broadcast interaction method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109213304A (en) | Gesture interaction method and system for live broadcast teaching | |
CN111935491B (en) | Live broadcast special effect processing method and device and server | |
CN108986189B (en) | Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation | |
CN110703913B (en) | Object interaction method and device, storage medium and electronic device | |
CN114097248B (en) | Video stream processing method, device, equipment and medium | |
CN110298220B (en) | Action video live broadcast method, system, electronic equipment and storage medium | |
CN109982054B (en) | Projection method and device based on positioning tracking, projector and projection system | |
US10617945B1 (en) | Game video analysis and information system | |
CN111274910A (en) | Scene interaction method and device and electronic equipment | |
CN110472099B (en) | Interactive video generation method and device and storage medium | |
CN107895161B (en) | Real-time attitude identification method and device based on video data and computing equipment | |
CN106572359A (en) | Method and device for synchronously playing panoramic video on multiple terminals | |
CN109739353A (en) | A kind of virtual reality interactive system identified based on gesture, voice, Eye-controlling focus | |
CN113840177B (en) | Live interaction method and device, storage medium and electronic equipment | |
CN109240492A (en) | The method for controlling studio packaging and comment system by gesture identification | |
CN115985461A (en) | Rehabilitation training system based on virtual reality | |
CN103959805B (en) | A kind of method and apparatus of display image | |
CN113515187B (en) | Virtual reality scene generation method and network side equipment | |
CN111757140B (en) | Teaching method and device based on live classroom | |
CN113784059A (en) | Video generation and splicing method, equipment and storage medium for clothing production | |
CN117544808A (en) | Device control method, storage medium, and electronic device | |
CN114425162A (en) | Video processing method and related device | |
JP2009519539A (en) | Method and system for creating event data and making it serviceable | |
CN112423035A (en) | Method for automatically extracting visual attention points of user when watching panoramic video in VR head display | |
CN108563328B (en) | Method for selecting cartoon character based on children demand and interaction system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190115 |