CN111857338A - Method suitable for using mobile application on large screen - Google Patents
- Publication number
- CN111857338A (application number CN202010674968.8A)
- Authority
- CN
- China
- Prior art keywords
- hand
- gesture
- processing unit
- instruction
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43632—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wired protocol, e.g. IEEE 1394
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43637—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
Abstract
A method suitable for using a mobile application on a large screen uses, as the learning carriers, a small mobile internet device installed with a learning-knowledge-point APP and a large-screen display. The internet device contains a video acquisition and processing unit, a gesture recognition unit, an image signal processing unit, a gesture action analysis unit and an instruction output unit. The video acquisition and processing unit collects hand-action data of the user; the gesture recognition unit processes the input data and outputs it to the image signal processing unit, which adjusts the color levels of the input image and improves its contrast; the gesture action analysis unit then compares the processed image against hand-image signal data stored in it in advance and, according to the matching image data, outputs a confirm-key click instruction or a page-turning-key click instruction to the learning APP through the instruction output unit. The invention removes the need to manually click the confirm key and the page-turning key, brings convenience to students' study, improves the learning effect and reduces the probability of developing myopia.
Description
Technical Field
The invention relates to the technical field of education-system applications, and in particular to a method suitable for using a mobile application on a large screen.
Background
As conveniently portable smart devices (such as mobile phones) have been widely adopted in the education field, a large number of interactive education APPs carrying learning knowledge points have been developed, effectively improving students' academic performance and receiving wide praise from students, parents and teachers. However, the screens of mobile phones and similar devices are small, so the student's field of view is limited during interactive study, which affects the learning effect to a certain extent.
With the development of science and technology, deep-learning methods based on artificial intelligence (AI), which simulate human-brain neural-network algorithms to realize various control and recognition functions, have been applied in many fields, greatly raising the intelligence level of controlled equipment and effectively improving production efficiency. In the prior art, however, AI technology has not yet been applied to interactive education equipment based on mobile-phone APPs, so no technical support exists for enlarging the student's field of view, improving the interactive learning effect and reducing the probability of myopia. The present method, based on AI technology, therefore allows students, especially in a home environment, to make effective use of the home television screen while using a mobile-phone APP for interactive learning, to conveniently control the content displayed on the phone interface, to obtain a better field of view, to achieve a good learning effect and to reduce the probability of developing myopia.
Disclosure of Invention
To overcome the problem that existing phone-based interactive education APPs limit the student's field of view through their small display screens and thereby affect the learning effect, and the defect that long-term close viewing harms students' eyesight, the invention provides an AI-based method. The application not only shows the video displayed by the mobile phone on a large home display (such as a television); through the combined action of the application units, a student using the APP can control, at close range and without contact, the two main option interfaces, the confirm-key cursor and the page-turning-key cursor in the learning software by gesture actions. The interactive learning process in the APP is thus controlled contactlessly, which brings convenience to students' study, improves the learning effect and reduces the probability of developing myopia, making the method suitable for mobile applications on a large screen.
The technical solution adopted by the invention to solve these technical problems is as follows:
A method suitable for using a mobile application on a large screen uses, as the learning carriers, a small mobile internet device installed with a learning-knowledge-point APP and a large-screen display. A video acquisition and processing unit, a gesture recognition unit, an image signal processing unit, a gesture action analysis unit and an instruction output unit are installed in the internet device; in use, the internet device is located in front of the user's body and the large-screen display at the far end. The video acquisition and processing unit acquires, through the camera of the internet device, hand-action data of the user comprising three main actions of making a fist, moving a fist and stretching out the palm; the acquired data is input to the gesture recognition unit. The gesture recognition unit processes the data from the video acquisition and processing unit and outputs the result to the image signal processing unit, which adjusts the color levels of the input image and improves its contrast before passing the processed image data to the gesture action analysis unit. The gesture action analysis unit identifies and compares the input image data against hand-image signal data stored in it in advance and, according to the matching image data, outputs an instruction through the instruction output unit. The instructions output by the instruction output unit cover the two main option interfaces in the learning APP, namely a confirm-key click instruction and a page-turning-key click instruction; after the instruction output unit outputs the confirm-key click instruction or the page-turning-key click instruction, the learning APP respectively displays the current page maximized or turns to display the next page.
Further, a clenched-fist action of the hand causes the instruction output unit to output the confirm-key click instruction in the learning APP interface, a moving clenched fist causes it to output the page-turning-key click instruction, and an outstretched palm outputs no instruction to the learning APP.
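The gesture-to-command mapping described above can be sketched as a simple lookup. The gesture labels and instruction names below are illustrative placeholders, not identifiers from the patent:

```python
def gesture_to_instruction(gesture):
    """Map a recognized gesture label to a learning-APP instruction.

    "fist" (clenched fist) triggers the confirm-key click,
    "fist_move" (moving clenched fist) triggers the page-turning-key
    click, and "palm" (outstretched palm) deliberately maps to no
    instruction, acting as a buffer between the two commands.
    """
    mapping = {
        "fist": "confirm_key_click",
        "fist_move": "page_turn_key_click",
    }
    return mapping.get(gesture)  # None for "palm" or unknown gestures
```

Keeping the open palm out of the mapping, rather than giving it a third command, is what lets the user rest between operations without triggering anything.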
Further, in the application of the video acquisition and processing unit, the camera of the mobile phone is opened automatically to acquire the video signal, hand picture information is displayed on the phone's small-screen interface, the hand position is detected and a hand confidence value is given. When the confidence exceeds a set threshold, the acquired hand-action video information is output to the gesture recognition unit; otherwise acquisition is repeated. This prevents images that are blurred by fast hand movement, and therefore cannot be effectively recognized by the gesture recognition unit, from being passed on.
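A minimal sketch of this confidence gate, assuming the detector reports a score in [0, 1] and using the 0.8 threshold given later in the detailed description; the (image, confidence) frame format is an assumption:

```python
def gate_frames(frames, threshold=0.8):
    """Keep only the frames whose hand-detection confidence clears the
    threshold; lower-confidence frames (e.g. blurred by fast hand
    movement) are dropped, so acquisition is effectively repeated.
    Each frame is an (image_data, confidence) pair."""
    return [image for image, confidence in frames if confidence > threshold]
```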
Further, in the application of the gesture recognition unit, if several gesture actions appear in the frame from top to bottom, only the gesture whose highest point is uppermost is selected and output to the image signal processing unit, which ensures the accuracy of the instruction.
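The topmost-gesture rule amounts to picking the detection whose top edge has the smallest vertical coordinate (image origin at the top-left, as is conventional). The detection tuple format here is an assumed one:

```python
def select_topmost_gesture(detections):
    """From several simultaneous gesture detections, keep only the one
    whose highest point is uppermost in the frame, i.e. the smallest
    top-edge y coordinate. Each detection is a (label, top_y) pair."""
    if not detections:
        return None
    return min(detections, key=lambda d: d[1])[0]
```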
Further, the image signal processing unit adjusts the color levels of the input image and improves its contrast, which ensures the reliability of the machine-learning work in the gesture action analysis unit.
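The patent does not specify the algorithm; one common form of color-level adjustment that raises contrast is a linear stretch of an input gray range to the full 8-bit output range. The black/white points below are illustrative:

```python
def adjust_levels(pixels, black=30, white=225):
    """Stretch gray values in [black, white] to the full [0, 255]
    range, clipping values that fall outside it. `pixels` is a flat
    list of 8-bit gray values; the stretch increases contrast in the
    mid-range where the hand typically lies."""
    scale = 255.0 / (white - black)
    return [min(255, max(0, round((p - black) * scale))) for p in pixels]
```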
Further, before the gesture action analysis unit is applied: first, pre-collected pictures of a person's hand making a fist, moving a fist and stretching out the palm are annotated through the marking module of the gesture action analysis unit, based on artificial-intelligence deep-learning technology, and the data is divided into a training set and a test set in a certain proportion; second, a suitable deep-learning model is designed for the characteristics of these pictures; third, the model is trained with the annotated picture data of the training set; fourth, the resulting model is used to recognize the three gestures in real scenes.
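The four-step workflow could be sketched as below. The model itself is left as a comment since the patent names no specific network, and the 80/20 split ratio is an assumption ("a certain proportion" in the text):

```python
import random

GESTURES = ("fist", "fist_move", "palm")  # the three annotated classes

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Step 1 (after annotation): shuffle the labeled gesture images
    and divide them into a training set and a test set at a fixed
    ratio, reproducibly via the seed."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Steps 2-4 (design a model suited to the gesture pictures, train it
# on the training set, then recognize the three gestures in real
# scenes) would use a deep-learning framework and are omitted here.
```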
The invention has the following beneficial effects. Based on AI technology, the video displayed by the mobile phone is shown on a large home display (such as a television), and through the combined action of the application units, a student using the APP, standing near the phone and at a distance from the large screen, can contactlessly control the two main option interfaces, the confirm-key cursor and the page-turning-key cursor in the learning software by gesture actions: after the fist-making action is processed by the corresponding units, the instruction output unit outputs the confirm-key click instruction in the learning APP interface, and after the fist-moving action is processed, it outputs the page-turning-key click instruction. Users (students) can thus control the interactive learning process in the APP effectively and contactlessly (without manually clicking the confirm key and the page-turning key), which brings convenience to students' study, improves the learning effect and reduces the probability of developing myopia. The invention therefore has a good application prospect.
Drawings
The invention is further illustrated below with reference to the figures and examples.
FIG. 1 is a block diagram of the software unit architecture used by the present invention.
Detailed Description
As shown in FIG. 1, a method suitable for using mobile applications on a large screen uses as learning carriers a small mobile internet device installed with a learning-knowledge-point APP (in this embodiment, a mobile phone) and a larger display (in this embodiment, a home liquid-crystal television). The video content displayed by the small mobile internet device is transmitted to the display by wired or wireless screen projection, and the display synchronously shows the content of the device's screen. The small mobile internet device contains a video acquisition and processing unit, a gesture recognition unit, an image signal processing unit, a gesture action analysis unit and an instruction output unit; in use, the device is located in front of the user's body so that the user's hand can conveniently approach the front of its camera. The video acquisition and processing unit acquires, through the camera, hand-action data of the user comprising three main actions of making a fist, moving a fist and stretching out the palm; the acquired data is input to the gesture recognition unit. The gesture recognition unit processes the data and outputs it to the image signal processing unit, which adjusts the color levels of the input image and improves its contrast before passing the processed image data to the gesture action analysis unit. The gesture action analysis unit identifies and compares the input image data against hand-image signal data stored in it in advance and outputs an instruction through the instruction output unit according to the matching image data. The instructions cover the two main option interfaces in the learning APP, namely a confirmation-key click instruction and a page-turning-key click instruction; after these are output, the learning APP respectively displays the current page maximized or turns to display the next page.
As shown in FIG. 1, after the fist-making action is processed jointly by the video acquisition and processing unit, the gesture recognition unit, the image signal processing unit and the gesture action analysis unit, the instruction output unit outputs the confirm-key click instruction in the learning APP interface; after the fist-moving action is processed by the same units, the instruction output unit outputs the page-turning-key click instruction; an outstretched palm outputs no instruction to the learning APP and serves as a buffer between the confirm-key and page-turning-key click instructions. In the application of the video acquisition and processing unit, the camera of the mobile phone is opened automatically to acquire the video signal, hand picture information is displayed on the phone's small-screen interface, the hand position is detected and a hand confidence value is given (the confidence expresses a probability: taking a correct hand action as 1, a confidence above 0.8 is qualified and one below it is unqualified). When the confidence exceeds this threshold, the acquired hand-action video information is output to the gesture recognition unit; otherwise the small mobile internet device acquires the user's hand-action data through the camera again, since an image blurred by fast hand movement cannot be effectively recognized by the gesture recognition unit.
In the application of the gesture recognition unit, if several gesture actions appear in the frame from top to bottom, only the gesture whose highest point is uppermost is selected and output to the image signal processing unit, which guarantees the accuracy of the instruction. The image signal processing unit adjusts the color levels of the input image and improves its contrast, ensuring the reliability of the machine-learning work in the gesture action analysis unit. Before the gesture action analysis unit is applied (this is actually part of the software development process; the end user does not perform the following operations and uses the application directly): first, pre-collected pictures of a person's hand making a fist, moving a fist and stretching out the palm are annotated through the marking module of the gesture action analysis unit, based on artificial-intelligence deep-learning technology, and the data is divided into a training set and a test set in a certain proportion; second, a suitable deep-learning model is designed for the characteristics of these pictures; third, the model is trained with the annotated picture data of the training set; fourth, the resulting model is used to recognize the three gestures in real scenes.
As shown in FIG. 1, before the invention is used, the video content displayed by the small mobile internet device is transmitted to the display by wired or wireless screen projection (in the prior art, smartphones have a screen-projection function and home liquid-crystal televisions can receive the projected video), so that the learning content displayed by the phone APP, together with the confirm key and page-turning key of each page of the display interface, is shown synchronously on the screen; the user (student) then places the phone on a table at a distance from the display. Before the user controls the corresponding APP interface, the hand-video acquisition range of the video acquisition and processing unit needs to be calibrated: before the unit displays hand picture information on the phone's small-screen interface, the user stretches the arms left and right and the left-right span is measured; 2.5 times this span is taken as the width of the captured image and 1.5 times the up-down span as its height, and gestures are then recognized within this range in front of the phone's camera.
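The calibration arithmetic above is simple; a sketch using the 2.5x and 1.5x multipliers from the text (the units are whatever the spans are measured in):

```python
def capture_region(lr_span, ud_span):
    """Derive the gesture-capture region from the user's measured
    left-right arm span and up-down span: the captured image is
    2.5 x the left-right span wide and 1.5 x the up-down span high,
    per the calibration step described in the text."""
    return 2.5 * lr_span, 1.5 * ud_span
```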
When the user needs to open the current learning content shown on the liquid-crystal display, the hand makes a fist; after this action is processed jointly by the video acquisition and processing unit, the gesture recognition unit, the image signal processing unit and the gesture action analysis unit, the instruction output unit outputs the confirm-key click instruction in the learning APP interface. The confirm key is thus clicked contactlessly, and the learning content of the corresponding phone APP interface is displayed simultaneously on the phone interface and the large-screen display, so the user can watch the learning content on the large screen. When the user needs to turn the page of the learning content shown on the display, the clenched fist moves to the left; after this action is processed by the same units, the instruction output unit outputs the page-turning-key click instruction in the learning APP interface. The page-turning key is thus clicked contactlessly, the corresponding phone APP interface turns the page, and the user can watch the selected next page of learning content.
After the corresponding phone-APP interface has been operated contactlessly, that is, after a page turn or a confirm-key click, the hand stretches out the palm; the instruction output unit then outputs no instruction to the learning APP, and the user can move the hand out of the image acquisition range of the video acquisition and processing unit, which stops collecting hand-action information. Before the next operation of the APP interface, the hand approaches the image acquisition range again, the video acquisition and processing unit resumes collecting hand-action information, and different control instructions are output to the phone APP interface. The user (student) can thus control the interactive learning process in the APP contactlessly and at a relatively long distance from the phone and the display screen (without touching the phone's display interface, which is more convenient and more intelligent), which brings convenience to students' study, improves the learning effect and reduces the probability of developing myopia.
While there have been shown and described what are at present considered the fundamental principles and essential features of the invention and its advantages, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, but is capable of other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although the present description refers to embodiments, the embodiments do not include only one independent technical solution, and such description is only for clarity, and those skilled in the art should take the description as a whole, and the technical solutions in the embodiments may be appropriately combined to form other embodiments that can be understood by those skilled in the art.
Claims (6)
1. A method suitable for using a mobile application on a large screen, using as learning carriers a small mobile internet device installed with a learning-knowledge-point APP and a large-screen display, characterized in that a video acquisition and processing unit, a gesture recognition unit, an image signal processing unit, a gesture action analysis unit and an instruction output unit are installed in the internet device; in use, the internet device is located in front of the user's body and the large-screen display at the far end; in the application of the video acquisition and processing unit, hand-action data of the user, acquired by the internet device through the camera and comprising three main actions of making a fist, moving a fist and stretching out the palm, is collected and input to the gesture recognition unit; the gesture recognition unit processes the data input by the video acquisition and processing unit and outputs the result to the image signal processing unit; the image signal processing unit adjusts the color levels of the input image and improves its contrast, and then inputs the processed image data to the gesture action analysis unit; the gesture action analysis unit identifies and compares the input image data against hand-image signal data stored in it in advance and outputs an instruction through the instruction output unit according to the matching image data; the instructions output by the instruction output unit cover the two main option interfaces in the learning APP, namely a confirm-key click instruction and a page-turning-key click instruction, and after the instruction output unit outputs the confirm-key click instruction or the page-turning-key click instruction, the learning APP respectively displays the current page maximized or turns to display the next page.
2. The method for using a mobile application on a large screen according to claim 1, wherein for the fist-making action the instruction output unit outputs the confirm-key click instruction in the learning APP interface, for the fist-moving action it outputs the page-turning-key click instruction, and the outstretched-palm action outputs no instruction to the learning APP.
3. The method for using a mobile application on a large screen according to claim 1, characterized in that, in the application of the video acquisition and processing unit, the camera of the mobile phone is turned on automatically to acquire the video signal, the hand picture is displayed on the phone's small-screen display interface, the hand position is detected and a hand confidence value is given; when the confidence is higher than a set value, the acquired hand motion video is output to the gesture recognition unit, otherwise acquisition is repeated; this prevents images blurred by fast hand movement, which the gesture recognition unit cannot recognize effectively, from being passed on.
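The confidence gate of claim 3 amounts to a filter over captured frames. A minimal sketch, assuming a detector that returns a hand position plus a confidence score; the 0.8 threshold stands in for the claim's unspecified "certain value":

```python
CONFIDENCE_THRESHOLD = 0.8  # the claim's "certain value"; 0.8 is illustrative

def first_confident_frame(frames, detect, threshold=CONFIDENCE_THRESHOLD):
    """Return (frame, position, confidence) for the first frame whose hand
    detection clears the threshold. Lower-confidence frames (for example,
    ones blurred by fast hand movement) are discarded and acquisition
    simply continues with the next frame."""
    for frame in frames:
        position, confidence = detect(frame)
        if confidence >= threshold:
            return frame, position, confidence
    return None  # no usable frame in this batch; keep capturing
```

The retry behavior in the claim corresponds here to iterating until a frame clears the threshold instead of forwarding the first frame unconditionally.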
4. The method for using a mobile application on a large screen according to claim 1, characterized in that if multiple gestures appear in the image received by the gesture recognition unit, the gesture located highest in the image (scanning from top to bottom) is selected and output to the image signal processing unit, ensuring the accuracy of the instruction.
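The selection rule of claim 4 is a minimum over vertical positions, since in image coordinates smaller y means higher on screen. The data layout below is an assumption for illustration:

```python
def select_topmost_hand(hands):
    """hands: list of dicts with 'top_y', the pixel row of each hand's
    highest point (smaller y = higher in the image, as image coordinates
    grow downward). Returns the single hand to forward to the image
    signal processing unit, or None if no hand was detected."""
    if not hands:
        return None
    return min(hands, key=lambda h: h["top_y"])
```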
5. The method for using a mobile application on a large screen according to claim 1, characterized in that the image signal processing unit performs level adjustment on the input image to improve its contrast, ensuring the reliability of the machine learning operation in the gesture action analysis unit.
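The patent does not specify the level-adjustment algorithm; a common form is a linear contrast stretch that remaps a chosen input range onto the full 8-bit output range. The sketch below works on a flat list of grayscale values for clarity:

```python
def stretch_levels(pixels, low=None, high=None):
    """Linearly remap grayscale values so [low, high] spans 0..255,
    clipping values outside that range. When low/high are omitted, the
    observed minimum and maximum are used (full-range stretch)."""
    lo = min(pixels) if low is None else low
    hi = max(pixels) if high is None else high
    if hi <= lo:
        # Degenerate range: the image is flat, nothing to stretch.
        return [0] * len(pixels)
    return [round(255 * (min(max(p, lo), hi) - lo) / (hi - lo))
            for p in pixels]
```

A low-contrast strip such as `[50, 100, 150]` stretches to `[0, 128, 255]`, which is the contrast boost the claim relies on for reliable recognition.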
6. The method for using a mobile application on a large screen according to claim 1, characterized in that the gesture action analysis unit is prepared as follows before use. Step 1: based on artificial intelligence deep learning technology, the features of pre-collected pictures of a person making a fist, moving a fist and stretching the palm open are annotated through the labeling module of the gesture action analysis unit, and the data are divided into a training set and a test set at a set ratio. Step 2: based on artificial intelligence deep learning technology, a suitable deep learning model is designed for the features of the fist, moving-fist and open-palm pictures. Step 3: the model is trained with the annotated picture data of the training set. Step 4: the trained model is used to recognize the three gestures in real scenes.
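The four-step preparation above can be sketched as annotate, split, train, evaluate. This is only a scaffold under stated assumptions: the 80/20 split ratio is illustrative (the claim leaves the proportion open), and a trivial majority-class baseline stands in for the deep learning model of steps 2-3:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Step 1 (after annotation): shuffle (features, label) pairs and
    split them into training and test sets at the given ratio."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def train_majority_baseline(train_set):
    """Steps 2-3 stand-in: instead of designing and training a deep
    network, 'train' a trivial model that always predicts the most
    frequent label in the training set."""
    counts = {}
    for _features, label in train_set:
        counts[label] = counts.get(label, 0) + 1
    majority = max(counts, key=counts.get)
    return lambda _features: majority

def accuracy(model, test_set):
    """Step 4 stand-in: fraction of held-out samples the model gets right."""
    if not test_set:
        return 0.0
    return sum(model(x) == y for x, y in test_set) / len(test_set)
```

In practice steps 2-3 would be a convolutional classifier over the three gesture classes; the split/evaluate scaffolding around it stays the same.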
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010674968.8A CN111857338A (en) | 2020-07-14 | 2020-07-14 | Method suitable for using mobile application on large screen |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111857338A true CN111857338A (en) | 2020-10-30 |
Family
ID=72984181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010674968.8A Withdrawn CN111857338A (en) | 2020-07-14 | 2020-07-14 | Method suitable for using mobile application on large screen |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111857338A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11757951B2 (en) | 2021-05-28 | 2023-09-12 | Vizio, Inc. | System and method for configuring video watch parties with gesture-specific telemojis |
CN114020192A (en) * | 2021-09-18 | 2022-02-08 | 特斯联科技集团有限公司 | Interaction method and system for realizing non-metal plane based on curved surface capacitor |
CN114020192B (en) * | 2021-09-18 | 2024-04-02 | 特斯联科技集团有限公司 | Interaction method and system for realizing nonmetal plane based on curved surface capacitor |
US12126661B2 (en) | 2023-07-24 | 2024-10-22 | Vizio, Inc. | System and method for configuring video watch parties with gesture-specific telemojis |
Similar Documents
Publication | Title
---|---
CN106462242B (en) | Use the user interface control of eye tracking
JP2020102194A (en) | System, method and program for context based deep knowledge tracking
CN111796752B (en) | Interactive teaching system based on PC
McIntee | A Task Model of Free-Space Movement-Based Gestures
CN108939532B (en) | Autism rehabilitation training guiding game type man-machine interaction system and method
CN111857338A (en) | Method suitable for using mobile application on large screen
CN109240504A (en) | Control method, model training method, device and electronic equipment
CN112199015B (en) | Intelligent interaction all-in-one machine and writing method and device thereof
CN112286347A (en) | Eyesight protection method, device, storage medium and terminal
CN106409033A (en) | Remote teaching assisting system and remote teaching method and device for system
CN112286411A (en) | Display mode control method and device, storage medium and electronic equipment
CN114821753B (en) | Eye movement interaction system based on visual image information
CN107391015B (en) | Control method, device and equipment of intelligent tablet and storage medium
CN109426342B (en) | Document reading method and device based on augmented reality
JP2022027477A (en) | Program, method, and information processing device
CN109727299A (en) | A kind of control mechanical arm combines the method drawn a picture, electronic equipment and storage medium
Alam et al. | ASL champ!: a virtual reality game with deep-learning driven sign recognition
CN111695496A (en) | Intelligent interactive learning method, learning programming method and robot
Zholshiyeva et al. | Human-machine interactions based on hand gesture recognition using deep learning methods
CN114296627B (en) | Content display method, device, equipment and storage medium
Ni et al. | Classroom Roll Call System Based on Face Detection Technology
CN118587950B (en) | Interactive LED display system for intelligent education
CN210119873U (en) | Supervision device based on VR equipment
Wang et al. | A structural design and interaction algorithm of smart microscope embedded on virtual and real fusion technologies
Qu et al. | Development of a real-time pen-holding gesture recognition system based on improved YOLOv8
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20201030