CN113171606A - Man-machine interaction method, system, computer readable storage medium and interaction device - Google Patents

Man-machine interaction method, system, computer readable storage medium and interaction device

Info

Publication number
CN113171606A
Authority
CN
China
Prior art keywords
data
user
human
position area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110586078.6A
Other languages
Chinese (zh)
Other versions
CN113171606B (en)
Inventor
朱明晰
李燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110586078.6A
Publication of CN113171606A
Application granted
Publication of CN113171606B
Legal status: Active

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424: Processing input control signals involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The man-machine interaction method comprises: collecting a user image; storing position area data corresponding to the user image, and storing display data corresponding to the position area data; judging whether a user instruction falls within the region corresponding to the position area data matching the prompt data, and if so, displaying a corresponding animation and/or score, otherwise shortening a first preset time and displaying the corresponding animation and/or score when the first preset time ends; counting a first number of times that the first preset time ends; when this count reaches a third preset threshold, summing all displayed scores to obtain a total score; and matching the total score against a preset score rating and displaying the resulting grade on the display unit. The man-machine interaction system of the present disclosure improves the entertainment effect and thereby the user experience.

Description

Man-machine interaction method, system, computer readable storage medium and interaction device
Technical Field
The invention relates to the field of artificial intelligence, in particular to a man-machine interaction method for enhancing user experience by combining education and entertainment.
Background
Most software is designed purely for entertainment and offers users little of lasting value, while dedicated teaching software is highly specialized, difficult, and hard to popularize.
Therefore, there is a need for a human-computer interaction method that can combine education and entertainment and enhance the user experience.
Disclosure of Invention
The invention aims to provide a man-machine interaction method which can combine education and entertainment and enhance user experience.
The man-machine interaction method of the invention comprises:
Collecting a user image;
storing position area data corresponding to the user image, and storing display data corresponding to the position area data;
outputting corresponding position area data according to the user image, and marking the position area data on the user image to generate first image data;
displaying prompt data and the first image data within a first preset time, and judging whether a user instruction falls within the region corresponding to the position area data matching the prompt data; if so, displaying a corresponding animation and/or score; otherwise, shortening the first preset time and displaying the corresponding animation and/or score when the first preset time ends;
counting a first number of times that the first preset time ends;
when this count reaches a third preset threshold, summing all displayed scores to obtain a total score;
and matching the total score against a preset score rating, and displaying the resulting grade on the display unit.
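By way of orientation, the following minimal Python sketch strings these claimed steps together. It is illustrative only: the callables get_user_instruction and region_of, the 2 s penalty step, and the rating table are assumptions that do not appear in the disclosure.

```python
import time

# Illustrative rating table; thresholds and grade names are assumptions.
SCORE_RATINGS = [(0, "novice"), (2, "skilled"), (4, "expert")]

def run_session(prompts, get_user_instruction, region_of,
                first_preset_time=10.0, third_preset_threshold=3):
    """One session: show each prompt, score hits, shorten the timer on
    misses, count timeouts, and stop at the third preset threshold."""
    scores = []
    timeout_count = 0                       # the "first number of times"
    for prompt in prompts:
        deadline = time.monotonic() + first_preset_time
        hit = False
        while time.monotonic() < deadline:
            instruction = get_user_instruction()
            if instruction is None:
                continue                    # no input yet
            if region_of(instruction) == prompt:
                scores.append(1)            # show animation and/or +1 point
                hit = True
                break
            # wrong area: shorten the first preset time (e.g. 10 s -> 8 s)
            first_preset_time = max(1.0, first_preset_time - 2.0)
            deadline = min(deadline, time.monotonic() + first_preset_time)
        if not hit:
            timeout_count += 1              # the first preset time ended
        if timeout_count >= third_preset_threshold:
            break
    total = sum(scores)                     # sum all displayed scores
    grade = next(g for s, g in reversed(SCORE_RATINGS) if total >= s)
    return total, grade
```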
The human-computer interaction system of the present disclosure displays the first image data and the prompt data on a display unit (for example, a display screen) and tests whether the user knows which position area data corresponds to the prompt data; the user matches the prompt data against the first image data. Meanwhile, the user's image is collected in real time and shown on the first image data, so the user can see which of his or her own position areas corresponds to the display data. This is especially helpful as guidance for users who do not yet know which prompt data corresponds to which of their own position areas; it improves the entertainment effect and thereby improves the user experience.
In one embodiment of the man-machine interaction method, the user instruction is an instruction to click a designated area of the screen, and the position area data is rendered as a display button in that designated area of the display screen.
The user clicks with a mouse within the range specified by the position area data, and the click serves as the user instruction; this makes instruction input convenient.
In another embodiment, the user instruction is a voice instruction; the processor converts the voice instruction into a text instruction, and the position area data is text data.
The words spoken by the user are captured as a voice instruction, recognized as a text instruction, and matched against the position area data stored as text data; this makes it convenient for the user to issue instructions by voice.
In another embodiment, a detection module detects the distance between the user and the device; when this distance falls within a preset range, the prompt data and the first image data are readied for display on the display screen.
The detection module then starts the timer for the first preset time, so the user can start the human-computer interaction system quickly and conveniently.
The man-machine interaction system of the invention comprises:
the image acquisition module is used for acquiring a user image;
a database for storing position area data corresponding to the user image and storing display data corresponding to the position area data;
the processing module is used for outputting corresponding position area data according to the user image, marking the position area data on the user image and generating first image data;
the interaction module is used for displaying prompt data and the first image data within a first preset time, judging whether a user instruction falls within the region corresponding to the position area data matching the prompt data, displaying a corresponding animation and/or score if so, and otherwise shortening the first preset time and displaying the corresponding animation and/or score when the first preset time ends;
the counting module is used for counting a first number of times that the first preset time ends;
the summarizing module is used for summing all displayed scores to obtain a total score when the count reaches a third preset threshold;
and the rating module is used for matching the total score with a preset score rating and displaying the grade of the score on the display unit.
The man-machine interaction system of the invention may further comprise:
a screen-click module, used for rendering the position area data as display buttons in designated areas of the display screen.
The user clicks with a mouse within the range specified by the position area data, and the click serves as the user instruction; this makes instruction input convenient.
The man-machine interaction system of the invention may further comprise:
a voice recognition module, used for converting the voice instruction into a text instruction, the position area data being text data.
The words spoken by the user are captured as a voice instruction, recognized as a text instruction, and matched against the position area data stored as text data; this makes it convenient for the user to issue instructions by voice.
The man-machine interaction system of the invention may further comprise:
an identification module, used for detecting the distance between the user and the device; when this distance falls within a preset range, the prompt data and the first image data are readied for display on the display screen.
The detection module then starts the timer for the first preset time, so the user can start the human-computer interaction system quickly and conveniently.
The preset range is 1 m to 10 m, preferably 2 m to 5 m.
The invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the human-computer interaction method.
The invention further provides a human-computer interaction device comprising a memory, a processor, and a program stored on the memory and executable on the processor; the processor implements the steps of the human-computer interaction method when executing the program.
Unlike the systems of the prior art, the man-machine interaction method of the invention displays the first image data and the prompt data on a display unit (for example, a display screen) and tests whether the user knows which position area data corresponds to the prompt data; the user matches the prompt data against the first image data. Meanwhile, the user's image is collected in real time and shown on the first image data, so the user knows which of his or her own position areas corresponds to the display data and which prompt data it matches. This guides users who do not yet know the correspondence, improves the entertainment effect, and thereby improves the user experience.
The man-machine interaction method of the present invention will be further described with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic block diagram of the human-computer interaction system;
FIG. 2 is a schematic diagram of an application interface of a human-computer interaction method.
Detailed Description
As shown in FIGS. 1-2, the man-machine interaction method of the invention comprises:
Collecting a user image;
the user image is acquired in a mode that the camera shoots the user posture.
The invention can realize the real-time collection of different actions and positions of the user and generate the user image by the mode.
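As an illustration of this acquisition step, a minimal sketch follows, assuming OpenCV as the capture library (the disclosure names no library) and the default camera index.

```python
import cv2  # OpenCV is an assumption; the patent does not name a library

def capture_user_image(camera_index=0):
    """Grab one frame of the user's posture from the camera."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera frame could not be read")
        return frame  # BGR ndarray: the 'user image' of the method
    finally:
        cap.release()
```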
Storing position area data corresponding to the user image, and storing display data corresponding to the position area data;
outputting corresponding position area data according to the user image, and marking the position area data on the user image to generate first image data;
For example, a user image photographed while the person stands upright can be marked with different position areas using the position area data stored in the database, such as the skull region, the humerus region, and the sternum region.
The database stores the position area data in correspondence with the user image; whatever action or position the user adopts, multiple position areas are marked on the user image to generate the first image data for subsequent processing, as sketched below.
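The marking step could look like the following sketch, again assuming OpenCV; the region names come from the example above, while the coordinates and drawing style are illustrative assumptions.

```python
import cv2

# Example position area data; the coordinates are illustrative assumptions.
POSITION_AREAS = {
    "skull":   (300, 20, 100, 100),   # (x, y, width, height)
    "sternum": (310, 150, 80, 120),
    "humerus": (220, 150, 60, 160),
}

def make_first_image_data(user_image, areas=POSITION_AREAS):
    """Mark each stored position area on the user image (first image data)."""
    marked = user_image.copy()
    for name, (x, y, w, h) in areas.items():
        cv2.rectangle(marked, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(marked, name, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return marked
```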
The prompt data and the first image data are displayed within the first preset time, and it is judged whether the user instruction falls within the region corresponding to the position area data matching the prompt data; if so, a corresponding animation and/or score is displayed; otherwise, the first preset time is shortened and the corresponding animation and/or score is displayed when the first preset time ends.
according to the invention, the prompt data and the first image data are displayed within the first preset time in the mode, so that the user can clearly browse the prompt data and the first image data.
The first preset time lies in the interval (1 s, 100 s), preferably 10 s.
For example, if the prompt data displayed is 'skull' and the user instruction falls in the skull position area, an incentive animation and/or a 1-point score is displayed.
If the prompt data displayed is 'sternum' but the user instruction falls in the skull position area, a failure animation and/or one failure is displayed, and the first preset time is shortened from 10 s to 8 s.
If on the next round the prompt data is again 'sternum' and the user instruction again falls in the skull position area, another failure animation and/or failure is displayed, and the first preset time is shortened from 8 s to 6 s.
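A sketch of this timer adjustment; the 2 s step matches the 10 s to 8 s to 6 s example, while the 1 s floor is an added assumption.

```python
def adjust_preset_time(current_s, step_s=2.0, floor_s=1.0):
    """Shorten the first preset time after a wrong answer."""
    return max(floor_s, current_s - step_s)

t = 10.0
t = adjust_preset_time(t)  # 8.0, as in the example above
t = adjust_preset_time(t)  # 6.0
```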
Counting a first number of times that the first preset time ends;
when the first preset time ends, 1 is added to the count of user errors.
When the count reaches a third preset threshold, all displayed scores are summed to obtain a total score;
for example, if a 1-point score has been displayed four times, the total score displayed is 4 points.
The third preset threshold lies in [1, +∞), preferably 3.
The total score is matched against a preset score rating, and the resulting grade is displayed on the display unit.
The score rating corresponding to the user's actual total score is output, so the user can easily gauge his or her proficiency with the operation instructions.
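A minimal sketch of the summarizing and rating steps; the thresholds and grade names in the table are assumptions, and only the four-displays-of-one-point example comes from the text.

```python
# Illustrative rating table; thresholds and grade names are assumptions.
SCORE_RATINGS = [(0, "novice"), (3, "skilled"), (6, "expert")]

def rate(displayed_scores, ratings=SCORE_RATINGS):
    """Sum all displayed scores and map the total onto the preset rating."""
    total = sum(displayed_scores)
    grade = next(g for s, g in reversed(ratings) if total >= s)
    return total, grade

print(rate([1, 1, 1, 1]))  # four 1-point displays -> (4, 'skilled')
```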
The human-computer interaction system of the present disclosure displays the first image data and the prompt data on a display unit (for example, a display screen) and tests whether the user knows which position area data corresponds to the prompt data; the user matches the prompt data against the first image data. Meanwhile, the user's image is collected in real time and shown on the first image data, so the user can see which of his or her own position areas corresponds to the display data. This is especially helpful as guidance for users who do not yet know which prompt data corresponds to which of their own position areas; it improves the entertainment effect and thereby improves the user experience.
As a further explanation of the invention, the user instruction is an instruction to click a designated area of the screen, and the position area data is rendered as a display button in that designated area of the display screen.
The user clicks with a mouse within the range specified by the position area data, and the click serves as the user instruction; this makes instruction input convenient.
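A sketch of treating a mouse click as the user instruction, hit-testing the click against rectangles that stand in for the display buttons; the button coordinates are illustrative assumptions.

```python
# Illustrative button rectangles: name -> (x, y, width, height)
BUTTONS = {"skull": (300, 20, 100, 100), "sternum": (310, 150, 80, 120)}

def clicked_area(click_xy, buttons=BUTTONS):
    """Return the name of the button containing the click, or None."""
    cx, cy = click_xy
    for name, (x, y, w, h) in buttons.items():
        if x <= cx <= x + w and y <= cy <= y + h:
            return name
    return None

assert clicked_area((330, 60)) == "skull"
assert clicked_area((5, 5)) is None
```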
As a further explanation of the invention, the user instruction is a voice instruction; the processor converts the voice instruction into a text instruction, and the position area data is text data.
The words spoken by the user are captured as a voice instruction, recognized as a text instruction, and matched against the position area data stored as text data; this makes it convenient for the user to issue instructions by voice.
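A sketch of the voice path, assuming a speech-to-text callable recognize (the disclosure names no engine); the recognized text instruction is matched against the position area names held as text data.

```python
def match_voice_instruction(recognize, audio, area_names):
    """Convert a voice instruction to a text instruction and match it
    against the position area data held as text."""
    text = recognize(audio).strip().lower()
    return text if text in area_names else None

# Stub recognizer for demonstration; a real system would call an ASR engine.
assert match_voice_instruction(lambda _: "Sternum", None,
                               {"skull", "sternum", "humerus"}) == "sternum"
```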
As a further explanation of the invention, the detection module detects the distance between the user and the device and readies the prompt data and the first image data for display on the display screen when this distance falls within a preset range.
The detection module then starts the timer for the first preset time, so the user can start the human-computer interaction system quickly and conveniently.
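A sketch of the distance trigger, using the preferred 2 m to 5 m range given later in the text; how the distance itself is measured is left open by the disclosure.

```python
def ready_to_display(distance_m, preset_range=(2.0, 5.0)):
    """True once the user-to-device distance enters the preset range,
    at which point the prompt data, first image data, and the first
    preset timer would be started."""
    low, high = preset_range
    return low <= distance_m <= high

assert ready_to_display(3.0)
assert not ready_to_display(12.0)
```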
As shown in fig. 1, the human-computer interaction system of the present invention includes:
the image acquisition module is used for acquiring a user image;
a database for storing position area data corresponding to the user image and storing display data corresponding to the position area data;
the processing module is used for outputting corresponding position area data according to the user image, marking the position area data on the user image and generating first image data;
the interaction module is used for displaying prompt data and the first image data within a first preset time, judging whether a user instruction falls within the region corresponding to the position area data matching the prompt data, displaying a corresponding animation and/or score if so, and otherwise shortening the first preset time and displaying the corresponding animation and/or score when the first preset time ends;
the counting module is used for counting a first number of times that the first preset time ends;
the summarizing module is used for summing all displayed scores to obtain a total score when the count reaches a third preset threshold;
and the rating module is used for matching the total score with a preset score rating and displaying the grade of the score on the display unit.
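The module decomposition above can be pictured as plain dependency injection; in the following sketch all names and signatures are assumptions, with the injected callables standing in for the image acquisition and processing modules.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Region = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class InteractionSystem:
    """The claimed modules wired together as a plain data structure."""
    database: Dict[str, Region]                      # position area data
    acquire_image: Callable[[], object]              # image acquisition module
    mark_regions: Callable[[object, Dict[str, Region]], object]  # processing
    scores: List[int] = field(default_factory=list)  # input to summarizing
    timeout_count: int = 0                           # counting module state

    def first_image_data(self):
        """Processing module: mark stored position areas on a fresh image."""
        return self.mark_regions(self.acquire_image(), self.database)

    def total_score(self):
        """Summarizing module: sum all displayed scores."""
        return sum(self.scores)
```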
As a further explanation of the present invention, the system further comprises:
a screen-click module, used for rendering the position area data as display buttons in designated areas of the display screen.
The user clicks with a mouse within the range specified by the position area data, and the click serves as the user instruction; this makes instruction input convenient.
As a further explanation of the present invention, the system further comprises:
a voice recognition module, used for converting the voice instruction into a text instruction, the position area data being text data.
The words spoken by the user are captured as a voice instruction, recognized as a text instruction, and matched against the position area data stored as text data; this makes it convenient for the user to issue instructions by voice.
As a further explanation of the present invention, the system further comprises:
an identification module, used for detecting the distance between the user and the device; when this distance falls within a preset range, the prompt data and the first image data are readied for display on the display screen.
The detection module then starts the timer for the first preset time, so the user can start the human-computer interaction system quickly and conveniently.
The preset range is 1 m to 10 m, preferably 2 m to 5 m.
The invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the human-computer interaction method.
The invention further provides a human-computer interaction device comprising a memory, a processor, and a program stored on the memory and executable on the processor; the processor implements the steps of the human-computer interaction method when executing the program.
Example one
Referring to FIG. 2, the invention can be embodied as a human-skeleton version of 'Xiaoxiaole' (a popular Chinese match-and-eliminate game).
The login interface (which displays the software name and the copyright owner) supports login bound to WeChat, QQ, or a mobile-phone SMS code;
a game mode: level 1: a bone cave: the main interface drops the display data of the building block with the name, for example: sternum, humerus, etc. Description of the drawings: the human skeleton comprises position area data 1 serving as a skull, position area data 2 of a sternum and position area data 3 of a humerus, and any position area data can be clicked or clicked through voice recognition, so that characters corresponding to display data in corresponding bats can be displayed, and the corresponding bats can be killed. The mistaking can cause the bat to fall down in an accelerated way, namely, the first preset time is reduced. The bat falling into contact with the protective wall produces a "gnawing" effect, i.e., an increase in the first number of times. Each bat gnaws one layer of thickness of the protective wall, if the bat enters the protective wall to reach the heart, namely the first time exceeds a third preset threshold value, the bat is declared to fail. Each office predicts 8-10 bats, namely the number of display data, and bats with different names fall down randomly.
The above embodiments merely illustrate preferred embodiments of the invention and do not limit its scope; modifications and improvements made by those skilled in the art without departing from the spirit of the invention fall within the protection scope defined by the claims.

Claims (10)

1. A human-computer interaction method, characterized in that it comprises:
Collecting a user image;
storing position area data corresponding to the user image, and storing display data corresponding to the position area data;
outputting corresponding position area data according to the user image, and marking the position area data on the user image to generate first image data;
displaying prompt data and the first image data within a first preset time, and judging whether a user instruction falls within a region corresponding to the position area data matching the prompt data; if so, displaying a corresponding animation and/or score; otherwise, shortening the first preset time and displaying the corresponding animation and/or score when the first preset time ends;
counting a first number of times that the first preset time ends;
when the first number of times reaches a third preset threshold, summing all displayed scores to obtain a total score;
and matching the total score against a preset score rating, and displaying the resulting grade on the display unit.
2. The human-computer interaction method according to claim 1, wherein: the user instruction is an instruction for clicking a designated area of a screen, and the position area data is a display button of the designated area displayed in the display screen.
3. The human-computer interaction method of claim 2, wherein: the user instruction is a voice instruction, the processor converts the voice instruction into a text instruction, and the position area data is text data.
4. The human-computer interaction method of claim 3, wherein: a detection module detects the distance between the user and the device, and when this distance is within a preset range, the prompt data and the first image data are readied for display on the display screen.
5. A human-computer interaction system, comprising:
the image acquisition module is used for acquiring a user image;
a database for storing position area data corresponding to the user image and storing display data corresponding to the position area data;
the processing module is used for outputting corresponding position area data according to the user image, marking the position area data on the user image and generating first image data;
the interaction module is used for displaying prompt data and the first image data within a first preset time, judging whether a user instruction falls within the region corresponding to the position area data matching the prompt data, displaying a corresponding animation and/or score if so, and otherwise shortening the first preset time and displaying the corresponding animation and/or score when the first preset time ends;
the counting module is used for counting a first number of times that the first preset time ends;
the summarizing module is used for summing all displayed scores to obtain a total score when the count reaches a third preset threshold;
and the rating module is used for matching the total score with a preset score rating and displaying the grade of the score on the display unit.
6. The human-computer interaction system of claim 5, further comprising:
a screen-click module for rendering the position area data as display buttons in designated areas of the display screen.
7. The human-computer interaction system of claim 6, further comprising:
a voice recognition module for converting the voice instruction into a text instruction, the position area data being text data.
8. The human-computer interaction system of claim 7, further comprising:
an identification module for detecting the distance between the user and the device, wherein when this distance is within a preset range, the prompt data and the first image data are readied for display on the display screen.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the human-computer interaction method according to any one of claims 1 to 4.
10. A human-computer interaction device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the human-computer interaction method according to any one of claims 1 to 4.
CN202110586078.6A (priority and filing date 2021-05-27): Man-machine interaction method, system, computer readable storage medium and interaction device. Active; granted as CN113171606B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586078.6A CN113171606B (en) 2021-05-27 2021-05-27 Man-machine interaction method, system, computer readable storage medium and interaction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110586078.6A CN113171606B (en) 2021-05-27 2021-05-27 Man-machine interaction method, system, computer readable storage medium and interaction device

Publications (2)

Publication Number Publication Date
CN113171606A (published 2021-07-27)
CN113171606B (granted 2024-03-08)

Family

ID=76927573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586078.6A Active CN113171606B (en) 2021-05-27 2021-05-27 Man-machine interaction method, system, computer readable storage medium and interaction device

Country Status (1)

Country Link
CN (1) CN113171606B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105228708A (en) * 2013-04-02 2016-01-06 日本电气方案创新株式会社 Body action scoring apparatus, dancing scoring apparatus, Caraok device and game device
KR20180067993A (en) * 2016-12-13 2018-06-21 주식회사 시공미디어 Method and apparatus for providing coding education game service
CN108320252A (en) * 2018-03-02 2018-07-24 深圳大图科创技术开发有限公司 A kind of good intelligent tutoring system of interaction effect
CN108550396A (en) * 2018-04-18 2018-09-18 湘潭大学 A kind of device and method of child's intelligent health-care and intellectual development
CN108888949A (en) * 2018-09-17 2018-11-27 龙岩学院 A kind of game interaction means and method based on Arduino
CN109243208A (en) * 2018-10-22 2019-01-18 潍坊医学院 A kind of microbiology craps game system and its learning method
CN109859324A (en) * 2018-12-29 2019-06-07 北京光年无限科技有限公司 A kind of motion teaching method and device based on visual human
CN110689781A (en) * 2019-10-31 2020-01-14 北京光年无限科技有限公司 Data processing method and system based on children education
CN111462557A (en) * 2020-04-09 2020-07-28 中国人民解放军陆军军医大学第二附属医院 Cardiovascular disease clinical case breakthrough game type teaching application system
US20200302810A1 (en) * 2011-04-08 2020-09-24 Case Western Reserve University Automated assessment of cognitive, fine-motor, and memory skills
CN111882932A (en) * 2020-07-31 2020-11-03 托普爱英(北京)科技有限公司 Method and device for assisting language learning, electronic equipment and storage medium
CN112099637A (en) * 2020-09-27 2020-12-18 成都佳发教育科技有限公司 Wearable information acquisition system based on AR interaction
US20210069574A1 (en) * 2015-04-23 2021-03-11 Win Reality, Llc Virtual reality sports training systems and methods
CN112675527A (en) * 2020-12-29 2021-04-20 重庆医科大学 Family education game system and method based on VR technology
CN112785888A (en) * 2021-01-11 2021-05-11 重庆三峡医药高等专科学校 System for medical student learns basic knowledge based on recreation


Also Published As

Publication number Publication date
CN113171606B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN108053838B (en) In conjunction with fraud recognition methods, device and the storage medium of audio analysis and video analysis
CN110705390A (en) Body posture recognition method and device based on LSTM and storage medium
CN105556594B (en) Voice recognition processing unit, voice recognition processing method and display device
CN111563487A (en) Dance scoring method based on gesture recognition model and related equipment
CN110390841A (en) Interrogation training method, terminal and the system of digital patient
CN106775198A (en) A kind of method and device for realizing accompanying based on mixed reality technology
CN106529379A (en) Method and device for recognizing living body
CN110390068B (en) Knowledge competition method, system, equipment and storage medium
CN109656465A (en) A kind of content acquisition method and private tutor's equipment applied to private tutor's equipment
CN110547756A (en) Vision test method, device and system
CN112349380A (en) Body-building guidance method, device, equipment and storage medium based on cloud computing
CN115331314A (en) Exercise effect evaluation method and system based on APP screening function
CN111223549A (en) Mobile end system and method for disease prevention based on posture correction
Baranyi et al. Analysis, design, and prototypical implementation of a serious game reha@ stroke to support rehabilitation of stroke patients with the help of a mobile phone
CN113419886B (en) Method, apparatus and computer-readable storage medium for handling program crash
CN109885461A (en) A kind of system of Anti-addiction, method and smart machine
CN113171606A (en) Man-machine interaction method, system, computer readable storage medium and interaction device
CN109377577A (en) A kind of Work attendance method based on recognition of face, system and storage device
CN105797375A (en) Method and terminal for changing role model expressions along with user facial expressions
CN116841394A (en) Exercise control method for displaying user movement state information and electronic equipment
CN106445654A (en) Method and device for determining response priorities of control commands
CN105833475A (en) Running machine parameter setting device and running machine parameter setting method
CN109635214A (en) A kind of method for pushing and electronic equipment of education resource
CN111967333B (en) Signal generation method, system, storage medium and brain-computer interface spelling device
CN105288993A (en) Intelligent picture guessing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant