CN110718119A - Educational ability support method and system based on wearable intelligent equipment special for children - Google Patents


Info

Publication number
CN110718119A
CN110718119A CN201910916725.8A
Authority
CN
China
Prior art keywords
user
data
skill
answer
children
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910916725.8A
Other languages
Chinese (zh)
Inventor
俞志晨
郭家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201910916725.8A priority Critical patent/CN110718119A/en
Publication of CN110718119A publication Critical patent/CN110718119A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/22 Games, e.g. card games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6072 Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Acoustics & Sound (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides an educational ability support method based on a wearable smart device dedicated for children, which comprises the following steps: receiving multimodal input data of a user; determining an educational interaction mode based on the multimodal input data and acquiring the user's skill level; invoking a game interaction module and returning, through a cloud server, question data corresponding to the user's current skill level; receiving multimodal answer data given by the user for the question data, and transmitting the multimodal answer data to the cloud server for processing to obtain an answer result; and feeding back skill-forward data through the game interaction module according to the answer result, so as to revise the user's current skill level. The invention realizes positive educational feedback through games, provides more convenient interactive services for child users, improves the user experience, and achieves the goal of combining education with entertainment.

Description

Educational ability support method and system based on wearable intelligent equipment special for children
Technical Field
The invention relates to the field of artificial intelligence, and in particular to an educational ability support method and system based on a wearable smart device dedicated for children.
Background
With the continuous development of science and technology and the introduction of information technology, computer technology, and artificial intelligence, research on smart devices has gradually moved beyond the industrial field and expanded into fields such as medical care, health, the home, entertainment, and services. People's requirements for smart devices have likewise risen, from simple and repetitive mechanical actions to devices with anthropomorphic question answering and autonomy that can interact with other smart devices, and human-computer interaction has become an important factor in the development of smart devices. Therefore, improving the interactive capability of smart devices, and enhancing their human-likeness and intelligence, is an important problem that urgently needs to be solved.
Therefore, the invention provides an educational ability support method and system based on the wearable intelligent equipment specially used for children.
Disclosure of Invention
In order to solve the above problems, the present invention provides an educational ability support method based on a wearable intelligent device dedicated for children, the method comprising the steps of:
receiving multimodal input data of a user;
determining an education interaction mode based on the multi-modal input data, and acquiring a skill level of the user;
calling a game interaction module and returning question data corresponding to the current skill level of the user through a cloud server;
receiving multi-modal answer data given by a user aiming at the question data, and further transmitting the multi-modal answer data to a cloud server for processing to obtain an answer result;
and feeding back skill forward data by the game interaction module according to the answer result so as to modify the current skill level of the user.
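The five claimed steps can be sketched as a minimal flow. This is an illustrative sketch only: all function names, the stubbed cloud calls, and the level-update rule are assumptions, since the patent does not specify an implementation.

```python
# Hypothetical sketch of the five claimed steps; cloud calls are stubbed.

def receive_multimodal_input():
    # Step 1: receive the user's multimodal input (voice/touch/vision).
    return {"modality": "voice", "payload": "start english conversation"}

def determine_mode_and_level(multimodal_input):
    # Step 2: choose an educational interaction mode and look up the
    # user's stored skill level.
    return "english_conversation", 1  # (mode, skill level)

def fetch_question(skill_level):
    # Step 3: the game interaction module asks the cloud server for a
    # question matching the current skill level.
    return {"level": skill_level, "text": "How are you?"}

def grade_answer(question, answer):
    # Step 4: the cloud processes the multimodal answer into a result.
    return answer.strip().lower() == "i'm fine."

def apply_skill_feedback(skill_level, correct):
    # Step 5: skill-forward (reward) data revises the current level.
    return skill_level + 1 if correct else skill_level

mode, level = determine_mode_and_level(receive_multimodal_input())
question = fetch_question(level)
correct = grade_answer(question, "I'm fine.")
new_level = apply_skill_feedback(level, correct)
```

A correct answer advances the level in this toy model; the real promotion rule would live in the skill server.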
According to one embodiment of the present invention, the cloud server comprises education support capability, which comprises: chinese speech recognition capability, spoken English evaluation capability, speech synthesis capability and sound wave transmission communication capability.
According to one embodiment of the invention, in the educational interaction mode, the child-specific wearable smart device provides at least one of the following educational interaction modes: Chinese character recognition, word spelling, and English conversation.
According to an embodiment of the present invention, in the English conversation interaction mode, the method comprises the following steps:
receiving recording data input by a user, and uploading the recording data to a skill server in the cloud server;
the skill server distributes the recorded data to the English oral evaluation capability;
analyzing the recording data by the English spoken language evaluation capability to obtain a corresponding evaluation result, and returning the evaluation result to the skill server;
and the skill server determines the skill level of the user according to the evaluation result.
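The four evaluation steps above can be sketched as follows. The evaluation stub, score scale, and level thresholds are illustrative assumptions; the patent names the capabilities but not their interfaces.

```python
# Illustrative sketch of the spoken-English evaluation flow.

def evaluate_spoken_english(recording: bytes) -> dict:
    # Stand-in for the cloud's spoken-English evaluation capability:
    # returns a recognition result, a pronunciation score, and analysis.
    return {"text": "I'm fine.", "score": 72, "analysis": "vowel error"}

def skill_level_from_score(score: int) -> str:
    # Stand-in for the skill server's level decision (thresholds assumed).
    if score < 60:
        return "beginner"
    elif score < 85:
        return "intermediate"
    return "advanced"

result = evaluate_spoken_english(b"<pcm audio of 'I'm fine.'>")
level = skill_level_from_score(result["score"])
```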
According to an embodiment of the present invention, the game interaction module, when interacting with a user, specifically performs the following steps:
the skill server generates the question data and converts the question data into corresponding audio skill question data;
receiving the multi-modal answer data, uploading the multi-modal answer data to the Chinese speech recognition capability, and generating a corresponding answer result;
and the skill server generates skill forward data according to the answer result and feeds the skill forward data back to the special wearable intelligent equipment for the children.
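One round of the game interaction described above can be sketched as below. The TTS and speech-recognition stubs and the reward amount are assumptions made for illustration.

```python
# Hypothetical sketch of one game round: question synthesis, answer
# recognition, and skill-forward feedback. TTS/ASR are stubbed.

def synthesize_audio(text: str) -> bytes:
    # Stand-in for the cloud's speech-synthesis capability.
    return text.encode("utf-8")

def recognize_speech(audio: bytes) -> str:
    # Stand-in for the cloud's speech-recognition capability.
    return audio.decode("utf-8")

def play_round(question: str, expected: str, user_audio: bytes) -> dict:
    audio_question = synthesize_audio(question)  # question -> playable audio
    answer_text = recognize_speech(user_audio)   # answer audio -> text
    correct = answer_text == expected
    # Skill-forward data (a reward) fed back to the wearable device.
    return {"correct": correct, "reward_points": 10 if correct else 0}

outcome = play_round(
    "What line follows 'The white sun sets behind the mountains'?",
    "the Yellow River flows into the sea",
    b"the Yellow River flows into the sea",
)
```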
According to an embodiment of the invention, the method further comprises:
acquiring identity characteristic information of a current user, judging user attributes of the current user, and determining the category of the current user, wherein the category of the user comprises: a child user.
According to another aspect of the invention, there is also provided a program product containing a series of instructions for carrying out the steps of the method according to any one of the above.
According to another aspect of the present invention, there is also provided an educational ability support apparatus based on a child-specific wearable smart device, the apparatus including:
a first module for receiving multimodal input data of a user;
a second module for determining an educational interaction mode based on the multi-modal input data, obtaining a skill level of a user;
the third module is used for calling a game interaction module and returning question data corresponding to the current skill level of the user through a cloud server;
the fourth module is used for receiving multi-modal answer data given by a user aiming at the question data, and further transmitting the multi-modal answer data to the cloud server for processing to obtain an answer result;
a fifth module for feeding back skill forward data by the game interaction module according to the answer result to modify the current skill level of the user.
According to another aspect of the invention, there is also provided a child-specific wearable smart device for executing a series of instructions of the method steps as defined in any one of the above.
According to another aspect of the present invention, there is also provided an educational ability support system based on a child-specific wearable intelligent device, the system comprising:
a wearable smart device dedicated for children as described above;
and the cloud server, provided with capabilities of semantic understanding, visual recognition, cognitive computation, and emotion computation, so as to decide the multimodal data output by the child-specific wearable smart device.
The educational ability support method and system based on a wearable smart device dedicated for children provided by the invention can receive multimodal input data from a user to determine the user's skill level, and, within the game interaction module, generate corresponding question data according to that skill level so as to revise the skill level according to the user's answers. The invention realizes positive educational feedback through games, provides more convenient interactive services for child users, improves the user experience, and achieves the goal of combining education with entertainment.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 shows a flow diagram of a child-specific wearable intelligent device-based educational ability support method according to one embodiment of the present invention;
FIG. 2 shows a flow diagram of English language dialogue interaction in a child-specific wearable intelligent device-based educational ability support method according to an embodiment of the present invention;
FIG. 3 shows a flow chart of interaction under a game interaction module in a child-specific wearable intelligent device-based educational ability support method according to an embodiment of the present invention;
FIG. 4 shows a block diagram of interaction through a client in a child-specific wearable intelligent device-based educational ability support method according to an embodiment of the present invention;
FIG. 5 shows a block diagram of an educational ability support apparatus based on a child-specific wearable smart device according to an embodiment of the present invention;
FIG. 6 shows a block diagram of an educational ability support system based on a child-specific wearable intelligent device according to an embodiment of the present invention;
FIG. 7 shows a block diagram of an educational ability support system based on a child-specific wearable intelligent device according to another embodiment of the present invention;
FIG. 8 shows a flowchart of a child-specific wearable intelligent device-based educational ability support method according to another embodiment of the present invention; and
fig. 9 shows a three-way dataflow graph of a user, a child-specific wearable smart device, and a cloud in accordance with one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
For clarity, the following explanation is provided before the embodiments:
the wearable intelligent equipment special for children supports multi-mode man-machine interaction, and has AI capabilities of natural language understanding, visual perception, language voice output, emotion expression action output and the like; the social attributes, personality attributes, character skills and the like can be configured, so that the user can enjoy intelligent and personalized smooth experience. In a specific embodiment, the wearable smart device dedicated for children may be a children story machine, a desk lamp, an alarm clock, a smart speaker, a children AI robot, a children watch, or the like.
The wearable intelligent device special for the children acquires multi-modal data of the user, and performs semantic understanding, visual recognition, cognitive computation and emotion computation on the multi-modal data under the support of the capability in the cloud server so as to complete the decision output process.
The cloud server (cloud) is a terminal that provides, for the interaction requirements of the child-specific wearable smart device, the processing capabilities of semantic understanding (language semantic understanding, action semantic understanding, visual recognition, emotion computation, and cognitive computation), realizes interaction with the user, and decides the multimodal data output by the child-specific wearable smart device.
Various embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of an educational ability support method based on a child-specific wearable intelligent device according to an embodiment of the present invention.
As shown in fig. 1, in step S101, multimodal input data of a user is received. Specifically, the child-specific wearable smart device receives the user's multimodal input data through a corresponding receiving apparatus provided in the device; the multimodal input data includes voice data, video data, touch data, visual data, and the like.
In addition, the child-specific wearable smart device may support touch interaction on a screen; such devices include tablet computers, smart watches, smartphones, smart robots, and the like.
In step S102, an educational interaction mode is determined based on the multimodal input data, and a skill level of the user is acquired.
The user's current skill level can be obtained from the level stored in the account corresponding to the user. For example, if the user's initial education score is 0, then after multiple evaluations or educational learning interactions, the user's skill level can be derived from the accumulated points, score, or stamina value.
Furthermore, the cloud server includes education support capabilities: Chinese speech recognition, spoken English evaluation, speech synthesis, and acoustic-wave transmission communication.
Specifically, the Chinese speech recognition capability collects user recordings and outputs Chinese text through a cloud recognition engine; the spoken English evaluation capability performs spoken pronunciation evaluation based on the user's recording file and outputs a corresponding English recognition result, a pronunciation score, and a corresponding analysis result (such as a vowel pronunciation error); the speech synthesis capability, given the text to be synthesized input by the user, outputs through a speech synthesis algorithm a playable audio file with attributes such as timbre and speed; the acoustic-wave transmission communication capability comprises two main modules, an encoding module and a decoding module, where the encoding module converts text information into playable audio at specified frequencies, and the decoding module decodes received sound waveforms and outputs the corresponding text information.
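The encode/decode pairing of the acoustic-wave communication capability can be sketched as a toy symbol-to-frequency mapping. The frequency plan and per-symbol scheme are pure assumptions; a real system would use proper modulation (e.g. FSK) and error correction, which the patent does not detail.

```python
# Toy sketch: text -> tone frequencies (encoding) and back (decoding).
# BASE_FREQ_HZ and STEP_HZ are invented for illustration.

BASE_FREQ_HZ = 1000  # assumed carrier base frequency
STEP_HZ = 50         # assumed per-symbol frequency spacing

def encode(text: str) -> list:
    # Encoding module: each character becomes one tone frequency.
    return [BASE_FREQ_HZ + ord(ch) * STEP_HZ for ch in text]

def decode(freqs: list) -> str:
    # Decoding module: tone frequencies back to characters.
    return "".join(chr((f - BASE_FREQ_HZ) // STEP_HZ) for f in freqs)
```

Round-tripping any string through `encode` then `decode` recovers it exactly, since each frequency maps to a unique code point.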
Preferably, in the educational interaction mode, the child-specific wearable smart device provides at least one of the following educational interaction modes: Chinese character reading, ancient poetry, the Three Character Classic, word spelling, and English conversation.
For different educational interaction modes, different skill-level determination methods can be selected for different learning content. For example, in the English conversation interaction mode, in order to acquire the current user's spoken-language ability more accurately so that the user can quickly obtain a matching spoken-language level, the user's skill level may, as one example, be determined as shown in fig. 2:
as shown in fig. 2, in step S201, sound recording data input by a user is received, and the sound recording data is uploaded to a skill server in a cloud server. Specifically, a user can record voice data (for example, I'm fine.) through a client on the wearable intelligent device special for children, and upload the voice data to the cloud server.
Then, in step S202, the skill server distributes the recorded data to the english spoken language evaluation capability. Specifically, the skill server includes a plurality of capabilities corresponding to an english spoken language interaction mode, and in this embodiment, the skill server distributes the recorded data to the spoken language evaluation capability.
Next, in step S203, the spoken English evaluation capability analyzes the recording data to obtain a corresponding evaluation result and returns the evaluation result to the skill server. Specifically, the spoken-language evaluation capability performs pronunciation evaluation on the recording data uploaded by the user (e.g., on "I'm fine."), recognizes the English result, scores the pronunciation, and generates a corresponding evaluation analysis.
Finally, in step S204, the skill server determines the user's skill level according to the evaluation result. Specifically, for the English interaction mode, the user's recording data can be evaluated in terms of prosody, completeness, accuracy, pronunciation, and the like, and an evaluation result is generated to determine the user's skill level.
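Combining the named aspects into one level decision can be sketched as a weighted rubric. The weights and cut-offs below are invented for illustration; the patent does not specify how the aspects are combined.

```python
# Hypothetical weighted rubric over the aspects named in the text
# (prosody, completeness, accuracy, pronunciation); all numbers assumed.

WEIGHTS = {"prosody": 0.2, "completeness": 0.2,
           "accuracy": 0.3, "pronunciation": 0.3}

def overall_score(aspects: dict) -> float:
    # Weighted average of per-aspect scores on a 0-100 scale.
    return sum(WEIGHTS[k] * aspects[k] for k in WEIGHTS)

def to_level(score: float) -> str:
    # Assumed cut-offs mapping a score to a skill level.
    if score < 60:
        return "beginner"
    elif score < 85:
        return "intermediate"
    return "advanced"

score = overall_score({"prosody": 70, "completeness": 80,
                       "accuracy": 60, "pronunciation": 50})
level = to_level(score)  # 0.2*70 + 0.2*80 + 0.3*60 + 0.3*50 = 63
```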
As shown in fig. 1, in step S103, the game interaction module is invoked and question data corresponding to the user's current skill level is returned through the cloud server. Specifically, relevant knowledge can be learned through the educational interaction mode, and the user's skill level can also be determined there; the game interaction module can then generate question data according to the user's skill level and present it in game form, so that the user further reviews and consolidates the learned knowledge within the game, achieving the goal of combining education with entertainment.
As shown in fig. 1, in step S104, multimodal answer data given by the user for the question data is received, and the multimodal answer data is transmitted to the cloud server for processing to obtain an answer result. Specifically, when the user's skill level is defined as beginner, question data matching that level (e.g., "How are you?") is generated and presented to the user.
Specifically, interaction with the user may be conducted under the game interaction module in the manner shown in FIG. 3:
in step S301, the skills server generates question data and converts the question data into corresponding audio skills question data.
Specifically, if the user has previously learned ancient poetry and the user's skill level for ancient poetry is determined to be elementary, the skill server may generate corresponding question data, such as "What line follows 'The white sun sets behind the mountains' (白日依山尽)?", then convert the question data into audio data, play it, and await the user's answer.
In step S302, the multimodal answer data is received and uploaded to the Chinese speech recognition capability, and a corresponding answer result is generated. Specifically, the user's multimodal answer data may be received, for example, "I don't remember clearly; maybe it is 'I suspect it is frost on the ground' (疑是地上霜)," and then uploaded to the cloud server.
In step S303, the skill server generates skill-forward data according to the answer result and feeds it back to the child-specific wearable smart device. The skill-forward data includes: after the user answers correctly, the cloud server adds, to the user's account in the client of the child-specific wearable smart device, a score or stamina value that can be used in subsequent tests, thereby rewarding the user's current answer in the interaction, i.e., positive feedback; this positive feedback motivates and encourages the user in subsequent evaluation or interaction activities. Meanwhile, the Chinese speech recognition capability in the skill server performs Chinese recognition on the multimodal answer data, recognizes the Chinese text contained in the speech data, and generates the answer result.
As shown in fig. 1, in step S105, skill-forward data is fed back by the game interaction module according to the answer result to revise the user's current skill level. Specifically, building on step S104, the accumulation of forward data reflects that the user has fully mastered the current educational learning content and is qualified, with a corresponding score or stamina value, to enter evaluation of or interaction with higher-level learning content.
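Accumulating forward data into a level correction can be sketched with a simple account model. The point reward and level thresholds are assumptions; the patent only says that rewards accumulate into a revised level.

```python
# Illustrative account model for skill-forward data: correct answers
# earn points, and accumulated points unlock higher levels.

LEVEL_THRESHOLDS = [0, 100, 250]  # assumed points needed for levels 0, 1, 2

class UserAccount:
    def __init__(self):
        self.points = 0

    def add_forward_data(self, correct: bool, reward: int = 30):
        # Positive feedback: only correct answers earn points.
        if correct:
            self.points += reward

    @property
    def level(self) -> int:
        # Highest level whose threshold the points have reached.
        return sum(self.points >= t for t in LEVEL_THRESHOLDS) - 1

acct = UserAccount()
for correct in [True, True, True, False]:
    acct.add_forward_data(correct)
# Three correct answers earn 90 points, still below the 100-point bar.
```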
In particular, the child-specific wearable smart device receives the multimodal response data transmitted by the cloud and presents it through a loudspeaker, a display screen, and the like.
According to one embodiment of the present invention, identity characteristic information of the current user is acquired, the user attributes of the current user are judged, and the category of the current user is determined, the user categories including: child user. The user group targeted by the invention is mainly child users, so the user's identity attribute needs to be determined. There are many ways to determine the user's identity; generally, it can be identified through facial recognition or fingerprint recognition. Other ways of determining the user's identity may also be applied to the present invention, which is not limited in this respect.
Fig. 4 shows a block diagram of interaction by a client in the educational ability support method based on a child-specific wearable intelligent device according to an embodiment of the present invention.
As shown in fig. 4, the capability support in the cloud server includes speech synthesis, speech recognition, spoken-language evaluation, and the like, as well as educational-skill cloud logic. The client on the child-specific wearable smart device contains an education module and a game module, and also has a recording function.
After the child-specific wearable smart device is started, it can interact with the user through visual, voice, touch, and physical-button interaction. Specifically, the user can open interaction with the device through body motions such as gestures, through voice, by touching a specific area of the device, by pressing a physical button, and so on.
When the user conducts English conversation learning through the child-specific wearable smart device, the user can press the recording button to start recording; when the user releases the recording button, recording ends, and the client uploads the recording data ("I'm fine.") to the speech recognition capability of the cloud server. The speech recognition capability recognizes the recording file to obtain a recognition result and then returns the recognition result to the client.
The client uploads the recognition result to the educational-skill cloud logic, which evaluates the recording file and/or the recognition result through the spoken-language evaluation capability to obtain an evaluation result (e.g., pronunciation score 60 out of 100, accuracy 80 out of 100) and returns a skill processing result (e.g., the user's skill level is determined to be beginner). According to the skill processing result returned by the cloud, the client calls the TTS speech synthesis capability to synthesize an audio file and plays it (e.g., announcing that the user's skill level is beginner).
In game interaction, such as a watch game, the user enters a game scene, the game module calls the educational skill module, the cloud server returns corresponding question data according to the user's skill level (e.g., "What line follows 'The white sun sets behind the mountains'?"), and the text to be synthesized is converted into audio data through the TTS speech synthesis capability and played.
In an embodiment, if the question data is language-related, the user records an answer and the client uploads the recording file (e.g., "I don't remember clearly; maybe it is 'I suspect it is frost on the ground'") to the cloud Chinese speech recognition capability, which returns the Chinese speech recognition result. The skill server judges from the Chinese text whether the user answered correctly. According to the result returned by the cloud server, the client calls the TTS audio synthesis interface and plays a corresponding prompt (e.g., "You answered incorrectly; the line after 'The white sun sets behind the mountains' is 'The Yellow River flows into the sea' (黄河入海流)"), and the user's skill level is revised accordingly.
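The answer check and corrective prompt in this embodiment can be sketched as below. The comparison is a plain string match for illustration; the function name and prompt wording are hypothetical.

```python
# Hypothetical answer check for the poetry question: compare the
# recognized Chinese text with the expected line and build a prompt.

def check_answer(recognized: str, expected: str, question_line: str) -> str:
    if recognized == expected:
        return "Correct! Well done."
    # Wrong answer: the prompt supplies the right line as correction.
    return f"Not quite. The line after '{question_line}' is '{expected}'."

# User answers with a line from a different poem (静夜思), so the check fails.
prompt = check_answer("疑是地上霜", "黄河入海流", "白日依山尽")
```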
Fig. 5 shows a block diagram of an educational ability support apparatus based on a child-specific wearable intelligent device according to an embodiment of the present invention.
As shown in fig. 5, the education ability support apparatus includes a first module 501, a second module 502, a third module 503, a fourth module 504 and a fifth module 505. The first module 501 comprises an obtaining unit 5011. The second module 502 comprises a transmission unit 5021, an evaluation unit 5022 and a result unit 5023. The fourth module 504 comprises a communication unit 5041 and a reply unit 5042. The fifth module 505 contains a feedback unit 5051.
The first module 501 is for receiving multimodal input data of a user. The obtaining unit 5011 is configured to obtain multimodal input data input by the user after the wearable smart device dedicated for children is started.
The second module 502 is configured to determine an educational interaction mode based on the multi-modal input data and to obtain the skill level of the user. The transmission unit 5021 is used for receiving the user's recording data and uploading it to a skill server in the cloud server. The evaluation unit 5022 receives the recording data distributed by the skill server and analyzes it to obtain a corresponding evaluation result. The result unit 5023 is used for determining the skill level of the user according to the evaluation result.
The third module 503 is configured to invoke a game interaction module and return question data corresponding to the current skill level of the user through the cloud server.
The fourth module 504 is configured to receive multi-modal answer data given by the user for the question data, and to transmit the multi-modal answer data to the cloud server for processing so as to obtain an answer result. The communication unit 5041 is configured to receive the multi-modal answer data returned by the user. The answer unit 5042 is configured to generate a corresponding answer result based on the Chinese speech recognition capability.
The fifth module 505 is used for feeding back skill forward data via the game interaction module according to the answer result, so as to modify the user's current skill level. The feedback unit 5051 is used for generating skill forward data according to the answer result and feeding it back to the child-specific wearable smart device.
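The module-and-unit decomposition of Fig. 5 can be illustrated with a minimal structural sketch of the first two modules. The class and attribute names mirror the reference numerals in the text, but the method bodies and the score threshold are invented placeholders, not the apparatus's actual logic.

```python
class FirstModule:                          # 501: receives multimodal input data
    def __init__(self):
        self.obtaining_unit = self._obtain  # 5011: obtains input after startup

    def _obtain(self, raw_input):
        return {"multimodal_input": raw_input}


class SecondModule:                         # 502: determines mode and skill level
    def upload_recording(self, recording):  # 5021: transmission unit
        return recording                    # would post to the skill server

    def evaluate(self, recording):          # 5022: evaluation unit
        return {"score": 70}                # placeholder evaluation result

    def skill_level(self, evaluation):      # 5023: result unit
        return "beginner" if evaluation["score"] < 75 else "advanced"


first = FirstModule()
second = SecondModule()
data = first.obtaining_unit("press record button")
evaluation = second.evaluate(second.upload_recording(b"audio"))
print(data, second.skill_level(evaluation))  # skill level -> beginner
```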
Fig. 6 shows a block diagram of an educational ability support system based on a child-specific wearable intelligent device according to an embodiment of the present invention. As shown in fig. 6, completing multi-modal interaction requires the joint participation of a user 601, a child-specific wearable smart device 602, and a cloud (cloud server) 603. The child-specific wearable smart device 602 includes an input/output device 6021, a data processing unit 6022, and an interface unit 6023. The cloud 603 includes a semantic understanding interface 6031, a visual recognition interface 6032, a cognitive computing interface 6033, and an emotion computing interface 6034.
The educational ability support system based on the child-specific wearable intelligent device provided by the invention comprises the child-specific wearable smart device 602 and the cloud 603. The child-specific wearable smart device 602 is a smart device supporting input and output modules such as vision, perception and control, and capable of accessing the Internet, for example a children's story machine, a desk lamp, an alarm clock, a smart speaker, a children's AI robot or a children's watch. It has a multi-modal interaction function: it can receive multi-modal data input by the user, transmit the multi-modal data to the cloud for analysis, obtain multi-modal response data, and output that data on the child-specific wearable smart device.
The client in the child-specific wearable smart device 602 can run in an Android environment; the device can be, for example, an Android children's watch with 4G or even 5G communication capability. In this process, the game interaction module feeds back skill forward data according to the answer result. On the interface of the Android children's watch, an IP character can represent the user in the game interface. As the user advances through the educational learning process, for example by answering an evaluation question correctly, skill forward data such as a stamina value or a reward score is obtained, and the next unit or higher-level educational interaction content is unlocked, which greatly improves the degree of edutainment achieved.
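The skill-forward feedback loop described above (correct answers earn reward points, and enough points unlock the next unit) might be sketched as follows. The point value per answer and the unlock threshold are invented assumptions for illustration; the patent leaves them unspecified.

```python
UNLOCK_THRESHOLD = 10   # assumed number of points needed for the next unit


def apply_skill_forward(points: int, answered_correctly: bool) -> int:
    """Add reward points (an assumed +5) for a correct evaluation answer."""
    return points + 5 if answered_correctly else points


points = 0
for correct in [True, False, True]:     # the user's answers in one session
    points = apply_skill_forward(points, correct)

unlocked_next_unit = points >= UNLOCK_THRESHOLD
print(points, unlocked_next_unit)       # -> 10 True
```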
The cloud 603 provides semantic understanding, visual recognition, cognitive computation and emotion computation capabilities, so as to decide on the multi-modal data to be output by the child-specific wearable smart device.
The input and output device 6021 is used for acquiring input multi-modal data and outputting the multi-modal data to be output. The input multi-modal data may come from the user 601 or from the surrounding environment. Examples of the input and output device 6021 include microphones, speakers, scanners, cameras, and sensing devices that use visible or invisible wavelengths of radiation, signals, environmental data, and so on. Multi-modal data can be acquired through the above input devices. The multi-modal data may include one or more of text, audio, visual and perceptual data, and the present invention is not limited thereto.
The data processing unit 6022 is used to process data generated during multi-modal interaction. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or any conventional processor. The processor is the control center of the terminal, connecting the various parts of the whole terminal through various interfaces and lines.
The child-specific wearable smart device 602 includes a memory, which mainly comprises a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function and an image playing function); the data storage area may store data created from use of the child-specific wearable smart device 602 (such as audio data, browsing records, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Cloud 603 includes semantic understanding interface 6031, visual recognition interface 6032, cognitive computing interface 6033, and emotion computing interface 6034. These interfaces above are in communication with the interface unit 6023 in the child-specific wearable smart device 602. Cloud 603 also includes semantic understanding logic corresponding to semantic understanding interface 6031, visual recognition logic corresponding to visual recognition interface 6032, cognitive computing logic corresponding to cognitive computing interface 6033, and emotion computing logic corresponding to emotion computing interface 6034.
As shown in fig. 6, each capability interface calls a corresponding logic process. The following is a description of the various interfaces:
The semantic understanding interface receives the specific voice instruction forwarded from the interface unit 6023, performs voice recognition on it, and performs natural language processing based on a large corpus.
The visual recognition interface can detect, recognize and track video content concerning human bodies, human faces and scenes according to computer vision algorithms, deep learning algorithms, and the like. That is, the image is recognized according to a preset algorithm and a quantitative detection result is given. The interface has an image preprocessing function, a feature extraction function, a decision function and specific application functions:
The image preprocessing function performs basic processing on the acquired visual data, including color space conversion, edge extraction, image transformation and image thresholding;
The feature extraction function can extract feature information such as skin color, color, texture, motion and coordinates of a target in the image;
The decision function distributes the feature information, according to a certain decision strategy, to the specific multi-modal output device or multi-modal output application that needs it, for example to implement face detection, human limb recognition, motion detection and similar functions.
The cognitive computing interface 6033 is used to process the multi-modal data for data acquisition, recognition and learning, so as to obtain a user portrait, a knowledge graph and the like, and thereby make reasonable decisions about the multi-modal output data.
The emotion computing interface receives the multi-modal data forwarded from the interface unit 6023 and computes the user's current emotional state using emotion computing logic (which may be an emotion recognition technology). Emotion recognition is an important component of affective computing; its research covers facial expression, voice, behavior, text and physiological signal recognition, through which the user's emotional state can be judged. The emotion recognition technology may monitor the user's emotional state through visual emotion recognition alone, or through a combination of visual and voice emotion recognition, and is not limited thereto.
During visual emotion recognition, the emotion computing interface collects human facial expression images using image acquisition equipment, converts them into analyzable data, and then performs expression and emotion analysis using image processing and related technologies. Understanding facial expressions typically requires detecting subtle changes in the expression, such as changes in the cheek muscles, the mouth, and eyebrow movements.
In addition, the educational ability support system based on the child-specific wearable intelligent device provided by the invention can be paired with a program product comprising a series of instructions for executing the steps of the educational ability support method based on the child-specific wearable intelligent device. The program product carries computer instructions comprising computer program code, which may be in the form of source code, object code, an executable file or some intermediate form.
The program product may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the program product may include content that is appropriately increased or decreased as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, the program product does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
Fig. 7 shows a block diagram of an educational ability support system based on a child-specific wearable intelligent device according to another embodiment of the present invention. Completing the multi-modal interaction requires the user 601, the child-specific wearable smart device 602, and the cloud 603. The wearable intelligent device 602 specially used for children comprises a signal acquisition device 701, a display screen 702, a signal output device 703 and a central processing unit 704.
The signal collecting device 701 is used for collecting signals output by a user or an external environment. The signal acquisition device 701 may be a device capable of acquiring a sound signal, such as a microphone, or may be a touch panel. The display screen 702 can present multimodal data input by the user and multimodal response data output. The signal output device 703 is used to output audio data. The signal output device 703 may be a device capable of outputting audio data, such as a power amplifier and a speaker. The central processor 704 can process data generated during the multimodal interaction.
According to an embodiment of the present invention, the child-specific wearable smart device 602 may be a smart device with input/output modules, such as a children's story machine, a desk lamp, an alarm clock, a smart speaker, a children's AI robot, or a children's watch. It has a multi-modal interaction function and is capable of receiving multi-modal data input by a user, transmitting the multi-modal data to the cloud for analysis, obtaining multi-modal response data, and outputting the multi-modal response data on the child-specific wearable smart device.
Fig. 8 shows a flowchart of an educational ability support method based on a child-specific wearable intelligent device according to another embodiment of the present invention.
As shown in fig. 8, in step S801, the child-specific wearable smart device 602 issues a request to the cloud 603. Thereafter, in step S802, the child-specific wearable smart device 602 waits for the cloud 603 to reply. During the waiting period, the device times how long the returned data takes to arrive.
In step S803, if no response data is returned within a predetermined time length, for example more than 5 s, the child-specific wearable smart device 602 falls back to a local reply and generates local generic response data. Then, in step S804, the local generic response is output and the voice playing device is invoked for voice playback.
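Steps S801 to S804 amount to a cloud request with a timeout and a local fallback. A hedged sketch follows, using the 5-second threshold from the text; the request function, the simulated delay, and the generic reply string are assumptions for illustration.

```python
from typing import Optional

TIMEOUT_S = 5.0   # the predetermined time length from step S803
LOCAL_GENERIC_REPLY = "Sorry, let me think about that. Please try again."


def request_cloud(simulated_delay_s: float) -> Optional[str]:
    """Stand-in for the cloud round trip; None models no reply in time."""
    if simulated_delay_s > TIMEOUT_S:
        return None
    return "cloud response"


def reply_with_fallback(simulated_delay_s: float) -> str:
    response = request_cloud(simulated_delay_s)   # S801/S802: request and wait
    if response is None:
        return LOCAL_GENERIC_REPLY                # S803: local generic reply
    return response                               # normal cloud path


print(reply_with_fallback(6.0))   # cloud too slow -> local generic reply
```

In step S804 the chosen string would then be handed to the voice playing device for playback.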
Fig. 9 shows a three-way dataflow graph of a user, a child-specific wearable smart device, and a cloud in accordance with one embodiment of the invention.
In order to realize multi-modal interaction between the child-specific wearable smart device 602 and the user 601, a communication connection needs to be established between the user 601, the child-specific wearable smart device 602, and the cloud 603. The communication connection should be real-time and unobstructed to ensure that the interaction is not affected.
In order to complete the interaction, some conditions or preconditions need to be met. These include the presence of a client in the child-specific wearable smart device 602, and the presence of hardware for the visual, sensory and control functions in the device.
After the previous preparation is completed, the wearable smart device 602 starts to interact with the user 601, and first, the wearable smart device 602 receives the multi-modal input data input by the user. The multimodal input data may be speech data, visual data, tactile data, or may be a user pressing a physical button. The child-specific wearable smart device 602 is configured with a corresponding device for receiving multimodal input data, and is configured to receive multimodal input data sent by the user 601. At this time, the child-dedicated wearable smart device 602 and the user 601 are both parties of the communication, and the direction of data transfer is from the user 601 to the child-dedicated wearable smart device 602.
The child-specific wearable smart device 602 then transmits the multimodal input data to the cloud 603. And determining an educational interaction mode through the multi-modal input data, and acquiring a skill level of the user. The multimodal input data may include various forms of data, for example, text data, speech data, perceptual data, and motion data. At this time, two sides of the data transmission are the wearable smart device 602 dedicated to children and the cloud 603, and the data transmission direction is from the wearable smart device 602 dedicated to children to the cloud 603.
Cloud 603 then returns the question data to child-specific wearable smart device 602. The cloud 603 returns corresponding question data based on the user's current skill level. At this time, the cloud 603 and the child-specific wearable smart device 602 are two parties of communication, and the data is transmitted from the cloud 603 to the child-specific wearable smart device 602.
Then, the child-specific wearable smart device 602 returns the question data to the user 601, and waits for receiving the answer data returned by the user.
Finally, the child-specific wearable smart device 602 returns the answer data to the cloud 603, the cloud 603 generates skill forward data, and the child-specific wearable smart device 602 outputs the skill forward data to the user 601 to correct the user's skill level data.
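The three-way exchange of Fig. 9 can be condensed into a short mock in which the user, the device, and the cloud are reduced to plain functions. The question bank, the grading rule, and the point value are invented for illustration only.

```python
def cloud_question_for(level: str) -> str:
    """Cloud returns question data matching the user's current skill level."""
    bank = {"beginner": "Spell 'cat'.", "advanced": "Spell 'rhythm'."}
    return bank.get(level, bank["beginner"])


def cloud_grade(answer: str, expected: str) -> dict:
    """Cloud checks the answer data and generates skill forward data."""
    correct = answer == expected
    return {"correct": correct, "skill_forward": 5 if correct else 0}


# 1. user -> device: multimodal input; device -> cloud: determine skill level
level = "beginner"
# 2. cloud -> device -> user: question data
question = cloud_question_for(level)
# 3. user -> device -> cloud: answer data; cloud -> device -> user: feedback
result = cloud_grade("c-a-t", "c-a-t")
print(question, result)
```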
In summary, the educational ability support method and system based on the wearable intelligent device for children provided by the invention provide a wearable intelligent device for children, which can receive multi-modal input data input by a user to determine the skill level of the user, and generate corresponding question data according to the skill level of the user under a game interaction module, so as to correct the skill level of the user according to the answer of the user. The invention can realize the positive feedback of the game on education, provides more convenient interactive service for the child user, improves the use experience of the user and achieves the purpose of edutainment.
It is to be understood that the disclosed embodiments of the invention are not limited to the particular structures, process steps, or materials disclosed herein but are extended to equivalents thereof as would be understood by those ordinarily skilled in the relevant arts. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An educational ability support method based on a wearable intelligent device specially used for children, the method is characterized by comprising the following steps:
receiving multimodal input data of a user;
determining an education interaction mode based on the multi-modal input data, and acquiring a skill level of the user;
calling a game interaction module and returning question data corresponding to the current skill level of the user through a cloud server;
receiving multi-modal answer data given by a user aiming at the question data, and further transmitting the multi-modal answer data to a cloud server for processing to obtain an answer result;
and feeding back skill forward data by the game interaction module according to the answer result so as to modify the current skill level of the user.
2. The method of claim 1, wherein the cloud server comprises educational support capabilities comprising: chinese speech recognition capability, spoken English evaluation capability, speech synthesis capability and sound wave transmission communication capability.
3. The method of claim 2, wherein in the educational interaction mode, the child-specific wearable smart device provides at least one of the following educational interaction modes: chinese character recognition, word spelling, and english conversation.
4. The method of claim 3, wherein in the English language dialogue interaction mode, the method comprises the steps of:
receiving recording data input by a user, and uploading the recording data to a skill server in the cloud server;
the skill server distributes the recorded data to the English oral evaluation capability;
analyzing the recording data by the English spoken language evaluation capability to obtain a corresponding evaluation result, and returning the evaluation result to the skill server;
and the skill server determines the skill level of the user according to the evaluation result.
5. The method of claim 2, wherein the step of interacting with the user via the game interaction module comprises the steps of:
the skill server generates the question data and converts the question data into corresponding audio skill question data;
receiving the multi-modal answer data, uploading the multi-modal answer data to the Chinese speech recognition capability, and generating a corresponding answer result;
and the skill server generates skill forward data according to the answer result and feeds the skill forward data back to the special wearable intelligent equipment for the children.
6. The method of any one of claims 1-5, further comprising:
acquiring identity characteristic information of a current user, judging user attributes of the current user, and determining the category of the current user, wherein the category of the user comprises: a child user.
7. A program product comprising a series of instructions for carrying out the method steps according to any one of claims 1 to 6.
8. An educational ability support apparatus based on a wearable smart device dedicated for children, the apparatus comprising:
a first module for receiving multimodal input data of a user;
a second module for determining an educational interaction mode based on the multi-modal input data, obtaining a skill level of a user;
the third module is used for calling a game interaction module and returning question data corresponding to the current skill level of the user through a cloud server;
the fourth module is used for receiving multi-modal answer data given by a user aiming at the question data, and further transmitting the multi-modal answer data to the cloud server for processing to obtain an answer result;
a fifth module for feeding back skill forward data by the game interaction module according to the answer result to modify the current skill level of the user.
9. A child-specific wearable smart device, characterized in that it is equipped with a series of instructions for performing the method steps of any one of claims 1-6.
10. An educational ability support system based on a wearable smart device dedicated for children, the system comprising:
the child-specific wearable smart device of claim 9;
and the cloud server is provided with semantic understanding, visual recognition, cognitive computation and emotion computation so as to decide that the wearable intelligent equipment special for the children outputs multi-mode data.
CN201910916725.8A 2019-09-26 2019-09-26 Educational ability support method and system based on wearable intelligent equipment special for children Pending CN110718119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910916725.8A CN110718119A (en) 2019-09-26 2019-09-26 Educational ability support method and system based on wearable intelligent equipment special for children


Publications (1)

Publication Number Publication Date
CN110718119A true CN110718119A (en) 2020-01-21

Family

ID=69210981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910916725.8A Pending CN110718119A (en) 2019-09-26 2019-09-26 Educational ability support method and system based on wearable intelligent equipment special for children

Country Status (1)

Country Link
CN (1) CN110718119A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031762A (en) * 2021-03-05 2021-06-25 马鞍山状元郎电子科技有限公司 Inductive interactive education method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106781721A (en) * 2017-03-24 2017-05-31 北京光年无限科技有限公司 A kind of children English exchange method and robot based on robot
CN107958433A (en) * 2017-12-11 2018-04-24 吉林大学 A kind of online education man-machine interaction method and system based on artificial intelligence
CN109278051A (en) * 2018-08-09 2019-01-29 北京光年无限科技有限公司 Exchange method and system based on intelligent robot
CN109841122A (en) * 2019-03-19 2019-06-04 深圳市播闪科技有限公司 A kind of intelligent robot tutoring system and student's learning method
CN109951601A (en) * 2019-02-26 2019-06-28 广东小天才科技有限公司 System switching method and system of home teaching learning machine and home teaching learning machine
CN110191372A (en) * 2019-07-03 2019-08-30 百度在线网络技术(北京)有限公司 Multimedia interaction method, system and device


Similar Documents

Publication Publication Date Title
CN110688911B (en) Video processing method, device, system, terminal equipment and storage medium
US20230042654A1 (en) Action synchronization for target object
CN110609620B (en) Human-computer interaction method and device based on virtual image and electronic equipment
US11151997B2 (en) Dialog system, dialog method, dialog apparatus and program
CN109871450B (en) Multi-mode interaction method and system based on textbook reading
US20200126566A1 (en) Method and apparatus for voice interaction
CN112162628A (en) Multi-mode interaction method, device and system based on virtual role, storage medium and terminal
CN110598576A (en) Sign language interaction method and device and computer medium
JP2018014094A (en) Virtual robot interaction method, system, and robot
US11501768B2 (en) Dialogue method, dialogue system, dialogue apparatus and program
CN106774845B (en) intelligent interaction method, device and terminal equipment
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
CN111327772B (en) Method, device, equipment and storage medium for automatic voice response processing
CN110825164A (en) Interaction method and system based on wearable intelligent equipment special for children
TW202138970A (en) Method and apparatus for driving interactive object, device and storage medium
CN109542389B (en) Sound effect control method and system for multi-mode story content output
JP2023552854A (en) Human-computer interaction methods, devices, systems, electronic devices, computer-readable media and programs
CN117541444B (en) Interactive virtual reality talent expression training method, device, equipment and medium
CN112182173A (en) Human-computer interaction method and device based on virtual life and electronic equipment
KR20220129989A (en) Avatar-based interaction service method and apparatus
CN110442867A (en) Image processing method, device, terminal and computer storage medium
CN117313785A (en) Intelligent digital human interaction method, device and medium based on weak population
CN116524791A (en) Lip language learning auxiliary training system based on meta universe and application thereof
CN113205569B (en) Image drawing method and device, computer readable medium and electronic equipment
CN114201596A (en) Virtual digital human use method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200121