CN110309570B - Multi-mode simulation experiment container with cognitive ability and method - Google Patents
Multi-mode simulation experiment container with cognitive ability and method
- Publication number
- CN110309570B (application number CN201910544280.5A)
- Authority
- CN
- China
- Prior art keywords
- user
- simulation experiment
- intention
- behavior
- modal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F30/20 — Computer-aided design [CAD]: design optimisation, verification or simulation
- G06F40/247 — Natural language analysis, lexical tools: thesauruses; synonyms
- G06F40/289 — Natural language analysis, recognition of textual entities: phrasal analysis, e.g. finite state techniques or chunking
- G06F40/30 — Natural language analysis: semantic analysis
- G06Q50/20 — ICT specially adapted for specific business sectors, services: education
- G09B9/00 — Educational or demonstration appliances: simulators for teaching or training purposes
Abstract
The invention provides a multi-modal simulation experiment container with cognitive ability, comprising a simulation experiment container body, a touch display, photosensitive sensors, a single-chip microcomputer, a sound sensor and an intelligent terminal, and also provides a multi-modal simulation experiment container method with cognitive ability. User behavior is sensed through the sensors to obtain a first intention set; keyword extraction is performed on the user's speech to obtain a second intention set; and an intersection operation on the first and second intention sets yields a third intention set. The invention gives the user realistic feedback, requires no deliberate memorization of steps during use, and lets the user understand knowledge through experiment operation instead, which increases students' learning interest and ability to apply knowledge; it also greatly reduces the consumption of experiment materials, leaves no experimental waste to be treated after the experiment, and is convenient to operate.
Description
Technical Field
The invention relates to the field of simulation experiments, and in particular to a multi-modal simulation experiment container with cognitive ability and a corresponding method.
Background
Many chemical experiments in middle-school chemistry textbooks are destructive, consume large amounts of material and are relatively dangerous, so many teachers skip them in class and students can only memorize them through chemical equations and descriptions of the reaction phenomena. The problems that easily arise are that the knowledge points are not firmly remembered, the phenomena are not understood and are easily confused, and the students gain no practical ability. With modern technological development and the application of numerical simulation technology, virtual simulation experiment platforms address this problem, and many chemical experiments can be completed through a virtual simulation system. However, most existing virtual simulation systems use a mouse and keyboard as input, and the few test systems that use VR and AR technology still require the user to memorize a large number of gesture and pen commands, which greatly increases the user's memory load.
In the area of multi-modal fusion, Allport et al. proposed the multi-channel hypothesis and verified in practice that different human perceptual channels occupy different mental resources. Mayer et al. proposed the cognitive theory of multimedia learning, explored human learning efficiency under dual-channel conditions from the perspectives of vision and hearing, and proposed a model of the human information-processing system under audio-visual dual-channel conditions.
Most existing virtual simulation experiment systems take a mouse and keyboard as input and a display as output (for example, simulation laboratory software such as Nboost). They make many chemical experiments that are difficult to perform physically feasible, but they also constrain the user's operation to a great extent, cannot give the user realistic feedback, require the user to memorize steps deliberately, and are inconvenient to operate.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a multi-modal simulation experiment container with cognitive ability and a corresponding method. It effectively addresses the problems that existing virtual experiments cannot give the user realistic feedback, require deliberate memorization of steps during use, and are inconvenient to operate: the invention provides the user with realistic feedback and convenient operation.
A first aspect of the invention provides a multi-modal simulation experiment container with cognitive ability, comprising: a simulation experiment container body, a touch display, photosensitive sensors, a single-chip microcomputer, a sound sensor and an intelligent terminal. A plurality of photosensitive sensors are evenly arranged along the rim of the container-body inlet and are used to determine the operation the user is currently performing by judging how many photosensitive sensors are shaded. The touch display is arranged on the outer wall of the container body, and the single-chip microcomputer and the sound sensor are both arranged at the bottom of the inner wall of the container body. A first input of the single-chip microcomputer is connected to the touch display, a second input is connected to the output of the photosensitive sensors, and a third input is connected to the output of the sound sensor; the data-communication end of the single-chip microcomputer is connected to the data-communication end of the intelligent terminal by wireless transmission.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the touch display includes a function setting button for setting an experiment condition and a used test article.
The second aspect of the present invention provides a method for a multi-modal simulation experiment container with cognitive ability, which is implemented based on the multi-modal simulation experiment container with cognitive ability of the first aspect of the present invention, and includes:
prompting the operation behavior of the user;
sensing user behaviors through a sensor, and comparing the user behaviors with a first label in behavior intentions in a pre-established user behavior library to obtain a first intention set;
extracting keywords from the speech input by the user, and comparing the keywords with a second label of the behavior intentions in the pre-established user behavior library to obtain a second intention set;
and taking intersection operation on the first intention set and the second intention set, if the intersection operation result is not an empty set, successfully performing modal fusion, outputting a first result of the modal fusion, and if the intersection operation result is an empty set, identifying the error type.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the method further includes:
judging whether the user behavior meets the specification according to the first result output by the modal fusion; if so, outputting a second result of the user behavior operation and prompting the next operation; if not, outputting a prompt of the current user-behavior error together with the corresponding wrong user behavior, and giving error-indication feedback.
Further, the error-indication feedback specifically includes: if the user continues the current operation, outputting a third result generated by the user's current operation behavior and explaining the principle behind the third result; and if the user does not continue the current operation, re-acquiring the user behavior.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the sensing, by a sensor, a user behavior specifically includes:
the single chip microcomputer acquires the state of the test article by detecting the shielded number of the photosensitive sensors;
the single chip microcomputer obtains the duration time t1 of the user behavior sound and the maximum audio amplitude f1 in the duration time through the sound sensor, and obtains the user stirring behavior and the stirring speed.
Further, the single-chip microcomputer obtaining the state by detecting the shaded photosensitive sensors is specifically:
the single chip microcomputer detects the signals of the photosensitive sensors, obtains the number M of activated photosensitive sensors and the numbers i and j, and calculates the number d of the shielded photosensitive sensors:
where N+1 is the total number of photosensitive sensors on the container model, mod is the modulo operator, max is the maximum-value operator, and |·| is the absolute-value operator.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the acquiring, by the single chip microcomputer, the duration t1 of the user behavior sound and the maximum amplitude f1 of the audio frequency within the duration through the sound sensor, and the acquiring of the user stirring behavior and the stirring speed specifically includes:
when a user stirs in the simulation experiment container, sound on the inner wall of the container model is sensed by the sound sensor; the duration t1 of the sound and the maximum amplitude f1 of the audio within this duration are acquired,
if the conditions t1 > τ1 and f1 > κ1 are met, the user is performing a stirring action, where τ1 and κ1 are empirical parameters, τ1 > 0 and κ1 > 0;
calculating the speed v:
v=βf1,
if the condition v > v1 is satisfied, the speed v is the stirring speed, where v1 is the stirring-speed threshold, β and v1 are empirical parameters, β > 0 and v1 > 0.
With reference to the second aspect, in a fourth possible implementation manner of the second aspect, the extracting a keyword from a speech input by a user specifically includes:
segmenting the text converted from the speech, filtering out stop words and function words, and merging synonyms to obtain candidate words;
the distance between a candidate word and a keyword is calculated as the Euclidean distance
d_ij = √((x_i − x_j)² + (y_i − y_j)²),
where the point (x_i, y_i) represents the coordinates of the candidate word in the database, the point (x_j, y_j) represents the coordinates of the keyword in the database, and d_ij is the Euclidean distance between (x_i, y_i) and (x_j, y_j);
then, according to the Euclidean distance between (x_i, y_i) and (x_j, y_j), each candidate word is given a weight: if the distance between (x_i, y_i) and (x_j, y_j) is 0, the candidate word's weight is 1 and the candidate word is extracted as a keyword; if the distance is not 0, the candidate word's weight is 0 and the candidate word is discarded.
With reference to the second aspect, in a fifth possible implementation manner of the second aspect, the intersection operation of the first intention set and the second intention set is specifically:
the intersection operation adopts a vector multiplication rule, and the first intention set and the second intention set are respectively mapped into an intention library; the intention library is a pre-established database and contains all intentions;
representing the first intention set as a vector A1 by means of an encoding, and representing the second intention set as a vector A2 by means of the same encoding;
multiplying the vector A1 and the vector A2 to obtain a new vector A, and thereby obtaining a fused third intention set; the formula is specifically:
A = A1·A2ᵀ.
the technical scheme adopted by the invention comprises the following technical effects:
1. the invention is separated from the control of a mouse and a keyboard, provides a real feedback feeling for a user, does not need to carry out an intentional memory step when the user uses the mouse and the keyboard, changes the operation into the application of experiment operation to understand knowledge, increases the learning interest of students and the application capability of the knowledge, greatly reduces the consumption of experiment materials, does not need to process experiment waste products left after the experiment, and is convenient for the user to operate.
2. The invention integrates the algorithms such as intention fusion and the like, greatly improves the intelligence of the system and has more natural and convenient operation.
3. Can carry out accurate perception to user's action, realize operation action control, guide and the visualization of wrong action consequence and show wrong feedback to wrong action, even the student has done wrong operation step and also has corresponding wrong result, let the student to the understanding of experiment more deeply, also have clear understanding to wrong step, easier to understand chemical experiment's reaction mechanism is more intelligent, promotes the effect that virtual experiment experienced.
4. The voice guidance system can perform whole-course voice guidance on user behaviors, not only tells the user about the next operation steps, but also tells the user how to operate in detail, realizes that the whole experience process of a virtual experiment can be automatically completed even if no teacher guides the user, and enhances the manual operation and the automatic exploration capability of students.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to illustrate more clearly the embodiments of the invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below; those skilled in the art can obviously obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a multi-modal simulation experiment container with cognitive ability according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a multi-modal simulation experiment container touch display screen input with cognitive capability according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 4 is a schematic flow chart of step S2 of a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 5 is a schematic flow chart of step S3 of a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 6 is a schematic flow chart of speech recognition in a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 7 is a schematic flow chart of keyword extraction in a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 8 is a schematic flow chart of the inside of a user behavior library of a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 9 is a schematic flow chart of step S4 of a method for implementing a multi-modal simulation experiment container with cognitive ability according to a second embodiment of the present invention;
FIG. 10 is a schematic flowchart illustrating an intersection operation of a first intention set and a second intention set of a multi-modal simulation experiment container method with cognitive ability according to a second embodiment of the present invention;
FIG. 11 is a schematic flow chart of a method for implementing a multi-modal simulation experiment container with cognitive ability according to the third embodiment of the present invention;
fig. 12 is a schematic flow chart of step S9 of a method for implementing a multi-modal simulation experiment container with cognitive ability according to a third embodiment of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the present invention will be explained in detail by the following embodiments and the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and processes are omitted so as to not unnecessarily limit the invention.
Example one
As shown in FIG. 1, the invention provides a multi-modal simulation experiment container with cognitive ability, comprising: a simulation experiment container body 1, a touch display 2, photosensitive sensors 3, a single-chip microcomputer 4, a sound sensor 5 and an intelligent terminal 6. A plurality of photosensitive sensors 3 are evenly arranged along the inlet rim of the simulation experiment container body 1 and are used to determine the operation the user is currently performing by judging how many photosensitive sensors 3 are shaded. The touch display 2 is arranged on the outer wall of the simulation experiment container body 1, and the single-chip microcomputer 4 and the sound sensor 5 are both arranged at the bottom of the inner wall of the simulation experiment container body 1. A first input of the single-chip microcomputer 4 is connected to the touch display 2, a second input is connected to the output of the photosensitive sensors 3, and a third input is connected to the output of the sound sensor 5; the data-communication end of the single-chip microcomputer 4 is connected to the data-communication end of the intelligent terminal 6 by wireless transmission.
The touch display 2 includes function setting buttons 21 for setting the experiment conditions and the test articles used. The function buttons 21 specifically include: a weight setting button 211, a volume setting button 212, a temperature setting button 213, a concentration setting button 214, an alcohol-burner setting button 215, a clear button 216 and a test-article name setting button 217; other types or functions of buttons may of course be provided according to the actual situation, and the invention is not limited here. The weight, volume, temperature and concentration of the test substance (solid/liquid) poured into the simulation experiment container, and any other desired experiment conditions, can be set manually on the touch display 2. The user can set the test article to be used through the test-article name setting button 217 on the touch display.
Each photosensitive sensor 3 is given a unique number (increasing from 0, i.e. 0, 1, 2, ...) consecutively according to its position on the inlet rim of the simulation experiment container body 1. When a user pours from one simulation experiment container body 1 into another, the first container body is pressed against the inlet rim of the other; when a user places a solid experiment model into the simulation experiment container body 1 with the tweezers model, the tweezers press against the inlet rim of the simulation experiment container body 1. The ongoing operation is judged from the number of shaded photosensitive sensors 3. The sound sensor 5 is arranged at the bottom of the simulation experiment container body 1 and is connected to the single-chip microcomputer 4. The more photosensitive sensors 3 there are, the more accurate the judgment, but considering cost and other factors there is an optimal number.
As shown in FIG. 2, the touch display 2 runs an Android apk (Android package) in an Android environment. The terminal first connects to a Web server through an Application Programming Interface (API), then packages the input data strictly according to the HTTP protocol format and sends it to the Web server; the Web server splits and parses the information and then accesses the database. After the information is stored in the database, the system of the intelligent terminal 6 reads the database to complete the data transmission.
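A minimal sketch of this upload path is given below; the endpoint URL, the field names and the JSON payload are illustrative assumptions, since the patent only specifies that the input data are packaged according to the HTTP protocol format and parsed by the Web server before being written to the database.

```python
import requests

# Hypothetical experiment settings entered via the function buttons 211-217
# (the field names and values are assumptions for illustration).
settings = {
    "experiment": "sodium_and_water",
    "weight_g": 0.5,
    "volume_ml": 100,
    "temperature_c": 25,
    "concentration_pct": 98,
}

def upload_settings(server_url: str, payload: dict) -> bool:
    """Package the input data and send it to the Web server over HTTP."""
    try:
        # Assumed endpoint; the patent does not name the actual API route.
        resp = requests.post(f"{server_url}/api/experiment-settings", json=payload, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    ok = upload_settings("http://example-web-server.local", settings)
    print("stored in database" if ok else "upload failed")
```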
Example two
As shown in FIG. 3, the invention provides a multi-modal simulation experiment method with cognitive ability, which comprises the following steps:
s1, prompting a user operation behavior;
s2, sensing user behaviors through a sensor, and comparing the user behaviors with a first label in behavior intentions in a pre-established user behavior library to obtain a first intention set;
s3, extracting keywords from the speech input by the user, and comparing the keywords with a second label of the behavior intentions in the pre-established user behavior library to obtain a second intention set;
s4, performing intersection operation on the first intention set and the second intention set, and judging whether an intersection operation result is not an empty set;
s5, if the judgment result is yes, the mode fusion is successful, and a first result of the mode fusion is output;
and S6, if the judgment result is negative, the mode fusion fails, and the error type is identified.
In step S1, the user's operation may be prompted by voice or by text. On one hand, the operation steps can be prompted under a teacher's guidance; on the other hand, full voice guidance of the user's behavior can be given, which not only tells the user the next operation step but also tells the user in detail how to perform it: 1) prompting, by text or voice, the key phenomena or important results of the operation; 2) prompting the next operation step or method; 3) providing an unattended teaching function that guides the operation of the equipment from beginning to end. For example, the user is prompted by voice which switch to press to start the system, which to press to shut it down, and how the device should be handled. No teacher therefore needs to guide the operation on site, and the user can complete the operation smoothly by following the prompts. Take the reaction of sodium with water and the dilution of concentrated sulfuric acid as examples. The user selects the experiment, and the system gives the related voice prompts. If the reaction of sodium with water is selected, the system voice prompt asks the user to pick up the second simulation experiment container and pour water into the first simulation experiment container; after the water-pouring action is finished, the system voice prompt asks the user to input the size of the sodium piece, whereupon the user takes the tweezers, picks up a sodium piece of the corresponding size and places it into the first simulation experiment container, the tweezers being placed over the beaker so as to shade the photosensitive sensors; finally the system voice prompt asks the user to take the rubber-tipped dropper and press it to drip phenolphthalein solution. If the dilution of concentrated sulfuric acid is selected, the system voice prompt asks the user to select the reagent in the first simulation experiment container, place the second simulation experiment container over the first to pour water, and stir in the first simulation experiment container with the glass rod; the user finally says "finished" and the whole experiment system ends the operation. In this way the whole experience of the virtual experiment can be completed even without a teacher's guidance, which enhances the students' hands-on operation and independent exploration ability.
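A minimal sketch of such a step-by-step guidance loop for the sodium-and-water example follows; the prompt wording and the announce/confirmation interfaces are illustrative assumptions, the patent only requiring that each step be announced and explained by voice or text.

```python
# Hypothetical guidance script for the sodium-and-water experiment described above.
SODIUM_WATER_STEPS = [
    "Pick up the second simulation container and pour water into the first container.",
    "Enter the size of the sodium piece on the touch display.",
    "Use the tweezers to place the sodium piece into the first container.",
    "Take the rubber-tipped dropper and press it to drip phenolphthalein solution.",
]

def run_guidance(steps, announce, wait_for_completion):
    """Announce each step, then wait until the sensed behavior confirms it."""
    for index, step in enumerate(steps, start=1):
        announce(f"Step {index}: {step}")
        wait_for_completion(index)  # e.g. blocks until sensor/speech fusion confirms the step
    announce("The experiment is finished.")

if __name__ == "__main__":
    run_guidance(SODIUM_WATER_STEPS, announce=print, wait_for_completion=lambda i: None)
```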
As shown in fig. 4, in step S2, sensing the user behavior through the sensor specifically includes:
s21, the single chip microcomputer obtains the state of the test article by detecting the shielded number of the photosensitive sensors;
s22, the single chip acquires the duration t1 of the behavior sound of the user and the maximum amplitude f1 of the audio frequency in the duration through the sound sensor, and acquires the stirring behavior and the stirring speed of the user.
In step S21, the single-chip microcomputer obtaining the state of the test article by detecting the number of shaded photosensitive sensors is specifically:
the single chip microcomputer detects the signals of the photosensitive sensors, obtains the number M of activated photosensitive sensors and the numbers i and j, and calculates the number d of the shielded photosensitive sensors:
where N+1 is the total number of photosensitive sensors on the container model, M ≤ N, mod is the modulo operator, max is the maximum-value operator, and |·| is the absolute-value operator. Finally, the intelligent terminal determines the operation performed by the user from the number d of shaded photosensitive sensors and a pre-established database of correspondences between d and the operation steps.
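A minimal sketch of this lookup is given below; the particular d-to-operation correspondences are illustrative assumptions, since the patent leaves the mapping to a pre-established correspondence database and the formula for d from M, i and j is not reproduced here.

```python
# Hypothetical correspondence table between the number d of shaded photosensitive
# sensors and the operation being performed (example entries only).
OPERATION_BY_SHADED_COUNT = {
    3: "pouring from the second container into this container",
    1: "placing a solid sample with the tweezers",
}

def recognize_operation(d: int) -> str:
    """Map the computed number of shaded sensors d to an operation step."""
    return OPERATION_BY_SHADED_COUNT.get(d, "unknown operation")

if __name__ == "__main__":
    print(recognize_operation(3))
```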
Step S22 specifically includes:
when a user stirs in the simulation experiment container, sound on the inner wall of the container model is sensed by the sound sensor; the duration t1 of the sound and the maximum amplitude f1 of the audio within this duration are captured,
if the conditions t1 > τ1 and f1 > κ1 are met, the user is performing a stirring action, where τ1 and κ1 are empirical parameters, τ1 > 0 and κ1 > 0;
calculating the speed v:
v=βf1,
if the condition v > v1 is satisfied, the speed v is the stirring speed, wherein v1 is a stirring speed threshold value, β and v1 are empirical parameters, β >0, and v1>0.
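A minimal sketch of this stirring detection follows; the values chosen for the empirical parameters τ1, κ1, β and v1 are assumptions for illustration, the patent only requiring them to be positive.

```python
# Illustrative empirical parameters (assumed values).
TAU_1 = 0.5    # minimum sound duration t1, in seconds
KAPPA_1 = 0.2  # minimum audio amplitude f1
BETA = 3.0     # amplitude-to-speed factor in v = β·f1
V_1 = 0.3      # stirring-speed threshold v1

def detect_stirring(t1: float, f1: float):
    """Return (is_stirring, stirring_speed) from sound duration t1 and maximum amplitude f1."""
    if t1 > TAU_1 and f1 > KAPPA_1:   # the user is performing a stirring action
        v = BETA * f1                 # v = β·f1
        if v > V_1:                   # v qualifies as the stirring speed
            return True, v
        return True, None
    return False, None

if __name__ == "__main__":
    print(detect_stirring(t1=0.8, f1=0.4))
```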
As shown in fig. 5, in step S3, extracting the keywords from the speech input by the user specifically includes:
s31, segmenting the text converted from the speech, filtering out stop words and function words, and merging synonyms to obtain candidate words;
s32, calculating the distance between the candidate word and the keyword as the Euclidean distance
d_ij = √((x_i − x_j)² + (y_i − y_j)²),
where the point (x_i, y_i) represents the coordinates of the candidate word in the database, the point (x_j, y_j) represents the coordinates of the keyword in the database, and d_ij is the Euclidean distance between them;
s33, according to the Euclidean distance between (x_i, y_i) and (x_j, y_j), giving each candidate word a weight: if the distance between (x_i, y_i) and (x_j, y_j) is 0, the candidate word's weight is 1 and the candidate word is extracted as a keyword; if the distance is not 0, the candidate word's weight is 0 and the candidate word is discarded.
In step S31, the speech input by the user is converted into text, which involves speech-recognition technology. The invention adopts cloud speech recognition, a technology that recognizes and processes speech in a "cloud computing" manner: the computing and storage load is placed on the cloud side, which reduces the development cost of embedded devices, lets developers focus on the application requirements, and shortens the application-development cycle. The core of the cloud speech-recognition technology consists of three parts. (1) Application access service: a speech application server based on the HTTP protocol is implemented; the recognition feature library used in speech recognition, the complex computation and the storage of speech data are all handled by the configuration-management server, and the processing result is returned to the client. (2) Configuration-management service: including the management service and the configuration-management database service. (3) Speech function service: composed of a speech-dictation server and a speech-synthesis server, where the speech-dictation server is responsible for the speech-file-to-text recognition service and the speech-synthesis server is responsible for the text-to-speech synthesis service.
The cloud speech-recognition technology also provides a speech client subsystem with integrated audio-processing and audio codec modules, and offers a complete API (application programming interface); users can call the combined interfaces to obtain the speech function services for different scenarios.
As shown in FIG. 6, the invention uses Baidu speech recognition: the input speech is audio-encoded and uploaded to the cloud server, the cloud server recognizes the speech as text, and the text is then sent back to the experiment terminal over the Internet.
In step S31, as shown in fig. 7, the text converted from speech is segmented by the chinese segmentation module, and then candidate words are obtained by the filtering stop word and the synonym merging module.
Chinese word segmentation module: chinese text is typically divided into sets of words with part-of-speech tags by means of a topic dictionary, a segmentation dictionary, or the like.
Stop-word and function-word filtering module: first, according to the part-of-speech tags produced by the Chinese word segmenter, prepositions, articles, conjunctions, pronouns and the like (i.e. all function words) are deleted; second, stop words are removed: a stop-word list is created, some high-frequency content words occurring across all document texts are added to it, and the stop words are filtered out according to this list.
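A minimal sketch of this filtering chain is given below; it assumes the jieba part-of-speech tokenizer and an illustrative stop-word list and synonym table, since the patent does not name a particular segmenter or word lists.

```python
import jieba.posseg as pseg  # Chinese word segmentation with part-of-speech tags

# POS tag prefixes treated as function words (prepositions, conjunctions, pronouns, particles ...).
FUNCTION_WORD_TAGS = {"p", "c", "r", "u", "y"}
STOP_WORDS = {"的", "了", "一个", "进行"}   # illustrative stop-word list
SYNONYMS = {"快速": "快"}                   # illustrative synonym-merging table

def extract_candidates(text: str):
    """Segment the recognized text, drop function words and stop words, merge synonyms."""
    candidates = []
    for pair in pseg.lcut(text):
        if pair.flag[:1] in FUNCTION_WORD_TAGS:  # delete function words by POS tag
            continue
        if pair.word in STOP_WORDS:              # delete stop words by list lookup
            continue
        candidates.append(SYNONYMS.get(pair.word, pair.word))
    return candidates

if __name__ == "__main__":
    print(extract_candidates("请把水倒入第一个烧杯并进行搅拌"))
```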
In step S32, keyword matching requires solving two problems: first, determining the matching algorithm, that is, determining the degree of match between two words by a distance calculation; second, extracting the keywords, that is, assigning each candidate word a weight according to the degree of match, which serves as the keyword-extraction criterion.
A distance measure evaluates the similarity between samples, which are distributed over different regions, by their distance: the smaller the distance between samples, the more similar they are; conversely, the larger the distance, the greater the difference. The most commonly used important distance measures include the Euclidean distance, the Mahalanobis distance, the Minkowski distance, the squared distance and non-linear measures. The invention adopts the Euclidean distance as the way of calculating the similarity of word segments; the distance between a candidate word and a keyword is calculated as
d_ij = √((x_i − x_j)² + (y_i − y_j)²),
where the point (x_i, y_i) represents the coordinates of the candidate word in the database, the point (x_j, y_j) represents the coordinates of the keyword in the database, and d_ij is the Euclidean distance between them. The way of calculating the similarity between word segments in the invention can also be expressed in other forms, and the invention is not limited here.
In step S33, according to the Euclidean distance between (x_i, y_i) and (x_j, y_j), each candidate word is given a weight: if the distance between (x_i, y_i) and (x_j, y_j) is 0, the candidate word's weight is 1 and the candidate word is extracted as a keyword; if the distance is not 0, the candidate word's weight is 0 and the candidate word is discarded.
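A minimal sketch of this distance-based selection follows; the 2-D coordinates assigned to the words are illustrative assumptions, since the patent only states that candidate words and keywords have coordinates in the database.

```python
import math

# Hypothetical word coordinates in the database (illustrative values only).
WORD_COORDINATES = {
    "倒水": (1.0, 2.0),   # "pour water"
    "搅拌": (4.0, 5.0),   # "stir"
}

def euclidean(p, q):
    """d_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)"""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def select_keywords(candidates, keywords):
    """Keep a candidate (weight 1) if its distance to some keyword is 0; otherwise discard it (weight 0)."""
    selected = []
    for cand in candidates:
        for kw in keywords:
            if euclidean(WORD_COORDINATES[cand], WORD_COORDINATES[kw]) == 0:
                selected.append(cand)
                break
    return selected

if __name__ == "__main__":
    print(select_keywords(["倒水", "搅拌"], ["倒水"]))  # -> ['倒水']
```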
For steps S2 and S3 a user behavior library is pre-established; it contains the behavior intentions that may occur in the user's experiments (for example, a pouring action, a tweezer-picking action, and so on). The embodiments of the invention mainly take the reaction of sodium with water and the dilution of concentrated sulfuric acid as examples.
As shown in FIG. 8, the user behavior library is a pre-established expert knowledge base of intentions, in which the intention-inference rules are stored, i.e. the sufficient or necessary conditions between the user's behavior intention and the user's behavior. For example, the user action "touching the switch with the hand" is a necessary condition for the intention "the experiment has been prepared". Every behavior intention in the user behavior library carries two labels. The first is the sensing-intention label, which exists as a numeric code and is used to match the received sensor signal to the user behavior: each behavior intention has a unique corresponding number in the user behavior library, and this number is matched against the signal received from the sensors, so that the sensed signal can be converted into the user's behavior intention and output by the system; through this label the conversion from user behavior to sensor parameters to the user's behavior intention is completed.
The other is the second label, the voice-intention label, which exists as keyword text: each behavior intention is assigned a keyword (such as "stir", "fast", etc.). Intention matching is completed by computing the distance between the keywords extracted from the user's speech and the keywords in the user behavior library, and the user's behavior intention is finally output.
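A minimal sketch of one way to hold such double-labelled entries is given below; the numeric codes and keywords shown are illustrative assumptions rather than the patent's actual library contents.

```python
# Hypothetical user behavior library: each behavior intention carries a numeric
# sensing-intention label and a keyword-text voice-intention label.
USER_BEHAVIOR_LIBRARY = {
    "pour water":       {"sensor_label": 0, "voice_keyword": "倒水"},
    "add sodium piece": {"sensor_label": 1, "voice_keyword": "加钠"},
    "stir":             {"sensor_label": 2, "voice_keyword": "搅拌"},
}

def intents_from_sensor(sensor_code: int):
    """First intention set: intentions whose sensing label matches the received sensor signal."""
    return {name for name, labels in USER_BEHAVIOR_LIBRARY.items()
            if labels["sensor_label"] == sensor_code}

def intents_from_keywords(keywords):
    """Second intention set: intentions whose voice keyword appears among the extracted keywords."""
    return {name for name, labels in USER_BEHAVIOR_LIBRARY.items()
            if labels["voice_keyword"] in keywords}

if __name__ == "__main__":
    print(intents_from_sensor(0), intents_from_keywords(["倒水"]))
```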
In step S4, as shown in fig. 9, the intersection operation of the first intention set and the second intention set specifically includes:
s41, mapping the first intention set and the second intention set to an intention library respectively by adopting a vector multiplication rule in intersection operation; the intention library is a pre-established database and contains all intentions;
s42, representing the first intention set as a vector A1 by means of an encoding, and representing the second intention set as a vector A2 by means of the same encoding;
s43, multiplying the vector A1 and the vector A2 to obtain a new vector A, and thereby obtaining a fused third intention set; the formula is specifically:
A = A1·A2ᵀ.
in steps S41-S43, the dual-mode fusion process of the invention fuses information from two modes of voice and sensor, widens the coverage range of information contained in input data, and improves the precision and robustness of the system.
In a dual-mode system, according to different levels where information fusion is performed, the information fusion process can be divided into three levels, namely: data layer fusion, feature layer fusion and decision layer fusion. According to the characteristics of the virtual chemical simulation experiment system, decision layer fusion is selected. The decision layer fusion is to regard each mode as an independent part, firstly execute the matching and recognition process of each mode respectively, then synthesize the output or decision of the model to generate the final decision result, and complete the fusion of the decision layer.
As shown in fig. 10, the bimodal fusion process of the present system is: firstly, the decision results of each system are obtained, and in the system, a second intention set generated after voice recognition and a first intention set generated after sensor recognition are respectively obtained; then, the two decision results are fused, that is, the intersection of the two intention sets is taken. If the two sets have crossed parts, namely the mode fusion is successful, outputting a final result; if the intersection of the two sets is empty, the mode fusion is failed.
The fusion rule adopts a vector multiplication rule: firstly, mapping a second intention set and a first intention set obtained by voice data recognition and sensor data recognition into an intention library respectively; the intention library is a pre-established database and contains all intentions;
Then the invention uses 0/1 encoding to represent the first intention set as a vector A1 and the second intention set as a vector A2: a position is marked 1 if the corresponding element exists in both the intention library and the first intention set, and 0 if it exists in the intention library but not in the first intention set; likewise, a position is marked 1 if the corresponding element exists in both the intention library and the second intention set, and 0 if it exists in the intention library but not in the second intention set. Finally, the vector A1 is multiplied with the vector A2 to obtain a new vector A, which gives the fused third intention set. The formula is:
A = A1·A2ᵀ
The elements contained in the intention library are shown in the following table (the invention takes the reaction of sodium with water and the dilution of concentrated sulfuric acid as examples; the contents can be adapted to the specific experiment):
elements contained in intent libraries
If the second intention set generated by speech recognition is {pouring} and the first intention set generated by sensor recognition is {pouring}, and the first and second intention sets are each mapped into the intention library (the elements of the intention library are the elements of intention types 1 to 7 connected in sequence), then the vector forms of the second and first intention sets are both [1 0 0 ... 0]; the new vector of the third intention set obtained by multiplying the two vectors is [1 0 0 ... 0], the fused third intention set is {pouring}, and the fusion succeeds.
If the second intention set generated by speech recognition is {pouring water} and the first intention set generated by sensor recognition is {taking an appropriate amount of sodium}, then the vector forms of the two intention sets have their 1-entries at different positions of the intention library; the new vector of the third intention set obtained by multiplying the two vectors is [0 0 0 ... 0], the fused third intention set is empty, and the fusion fails.
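A minimal sketch of this 0/1-encoded fusion follows; the seven-element intention library below is an illustrative assumption following the sodium-and-water example, and the multiplication is taken element-wise, consistent with the worked examples above.

```python
# Hypothetical intention library (intention types 1-7 connected in sequence).
INTENTION_LIBRARY = [
    "pour water", "take an appropriate amount of sodium", "add sodium piece",
    "drip phenolphthalein", "select reagent", "stir", "finish",
]

def encode(intentions):
    """0/1 encoding of an intention set against the intention library."""
    return [1 if name in intentions else 0 for name in INTENTION_LIBRARY]

def fuse(first_set, second_set):
    """Element-wise product of the encoded vectors; a non-empty result means the modal fusion succeeded."""
    a1, a2 = encode(first_set), encode(second_set)
    a = [x * y for x, y in zip(a1, a2)]
    return {name for name, bit in zip(INTENTION_LIBRARY, a) if bit == 1}

if __name__ == "__main__":
    print(fuse({"pour water"}, {"pour water"}))                             # fusion succeeds
    print(fuse({"take an appropriate amount of sodium"}, {"pour water"}))   # fusion fails: empty set
```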
The elements contained in the intention library may be adjusted according to the actual situation, and the invention is not limited here.
The invention breaks away from mouse-and-keyboard control and provides the user with realistic feedback; the user needs no deliberate memorization of steps during use and instead understands knowledge through experiment operation, which increases students' learning interest and ability to apply knowledge, greatly reduces the consumption of experiment materials, leaves no experimental waste to be treated after the experiment, and is convenient to operate.
The invention integrates algorithms such as intention fusion, which greatly improves the intelligence of the system and makes operation more natural and convenient.
EXAMPLE III
As shown in fig. 11, an embodiment of the present invention provides a multi-modal simulation experiment method with cognitive ability, including:
s1, prompting a user operation behavior;
s2, sensing user behaviors through a sensor, and comparing the user behaviors with first labels in behavior intentions in a user behavior library established in advance to obtain a first intention set;
s3, extracting keywords from the speech input by the user, and comparing the keywords with a second label of the behavior intentions in the pre-established user behavior library to obtain a second intention set;
s4, performing intersection operation on the first intention set and the second intention set, and judging whether an intersection operation result is not an empty set;
s5, if the judgment result is yes, the mode fusion is successful, the first result of the mode fusion is output,
s6, if the judgment result is negative, the modal fusion fails, and the error type is identified;
s7, judging whether the user behavior meets the specification or not according to the first result output by the modal fusion,
s8, if the judgment result is yes, outputting a second result of the user behavior operation, and prompting the next operation;
and S9, if the judgment result is negative, outputting a prompt of the current user behavior error and the corresponding user error behavior, and performing error indication feedback.
In step S9, the error feedback specifically includes:
s91, prompting the user that the current operation is wrong, explaining the third result that would follow the wrong operation, and asking whether the user wishes to continue;
s92, if the user selects that the result is positive, outputting a third result generated according to the current operation behavior of the user, and explaining the principle of generating the third result;
and S93, if the user selection result is negative, re-acquiring the user behavior, and returning to the step S2.
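A minimal sketch of this error-feedback branch is given below; the prompt texts and the interfaces for asking the user and re-acquiring the behavior are illustrative assumptions.

```python
def error_feedback(wrong_behavior, consequence, explanation,
                   ask_user_to_continue, reacquire_behavior):
    """Steps S91-S93: warn about the wrong operation, then branch on the user's choice."""
    print(f"Warning: the current operation '{wrong_behavior}' is incorrect.")
    print(f"If you continue, the result will be: {consequence}")
    if ask_user_to_continue():
        # S92: output the third result produced by the wrong operation and explain its principle.
        print(consequence)
        print(f"Principle: {explanation}")
    else:
        # S93: re-acquire the user behavior and return to step S2.
        reacquire_behavior()

if __name__ == "__main__":
    error_feedback(
        "adding a large piece of sodium",
        "the reaction is too violent and may splash out of the container",
        "sodium reacts vigorously with water, releasing hydrogen and heat",
        ask_user_to_continue=lambda: True,
        reacquire_behavior=lambda: None,
    )
```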
In steps S8 and S9, the prompt may be in a form of voice or in a form of text, which is not limited herein.
Through the error-display feedback mechanism, the invention can accurately sense the user's behavior, monitor and guide the operation behavior, visualize the consequences of wrong behavior, and give error-display feedback on wrong behavior. Even if a student performs a wrong operation step, a corresponding wrong result is produced, so that the student understands the experiment more deeply, clearly understands the wrong step, and more easily understands the reaction mechanism of the chemical experiment; the system is more intelligent and the virtual-experiment experience is improved.
By analyzing the functions and requirements of a virtual-reality system, the invention designs and develops a virtual-reality interaction system based on sensing devices: the virtual scene is modelled with 3D Max, and the animation is produced on the Unity3D platform. A chemical experiment platform is established on the basis of the two experiments of concentrated-sulfuric-acid dilution and the reaction of sodium with water. When the user finishes an operation, the system gives the user visual and auditory feedback. For example, when the user finishes the water-pouring operation, the virtual experiment scene synchronously completes the corresponding water-pouring action in the form of animation and video, so that the user feels that a real chemical experiment is being performed.
Although the embodiments of the invention have been described with reference to the accompanying drawings, this does not limit the scope of protection of the invention; those skilled in the art should understand that various modifications and variations can be made, without creative effort, on the basis of the technical solution of the invention.
Claims (10)
1. A multi-modal simulation experiment container with cognitive ability, characterized by comprising: a simulation experiment container body, a touch display, photosensitive sensors, a single-chip microcomputer, a sound sensor and an intelligent terminal, wherein a plurality of photosensitive sensors are evenly arranged along the rim of the container-body inlet and are used to determine the operation the user is currently performing by judging how many photosensitive sensors are shaded; the touch display is arranged on the outer wall of the simulation experiment container body; the single-chip microcomputer and the sound sensor are both arranged at the bottom of the inner wall of the simulation experiment container body; a first input of the single-chip microcomputer is connected to the touch display, a second input is connected to the output of the photosensitive sensors, and a third input is connected to the output of the sound sensor; and the data-communication end of the single-chip microcomputer is connected to the data-communication end of the intelligent terminal by wireless transmission.
2. The multi-modal simulation experiment container with cognitive ability of claim 1, wherein the touch display includes function setting buttons for setting the experiment conditions and the test articles used.
3. A multi-modal simulation experiment container method with cognitive ability, which is implemented based on the multi-modal simulation experiment container with cognitive ability of claim 1 or 2, and comprises:
prompting the operation behavior of the user;
sensing user behaviors through a sensor, and comparing the user behaviors with a first label in behavior intentions in a pre-established user behavior library to obtain a first intention set;
extracting keywords from the speech input by the user, and comparing the keywords with a second label of the behavior intentions in the pre-established user behavior library to obtain a second intention set;
and taking intersection operation on the first intention set and the second intention set, if the intersection operation result is not an empty set, successfully performing modal fusion, outputting a first result of the modal fusion, and if the intersection operation result is an empty set, identifying the error type.
4. The multi-modal simulation experiment container method with cognitive capabilities of claim 3, further comprising:
judging whether the user behavior meets the specification according to the first result output by the modal fusion; if so, outputting a second result of the user behavior operation and prompting the next operation; if not, outputting a prompt of the current user-behavior error together with the corresponding wrong user behavior, and giving error-indication feedback.
5. The multi-modal simulation experiment container method with cognitive ability of claim 4, wherein the error-indication feedback is specifically: if the user continues the current operation, outputting a third result generated according to the user's current operation behavior and explaining the principle behind the third result; and if the user does not continue the current operation, re-acquiring the user behavior.
6. The multi-modal simulation experiment container method with cognitive ability of claim 3, wherein the sensing of user behavior through a sensor specifically comprises:
the single chip microcomputer acquires the state of the test article by detecting the shielded number of the photosensitive sensors;
the single chip microcomputer obtains the duration time t1 of the user behavior sound and the maximum audio amplitude f1 in the duration time through the sound sensor, and obtains the user stirring behavior and the stirring speed.
7. The multi-modal simulation experiment container method with cognitive ability according to claim 6, wherein the state of the test object obtained by the single chip microcomputer by detecting the number of shielded photosensitive sensors is specifically as follows:
the single chip microcomputer detects the signals of the photosensitive sensors, obtains the number M of activated photosensitive sensors and the numbers i and j, and calculates the number d of the shielded photosensitive sensors:
where N+1 is the total number of photosensitive sensors on the container model, mod is the modulo operator, max is the maximum-value operator, and |·| is the absolute-value operator.
8. The multi-modal simulation experiment container method with cognitive ability of claim 6, wherein the obtaining, by the single chip microcomputer through the sound sensor, of the duration t1 of the sound of the user behavior and the maximum audio amplitude f1 within that duration, and the obtaining of the user stirring behavior and the stirring speed, are specifically as follows:
when the user stirs in the simulation experiment container, the sound sensor senses the sound on the inner wall of the container model, and the duration t1 of the sound and the maximum audio amplitude f1 within this duration are captured;
if the conditions t1 > τ1 and f1 > κ1 are satisfied, the user is stirring, where τ1 and κ1 are empirical parameters, τ1 > 0 and κ1 > 0;
the speed v is calculated as:
v = β·f1,
and if the condition v > v1 is satisfied, v is the stirring speed, wherein v1 is the stirring speed threshold, β and v1 are empirical parameters, β > 0, and v1 > 0.
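A minimal Python sketch of the stirring detection in claim 8 follows; the concrete values chosen for the empirical parameters τ1, κ1, β and v1 are placeholders (the claim only requires them to be positive), and the function name is an assumption.

```python
# Placeholder values for the empirical parameters (the claim only requires > 0)
TAU_1 = 0.5     # minimum sound duration, seconds
KAPPA_1 = 0.1   # minimum audio amplitude
BETA = 2.0      # amplitude-to-speed scaling factor
V_1 = 0.3       # stirring-speed threshold

def detect_stirring(t1: float, f1: float):
    """Detect stirring and estimate its speed from the sound sensor (claim 8).

    t1: duration of the behavior sound; f1: maximum amplitude within t1.
    Returns (is_stirring, stirring_speed or None).
    """
    if t1 > TAU_1 and f1 > KAPPA_1:   # both conditions met: the user is stirring
        v = BETA * f1                  # v = beta * f1
        if v > V_1:
            return True, v             # v qualifies as the stirring speed
        return True, None
    return False, None

print(detect_stirring(t1=1.2, f1=0.4))   # (True, 0.8)
```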
9. The multi-modal simulation experiment container method with cognitive ability according to claim 3, wherein the extracting of keywords from the voice input by the user is specifically:
performing word segmentation on the text converted from the voice, filtering out stop words and function words, and merging synonyms to obtain candidate words;
the distance between a candidate word and a keyword is calculated as:
d_ij = √((x_i − x_j)² + (y_i − y_j)²),
wherein point (x_i, y_i) represents the coordinates of the candidate word in the database, point (x_j, y_j) represents the coordinates of the keyword in the database, and d_ij is the Euclidean distance between point (x_i, y_i) and point (x_j, y_j);
according to the Euclidean distance between point (x_i, y_i) and point (x_j, y_j), each candidate word is given a weight as follows:
if the distance between (x_i, y_i) and (x_j, y_j) is 0, the weight of the candidate word is 1 and the candidate word is extracted as a keyword; if the distance between (x_i, y_i) and (x_j, y_j) is not 0, the weight of the candidate word is 0 and the candidate word is discarded.
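The keyword filtering rule of claim 9 can be sketched in Python as below; the word-to-coordinate mappings stand in for the pre-established keyword database, and the example words and coordinates are invented for illustration.

```python
import math

def extract_keywords(candidate_coords, keyword_coords):
    """Keep only candidates that coincide with a keyword in the database (claim 9).

    candidate_coords / keyword_coords map words to (x, y) coordinates; the
    mapping itself is assumed to come from the pre-built keyword database.
    Weight is 1 when the Euclidean distance to some keyword is 0, else 0.
    """
    keywords = []
    for word, (xi, yi) in candidate_coords.items():
        for _, (xj, yj) in keyword_coords.items():
            d_ij = math.hypot(xi - xj, yi - yj)   # Euclidean distance
            if d_ij == 0:                          # weight 1: extract as keyword
                keywords.append(word)
                break
        # weight 0 for every keyword: the candidate word is discarded
    return keywords

# Illustrative coordinates (assumed, not from the patent)
candidates = {"dilute": (3, 4), "slowly": (7, 1)}
database_keywords = {"dilute": (3, 4), "pour": (5, 5)}
print(extract_keywords(candidates, database_keywords))   # ['dilute']
```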
10. The multi-modal simulation experiment container method with cognitive ability of claim 3, wherein the intersection operation of the first intention set and the second intention set is specifically:
the intersection operation adopts a vector multiplication rule, and the first intention set and the second intention set are respectively mapped into an intention library; the intention library is a pre-established database and contains all intentions;
representing the first intention set as a vector A1 in a coding mode, and representing the second intention set as a vector A2 in a coding mode;
multiplying the vector A1 and the vector A2 to obtain a new vector A, and further obtaining the fused third intention set, wherein the formula is specifically:
A = A1 · A2^T.
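A possible reading of the vector-product fusion in claim 10 is sketched below in Python with NumPy; the binary (one-entry-per-intention) encoding over the intention library and the use of the diagonal of A to read off the fused set are assumptions, since the claim only states that the sets are encoded as vectors and multiplied as A = A1·A2^T.

```python
import numpy as np

def fuse_by_vector_product(first_set, second_set, intention_library):
    """Encode both intention sets over the intention library and fuse them (claim 10).

    The binary column-vector encoding is an assumption; the patent only states
    that the sets are represented as vectors in a coding mode.
    """
    a1 = np.array([1 if i in first_set else 0 for i in intention_library]).reshape(-1, 1)
    a2 = np.array([1 if i in second_set else 0 for i in intention_library]).reshape(-1, 1)
    A = a1 @ a2.T   # A = A1 * A2^T, an outer product over the intention library
    # An intention whose diagonal entry is non-zero is present in both sets,
    # so the diagonal yields the fused third intention set
    fused = {intent for k, intent in enumerate(intention_library) if A[k, k] != 0}
    return A, fused

library = ["pour_liquid", "stir_solution", "heat_solution"]   # assumed intention library
_, third_set = fuse_by_vector_product({"stir_solution"}, {"stir_solution", "heat_solution"}, library)
print(third_set)   # {'stir_solution'}
```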
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910544280.5A CN110309570B (en) | 2019-06-21 | 2019-06-21 | Multi-mode simulation experiment container with cognitive ability and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110309570A CN110309570A (en) | 2019-10-08 |
CN110309570B (en) | 2022-11-04 |
Family
ID=68076133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910544280.5A Active CN110309570B (en) | 2019-06-21 | 2019-06-21 | Multi-mode simulation experiment container with cognitive ability and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110309570B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111665941B (en) * | 2020-06-07 | 2023-12-22 | 济南大学 | Virtual experiment-oriented multi-mode semantic fusion human-computer interaction system and method |
CN111667733A (en) * | 2020-06-17 | 2020-09-15 | 济南大学 | Method and device for sensing container position in simulation experiment operation |
CN111968470B (en) * | 2020-09-02 | 2022-05-17 | 济南大学 | Pass-through interactive experimental method and system for virtual-real fusion |
US11494996B2 (en) * | 2020-11-30 | 2022-11-08 | International Business Machines Corporation | Dynamic interaction deployment within tangible mixed reality |
CN113296607B (en) * | 2021-05-27 | 2022-01-14 | 北京润尼尔网络科技有限公司 | VR-based multi-user virtual experiment teaching system |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018113526A1 (en) * | 2016-12-20 | 2018-06-28 | 四川长虹电器股份有限公司 | Face recognition and voiceprint recognition-based interactive authentication system and method |
CN109545002A (en) * | 2018-12-05 | 2019-03-29 | 济南大学 | A kind of container suite and its application for virtual experimental |
Non-Patent Citations (1)
Title |
---|
Intelligent Analysis of Students' Learning Interest in a Classroom Teaching Environment; Chen Jingying et al.; E-Education Research (《电化教育研究》); 2018-07-31 (No. 08); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110309570B (en) | Multi-mode simulation experiment container with cognitive ability and method | |
US10664060B2 (en) | Multimodal input-based interaction method and device | |
EP3652734B1 (en) | Voice data processing method and electronic device supporting the same | |
CN110598576B (en) | Sign language interaction method, device and computer medium | |
CN111651035B (en) | Multi-modal interaction-based virtual experiment system and method | |
CN110286762B (en) | Virtual experiment platform with multi-mode information processing function | |
KR20150005027A (en) | device for recognizing voice and method for recognizing voice | |
CN108227565A (en) | A kind of information processing method, terminal and computer-readable medium | |
CN104361896B (en) | Voice quality assessment equipment, method and system | |
KR20210032875A (en) | Voice information processing method, apparatus, program and storage medium | |
CN112562723B (en) | Pronunciation accuracy determination method and device, storage medium and electronic equipment | |
CN116303962B (en) | Dialogue generation method, training method, device and equipment for deep learning model | |
WO2022086654A1 (en) | Systems, methods, and apparatus for providing accessible user interfaces | |
CN111796926A (en) | Instruction execution method and device, storage medium and electronic equipment | |
CN113822076A (en) | Text generation method and device, computer equipment and storage medium | |
CN115757692A (en) | Data processing method and device | |
CN116662496A (en) | Information extraction method, and method and device for training question-answering processing model | |
CN116561275A (en) | Object understanding method, device, equipment and storage medium | |
CN116955568A (en) | Question answering method and device based on instruction manual, electronic equipment and storage medium | |
CN113205569B (en) | Image drawing method and device, computer readable medium and electronic equipment | |
CN113591495A (en) | Speech translation method, device and storage medium | |
CN111459443A (en) | Character point-reading method, device, equipment and readable medium | |
KR20200080389A (en) | Electronic apparatus and method for controlling the electronicy apparatus | |
EP4276827A1 (en) | Speech similarity determination method, device and program product | |
CN111462548A (en) | Paragraph point reading method, device, equipment and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||