CN115083222A - Information interaction method and device, electronic equipment and storage medium


Info

Publication number
CN115083222A
Authority
CN
China
Prior art keywords: electronic, sounding, book, historical, sound
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211001562.9A
Other languages
Chinese (zh)
Other versions
CN115083222B (en)
Inventor
宣果
容少运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinditai Electronic Co., Ltd.
Original Assignee
Shenzhen Xinditai Electronic Co., Ltd.
Application filed by Shenzhen Xinditai Electronic Co., Ltd.
Priority to CN202211001562.9A
Publication of CN115083222A
Application granted
Publication of CN115083222B
Legal status: Active

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 - Electrically-operated educational appliances
    • G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/062 - Combinations of audio and printed presentations, e.g. magnetically striped cards, talking books, magnetic tapes with printed texts thereon
    • G09B 5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides an information interaction method and apparatus, an electronic device, and a storage medium. When interacting with an intelligent sounding device, the method enters a corresponding electronic sounding mode according to different user identifiers: it can adjust a first electronic sounding book using historical teaching data, and it can play a second electronic sounding book based on eye movement data, thereby enriching the forms of information interaction supported by the intelligent sounding device.

Description

Information interaction method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video data processing, and in particular, to an information interaction method and apparatus, an electronic device, and a storage medium.
Background
With the progress and development of society, most intelligent sounding devices on the market (point-reading machines or learning machines) have a point-reading function, which helps parents guide students through their lessons, corrects students' pronunciation, and lets students study on their own anytime and anywhere. In a practical interactive scenario, taking point reading as an example, the current intelligent sounding device outputs the sound or paraphrase corresponding to the point-read content according to the click position of a point-reading pen or a finger.
However, in current intelligent sounding devices, the sounds and paraphrases of point-read content are usually pre-stored metadata that is not differentiated by user, so the form of information interaction offered by these devices is relatively limited.
Therefore, it is desirable to provide an information interaction method and apparatus to solve the above technical problems.
Disclosure of Invention
The embodiment of the invention provides an information interaction method and device, which can enrich the information interaction form of intelligent sound-generating equipment.
The embodiment of the invention provides an information interaction method, which comprises the following steps:
in response to an interactive operation for the intelligent sounding device, determining a user identifier corresponding to the interactive operation;
when the user identifier is a student identifier, triggering the intelligent sounding device to enter a teaching mode, and acquiring, in the teaching mode, historical teaching data and a first electronic sounding book corresponding to the student identifier; adjusting sounding parameters of the first electronic sounding book based on the historical teaching data; and, in response to a playing operation for the first electronic sounding book, playing the first electronic sounding book according to the adjusted sounding parameters;
when the user identifier is a child identifier, triggering the intelligent sounding device to enter a point-reading mode, and acquiring, in the point-reading mode, a second electronic sounding book corresponding to the child identifier; and, in response to a gesture operation on the second electronic sounding book, playing the second electronic sounding book based on eye movement data corresponding to the child identifier.
In the information interaction method of the present invention, the adjusting of the sounding parameters of the first electronic sounding book based on the historical teaching data includes:
extracting, from the historical teaching data, first teaching information corresponding to the first electronic sounding book in a historical time period, and second teaching information corresponding to other historical sounding contents in the historical time period;
calculating the matching degree between the historical sounding content and the first electronic sounding book;
and adjusting the sound production parameters of the first electronic sound production book based on the first teaching information, the second teaching information and the matching degree.
In the information interaction method of the present invention, the adjusting of the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information, and the matching degree includes:
determining the historical sounding content with the matching degree larger than a first preset value as a first reference sounding content;
determining a historical sounding speed, a historical reading following speed and a historical reading following error rate corresponding to the first electronic sounding book according to the first teaching information;
determining a reference sounding speed, a reference reading-following speed and a reference reading-following error rate of the first reference sounding content according to the second teaching information;
converting the historical reading following error rate into a first parameter adjusting weight, and converting the matching degree corresponding to the first reference sounding content into a second parameter adjusting weight;
outputting a third parameter adjusting weight based on the ratio between the historical reading-following speed and the historical sounding speed, and outputting a fourth parameter adjusting weight based on the ratio between the reference reading-following speed and the reference sounding speed;
calculating the product of the historical sounding speed, the first parameter adjusting weight, the second parameter adjusting weight, the third parameter adjusting weight and the fourth parameter adjusting weight to obtain a target sounding speed corresponding to the playing of the first electronic sounding book;
and adjusting the playing times of the target sounding contents in the first electronic sounding book based on the historical reading-following error rate and the reference reading-following error rate.
In the information interaction method of the present invention, the determining, according to the first teaching information, of the historical reading-following error rate corresponding to the first electronic sounding book includes:
extracting reading-following information corresponding to the first electronic sounding book from the first teaching information;
acquiring an audio text corresponding to the first electronic sounding book;
extracting characters corresponding to preset time points from the audio text to obtain a plurality of audio single characters, and extracting the pitch value of each audio single character to obtain a plurality of pitch values;
and determining the historical reading-following error rate corresponding to the first electronic sounding book according to the pitch values and the reading-following information.
In the information interaction method of the present invention, the adjusting of the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information, and the matching degree includes:
determining second reference sounding content from the historical sounding content with the matching degree larger than a second preset value;
acquiring a target sounding text corresponding to the first electronic sounding book and a reference sounding text corresponding to the second reference sounding content;
extracting, from the first teaching information, a display distribution corresponding to each target sounding object in the target sounding text, and extracting, from the second teaching information, a reference display distribution corresponding to each reference sounding object in the reference sounding text;
determining the influence degree of the reference display distribution on the display distribution based on the matching degree of the second reference sounding content;
and adjusting the display duration and the display times of each target sound-producing object in the first electronic sound-producing book according to the influence degree and the display distribution.
In the information interaction method of the present invention, the playing of the second electronic sounding book based on the eye movement data corresponding to the child identifier, in response to a gesture operation on the second electronic sounding book, includes:
in response to a gesture operation for the second electronic sounding book, identifying whether the gesture operation is a target gesture operation;
when the gesture operation is recognized as a target gesture operation, acquiring eye movement data corresponding to the child identifier and coordinate data of each point-read content in the second electronic sounding book;
determining target point-read content in the second electronic sounding book according to the eye movement data and the coordinate data;
and playing the target point-read content, and outputting image information or a motion effect of the target point-read content after a preset time.
In the information interaction method of the present invention, the acquiring of the second electronic sounding book corresponding to the child identifier in the point-reading mode includes:
acquiring data to be read and a teaching progress corresponding to the child identification;
and determining target data in the data to be read as a second electronic sounding book based on the teaching progress and preset configuration information.
An embodiment of the present invention further provides an information interaction apparatus, which includes:
a determining module, configured to determine, in response to an interactive operation for the intelligent sounding device, a user identifier corresponding to the interactive operation;
the first acquisition module is used for triggering the intelligent sound production equipment to enter a teaching mode when the user identification is a student identification, and acquiring historical teaching data and a first electronic sound production book corresponding to the student identification in the teaching mode; the adjusting module is used for adjusting the sounding parameters of the first electronic sounding book based on the historical teaching data; the first playing module is used for responding to the playing operation of the first electronic sounding book and playing the first electronic sounding book according to the adjusted sounding parameters;
the second acquisition module is used for triggering the intelligent sound-producing equipment to enter a point reading mode when the user identifier is a child identifier, and acquiring a second electronic sound-producing book corresponding to the child identifier in the point reading mode; and the playing module is used for responding to gesture operation aiming at the second electronic sound-emitting book and playing the second electronic sound-emitting book based on the eye movement data corresponding to the child identification.
The embodiment of the invention also provides electronic equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the information interaction method when executing the program.
The embodiment of the invention also provides a storage medium, wherein processor-executable instructions are stored in the storage medium and are loaded by one or more processors to execute the information interaction method.
According to the information interaction method and apparatus, in response to an interactive operation for the intelligent sounding device, the user identifier corresponding to the interactive operation is determined. When the user identifier is a student identifier, the intelligent sounding device is triggered to enter a teaching mode, in which historical teaching data and a first electronic sounding book corresponding to the student identifier are acquired; the sounding parameters of the first electronic sounding book are adjusted based on the historical teaching data; and, in response to a playing operation for the first electronic sounding book, the first electronic sounding book is played according to the adjusted sounding parameters. When the user identifier is a child identifier, the intelligent sounding device is triggered to enter a point-reading mode, in which a second electronic sounding book corresponding to the child identifier is acquired and, in response to a gesture operation on the second electronic sounding book, played based on the eye movement data corresponding to the child identifier. Thus, the information interaction method and apparatus can enter a corresponding electronic sounding mode according to different user identifiers: the first electronic sounding book can be adjusted using historical teaching data, and the second electronic sounding book can be played based on eye movement data, thereby enriching the forms of information interaction supported by the intelligent sounding device.
Drawings
FIG. 1 is a flow chart of an information interaction method according to the present invention;
FIG. 2 is a flowchart illustrating an information interaction method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an information interaction apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an adjustment module of an information interaction apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a second obtaining module of an embodiment of an information interaction apparatus according to the present invention;
FIG. 6 is a schematic structural diagram of a second playing module of an embodiment of an information interaction apparatus according to the present invention;
FIG. 7 is a schematic diagram of the working environment of an electronic device in which the information interaction apparatus of the present invention resides.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, embodiments of the invention are described with reference to steps and symbols of operations performed by one or more computers, unless indicated otherwise. It will thus be appreciated that such steps and operations, referred to herein several times as computer-executed, include manipulation by a computer processing unit of electronic signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may reconfigure or otherwise alter the computer's operation in a manner well known to those skilled in the art. The data is maintained in a data structure, a physical location in memory with particular characteristics defined by the data format. However, while the principles of the invention are described in the specific language above, no limitation is intended, since one skilled in the art will recognize that the various steps and operations described below may also be implemented in hardware.
The information interaction method and apparatus of the present invention may be deployed in any electronic device and used to: determine, in response to an interactive operation for the intelligent sounding device, the user identifier corresponding to the interactive operation; when the user identifier is a student identifier, trigger the intelligent sounding device to enter a teaching mode, acquire in that mode the historical teaching data and the first electronic sounding book corresponding to the student identifier, adjust the sounding parameters of the first electronic sounding book based on the historical teaching data, and, in response to a playing operation for the first electronic sounding book, play it according to the adjusted sounding parameters; and when the user identifier is a child identifier, trigger the intelligent sounding device to enter a point-reading mode, acquire in that mode the second electronic sounding book corresponding to the child identifier, and, in response to a gesture operation on the second electronic sounding book, play it based on the eye movement data corresponding to the child identifier. Suitable electronic devices include, but are not limited to, personal computers, server computers, multiprocessor systems, consumer electronics, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices. The information interaction apparatus is preferably a data processing terminal or a server used for information interaction; it can enter the corresponding electronic sounding mode according to different user identifiers, adjust the first electronic sounding book using historical teaching data, and play the second electronic sounding book based on eye movement data, thereby enriching the forms of information interaction supported by the intelligent sounding device.
In current information interaction schemes based on intelligent sounding devices, taking the point-read scene as an example, the device outputs the sound or paraphrase corresponding to the point-read content according to the click position of a point-reading pen or a finger. The sounds and paraphrases of point-read content are usually pre-stored metadata that is not differentiated by user, so the form of information interaction in these schemes is limited.
The invention provides an information interaction scheme, which is used for: determining, in response to an interactive operation for the intelligent sounding device, a user identifier corresponding to the interactive operation; when the user identifier is a student identifier, triggering the intelligent sounding device to enter a teaching mode, and acquiring, in the teaching mode, historical teaching data and a first electronic sounding book corresponding to the student identifier; adjusting the sounding parameters of the first electronic sounding book based on the historical teaching data; in response to a playing operation for the first electronic sounding book, playing the first electronic sounding book according to the adjusted sounding parameters; when the user identifier is a child identifier, triggering the intelligent sounding device to enter a point-reading mode, and acquiring, in the point-reading mode, a second electronic sounding book corresponding to the child identifier; and, in response to a gesture operation on the second electronic sounding book, playing the second electronic sounding book based on the eye movement data corresponding to the child identifier.
Referring to fig. 1, fig. 1 is a flowchart illustrating an information interaction method according to an embodiment of the present invention. The information interaction method of this embodiment may be implemented by using the electronic device, and the information interaction method of this embodiment includes:
Step 101, in response to an interactive operation for the intelligent sounding device, determining a user identifier corresponding to the interactive operation;
Step 102, when the user identifier is a student identifier, triggering the intelligent sounding device to enter a teaching mode, and acquiring, in the teaching mode, historical teaching data and a first electronic sounding book corresponding to the student identifier; adjusting the sounding parameters of the first electronic sounding book based on the historical teaching data; and, in response to a playing operation for the first electronic sounding book, playing the first electronic sounding book according to the adjusted sounding parameters;
Step 103, when the user identifier is a child identifier, triggering the intelligent sounding device to enter a point-reading mode, and acquiring, in the point-reading mode, a second electronic sounding book corresponding to the child identifier; and, in response to a gesture operation on the second electronic sounding book, playing the second electronic sounding book based on the eye movement data corresponding to the child identifier.
The information interaction method of the present embodiment is described in detail below.
In step 101, the intelligent sounding device may be a point-reading machine, a point-reading pen, or an electronic device with an integrated sounding function. The interactive operation is an operation directed at the intelligent sounding device and may be a long-press, click, slide, gesture, or voice operation; for example, a click on the display screen of the intelligent sounding device, or a shake of the device. In response to the interactive operation for the intelligent sounding device, the device is triggered to enter a user recognition mode, and the user identifier is determined in that mode.
The user identifier may be preset, or the user identifier may be determined by acquiring a user image in real time and determining the corresponding user identifier based on an image feature of the user image.
For example, a parent presets the user identifier of user A as a child identifier and the user identifiers of users B and C as student identifiers; when user A triggers an interactive operation on the intelligent sounding device, a user image of user A is captured, and the user identifier corresponding to the interactive operation is determined from that image to be user A's identifier, i.e., the child identifier. For another example, in response to an interactive operation for the intelligent sounding device, an image corresponding to the operation is collected; taking a user image as an example, the user features in the image are recognized, the age of the user performing the operation is estimated from those features, and the user identifier corresponding to the interactive operation is determined according to the estimated age.
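To make the identifier-resolution step concrete, the following is a minimal Python sketch; it is not part of the patent, and the helper names, the preset table, and the use of an externally estimated age are all illustrative assumptions (the 5 to 8 age range follows the child range given later in this description).

```python
# Minimal sketch of user-identifier resolution (all names are hypothetical).
PRESET_IDS = {"user_a": "child", "user_b": "student", "user_c": "student"}

def resolve_user_id(user_name=None, estimated_age=None):
    """Return 'child' or 'student' from a preset table or an estimated age."""
    if user_name in PRESET_IDS:              # identifier preset by a parent
        return PRESET_IDS[user_name]
    if estimated_age is not None:            # age estimated from a captured user image
        return "child" if 5 <= estimated_age <= 8 else "student"
    return "unknown"

print(resolve_user_id(user_name="user_a"))   # child
print(resolve_user_id(estimated_age=7))      # child
print(resolve_user_id(estimated_age=14))     # student
```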
In step 102, the historical teaching data records the user's teaching situation over a historical time period and carries the user's historical on-demand information, historical query information, and historical reading-following information. These can reflect the user's proficiency with different sounding contents and teaching contents, so the sounding parameters of the first electronic sounding book can be adjusted based on them, making the adjusted sounding contents of the first electronic sounding book better suited to the user; the electronic sounding book is thereby configured in a personalized way.
It should be noted that different electronic sounding books correspond to different first teaching information. For example, the teaching information of electronic sounding book A carries the content of chapter six of an English text, while that of electronic sounding book B carries the content of chapter one; the contents differ, but since both are English texts there is some degree of similarity. Some words are the same, for instance, even though their usage in certain sentences differs. Optionally, therefore, in some embodiments the information interaction method provided by the present application may adjust the sounding parameters of the first electronic sounding book based on the similarity between different electronic sounding books; that is, the step of "adjusting the sounding parameters of the first electronic sounding book based on historical teaching data" may specifically include:
(11) extracting first teaching information corresponding to a first electronic sound production book in a historical period from historical teaching data, and extracting second teaching information corresponding to other historical sound production contents in the historical period from the historical teaching data;
(12) calculating the matching degree between the historical sounding content and the first electronic sounding book;
(13) and adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree.
For example, for the first teaching information generated by the user's use of the first electronic sounding book during a historical time period and the second teaching information corresponding to other historical sounding contents, the content matching degree between the historical sounding contents and the first electronic sounding book may be calculated; finally, the sounding parameters of the first electronic sounding book are adjusted based on the first teaching information, the second teaching information, and the content matching degree, for example its sounding speed and the sounding speed of key content.
For another example, if the first electronic sounding book is the first electronic sounding book the user has ever used, the content matching degree between the historical sounding contents and the first electronic sounding book can be calculated directly, and the sounding parameters of the first electronic sounding book are then adjusted based on the second teaching information and the content matching degree.
It should be noted that the matching degree may be the result of a comprehensive calculation over matching degrees of a plurality of different dimensions. Specifically, the matching degree M may be calculated by equation (1):

$M = \sum_{i=1}^{n} a_i M_i$    (1)

where $M_i$ is the matching degree of the i-th dimension, $a_i$ is the weight coefficient corresponding to that dimension with $\sum_{i=1}^{n} a_i = 1$, and n is the number of dimensions. Optionally, n may be adjusted according to actual requirements, or the weight coefficient corresponding to each dimension may be adjusted according to the number of dimensions; the specific choice depends on the actual situation and is not repeated here.
It should be noted that the matching degrees of the plurality of different dimensions may include a matching degree of a text dimension, a matching degree of a space dimension, a matching degree of a semantic dimension, a matching degree of a content tag dimension, and the like.
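As an illustration of equation (1), here is a minimal Python sketch; the example dimension scores and weight coefficients are assumptions, since the patent does not fix their values.

```python
# Weighted matching degree M = sum(a_i * M_i), with the weights summing to 1.
def matching_degree(scores, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weight coefficients must sum to 1"
    return sum(a * m for a, m in zip(weights, scores))

# Example dimensions: text, space, semantic, content tag (scores in [0, 1]).
scores = [0.9, 0.6, 0.8, 0.7]
weights = [0.4, 0.1, 0.3, 0.2]
print(matching_degree(scores, weights))  # 0.8, i.e. a matching degree of 80%
```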
In order to improve the accuracy of the subsequent adjustment of the first electronic sounding book, optionally, in some embodiments, the historical sounding contents may be divided based on the matching degree: sounding contents with a matching degree greater than a first preset value are determined as first reference sounding contents, and sounding contents with a matching degree greater than a second preset value but not greater than the first preset value are determined as second reference sounding contents.
Optionally, the first preset value is greater than the second preset value. Because the matching degree of the first reference sounding content is higher, its value as a positive reference is higher, so the sounding parameters of the first electronic sounding book can be adjusted according to its corresponding reference sounding speed, reference reading-following speed, and reference reading-following error rate. That is, the step of "adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree" includes:
(21) determining the historical sounding content with the matching degree larger than a first preset value as a first reference sounding content;
(22) determining the historical sounding speed, the historical reading following speed and the historical reading following error rate corresponding to the first electronic sounding book according to the first teaching information;
(23) determining a reference sounding speed, a reference reading-following speed and a reference reading-following error rate of the first reference sounding content according to the second teaching information;
(24) converting the historical reading error rate into a first parameter adjusting weight, and converting the matching degree corresponding to the first reference sounding content into a second parameter adjusting weight;
(25) outputting a third parameter adjusting weight based on the ratio between the historical reading-following speed and the historical sounding speed, and outputting a fourth parameter adjusting weight based on the ratio between the reference reading-following speed and the reference sounding speed;
(26) calculating the product of the historical sounding speed, the first parameter adjusting weight, the second parameter adjusting weight, the third parameter adjusting weight and the fourth parameter adjusting weight to obtain a target sounding speed corresponding to the playing of the first electronic sounding book;
(27) and adjusting the playing times of the target sounding contents in the first electronic sounding book based on the historical reading-following error rate and the reference reading-following error rate.
It should be noted that the sounding speed of an electronic sounding book can be preset by the manufacturer or by a content sharer, and the unit of sounding speed is words per minute.
It can be understood that the historical reading-following error rate represents whether the user pronounced the sounding contents correctly while following along: the higher the historical reading-following error rate, the lower the corresponding first parameter adjusting weight. The matching degree of the first reference sounding content measures its importance as a reference: the higher the matching degree, the higher the corresponding reference importance. The ratio of the historical reading-following speed to the historical sounding speed represents the user's familiarity with the first electronic sounding book: the larger the ratio, the more familiar the user is with its contents, so the corresponding sounding speed can be raised. Similarly, the ratio between the reference reading-following speed and the reference sounding speed represents the user's familiarity with the first reference sounding content: the larger the ratio, the more familiar the user is with that content, so the corresponding sounding speed can likewise be raised.
Taking the reading-following scenario as an example: suppose the historical reading-following error rate is 10%. It may be converted into a first parameter adjusting weight Q1, where the relationship between the historical reading-following error rate w and the first parameter adjusting weight Q1 is shown in equation (2):

$Q_1 = 1 - w$    (2)

That is, for a historical reading-following error rate of 10%, the corresponding first parameter adjusting weight is 0.9, and the sounding parameters of the first electronic sounding book can be adjusted according to this weight.
Further, suppose the matching degree of the first reference sounding content A is 85%, so the corresponding second parameter adjusting weight Q2 is 0.85. The historical reading-following speed is 150 words/min and the historical sounding speed is 180 words/min, so the ratio between them is about 0.83, which is taken as the third parameter adjusting weight Q3. The reference sounding speed is 180 words/min and the reference reading-following speed is 130 words/min, so their ratio is about 0.72, which is taken as the fourth parameter adjusting weight Q4. Finally, the target sounding speed for playing the first electronic sounding book can be calculated from the historical sounding speed VL, the reference sounding speed VC, and the weights Q1 to Q4 by equation (3):

$V_{target} = \frac{V_L Q_1 Q_3 + V_C Q_2 Q_4}{2}$    (3)

where $V_L Q_1 Q_3$ covers the case in which only the historical sounding situation is considered and $V_C Q_2 Q_4$ the case in which only the reference sounding situation is considered; averaging the two combines the historical and reference situations into the final target sounding speed (about 121 words/min in this example).
As can be seen from the above historical sounding speed, historical reading-following speed, and historical reading-following error rate, the sounding speed of the first electronic sounding book was set too fast during the historical time period, causing the user to misread, skip words, or pronounce unclearly while following along, which produced reading-following errors. Therefore, the present application uses the historical reading-following error rate, the historical sounding speed, the historical reading-following speed, the reference sounding speed, and the reference reading-following speed to adjust the sounding speed of the first electronic sounding book.
Further, after the user follows along at the target sounding speed, when it is detected that the current reading-following error rate is smaller than the historical reading-following error rate, the target sounding speed is increased; for example, it is restored to the historical sounding speed, i.e., raised from 121 words/min back to 180 words/min.
Alternatively, in some embodiments, the sounding speed of the electronic sounding book may be set too slow, so that the user's speech speed exceeds the sounding speed and the playing time of the electronic sounding book becomes unnecessarily long. To improve reading-following efficiency, the sounding speed of the first electronic sounding book may be increased based on the historical sounding speed VL and the parameter adjusting weights Q1, Q2, Q3, and Q4. For example, suppose the historical reading-following error rate is 5%, so the corresponding first parameter adjusting weight Q1 is 0.95; the matching degree of the first reference sounding content A is 85%, so the second parameter adjusting weight Q2 is 0.85; the historical reading-following speed is 150 words/min and the historical sounding speed is 130 words/min, so their ratio is about 1.2, which is taken as the third parameter adjusting weight Q3; and the reference sounding speed is 180 words/min while the reference reading-following speed is 130 words/min, so their ratio is about 0.72, which is taken as the fourth parameter adjusting weight Q4. The increased target sounding speed can then be calculated by equation (4):

$V_{target} = k \cdot \frac{V_L Q_1 Q_3 + V_C Q_2 Q_4}{2}$    (4)

where k is an adjustment coefficient in the range of 1.2 to 1.5, selected according to the historical reading-following error rate; it can be understood that the lower the historical reading-following error rate, the higher the adjustment coefficient, and thus the higher the sounding speed of the first electronic sounding book.

Optionally, in some embodiments, when the historical reading-following error rate is less than or equal to a preset error rate and the historical reading-following speed is greater than the historical sounding speed, equation (4) is used to increase the sounding speed of the first electronic sounding book, which reduces the playing time of the first electronic sounding book while preserving the accuracy of the user's reading-following.
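To check the speed-adjustment arithmetic above, here is a small Python sketch. It relies on the reconstruction of equations (2) to (4) used in this text (the original formulas are available only as images), so the exact formula shape and all function names are assumptions.

```python
# Target sounding speed per equations (2)-(4) as reconstructed above (an
# assumption, since the original formulas are only available as images).
def target_speed(vl, vc, hist_err, match, hist_follow, ref_follow, k=1.0):
    q1 = 1.0 - hist_err        # equation (2): weight from historical error rate
    q2 = match                 # matching degree as the second weight
    q3 = hist_follow / vl      # familiarity with the first sounding book
    q4 = ref_follow / vc       # familiarity with the reference content
    return k * (vl * q1 * q3 + vc * q2 * q4) / 2.0  # k = 1 reproduces equation (3)

# Slow-down example from the text: 10% error rate, 85% match, 150 vs 180
# words/min, reference 130 vs 180 words/min.
print(round(target_speed(180, 180, 0.10, 0.85, 150, 130)))
# -> 123 with unrounded ratios (the worked example in the text rounds to ~121)

# Speed-up example: 5% error rate, follow-read faster than sounding, k = 1.2.
print(round(target_speed(130, 180, 0.05, 0.85, 150, 130, k=1.2)))  # -> 152
```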
Of course, the first content (content of the first electronic sounding book that the user pronounced incorrectly) and the second content (content that the user pronounced correctly) can also be determined according to the historical reading-following error rate.
For the first content, the corresponding play count can be increased so as to improve the user's familiarity with it; for the second content, the corresponding play count can be reduced. After the first content and the second content are determined, a scheduled play count X for the first content and a scheduled play count Y for the second content are output based on the historical reading-following error rate. A parameter adjustment factor a for adjusting the scheduled play counts is then output according to the matching degree and the reference reading-following error rate. Finally, the play count T1 of the first content is calculated from the scheduled play count X and the factor a as T1 = X + (1 - a), and the play count T2 of the second content is calculated from the scheduled play count Y and the factor a as T2 = Y + a.
A mapping table between the reading-following error rate and the scheduled play count may be preset; see Table 1:
TABLE 1
[Table 1: preset mapping between the reading-following error rate and the scheduled play counts; reproduced in the original as an image]
For example, suppose the historical reading-following error rate is 15%, the matching degree is 85%, and the reference reading-following error rate is 10%. Looking up Table 1 gives a scheduled play count X of 3 for the first content and a scheduled play count Y of 1 for the second content. The parameter adjustment factor is the product of the matching degree and the reference reading-following error rate, a = 0.85 × 0.10 = 0.085. The play count of the first content is then T1 = X + (1 - a) = 3 + 0.915 = 3.915, rounded to 4, and the play count of the second content is T2 = Y + a = 1 + 0.085 = 1.085, rounded to 1.
It should be noted that when the matching degree is smaller than a preset value, the scheduled play count X is determined directly as the play count T1 of the first content and the scheduled play count Y as the play count T2 of the second content; the preset value may be set according to the actual situation and is not repeated here.
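The play-count adjustment just described can be sketched as follows. The Table 1 lookup is reduced to a one-row stand-in dictionary (only that row is known from the worked example), and the matching-degree threshold is an assumed parameter.

```python
# Play counts for wrongly-read (first) and correctly-read (second) content.
# SCHEDULE is a stand-in for Table 1; only the row used in the example is known.
SCHEDULE = {0.15: (3, 1)}   # error rate -> (planned plays X, planned plays Y)

def play_counts(hist_err, match, ref_err, match_threshold=0.5):
    x, y = SCHEDULE[hist_err]
    if match < match_threshold:     # low matching degree: keep planned counts
        return x, y
    a = match * ref_err             # parameter adjustment factor
    t1 = round(x + (1 - a))         # wrongly-read content is played more often
    t2 = round(y + a)               # correctly-read content stays low
    return t1, t2

print(play_counts(0.15, 0.85, 0.10))   # (4, 1), matching the worked example
```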
Further, for a reading-following type electronic sounding book, one of the most important functions of the intelligent sounding device is to assist in practicing pronunciation, so in this scenario the reading-following accuracy is crucial. Optionally, in some embodiments, the audio text corresponding to the first electronic sounding book may therefore be determined, and the historical reading-following error rate of the first electronic sounding book determined according to the pitch change rate and pitch values in the audio text together with the reading-following information corresponding to the first electronic sounding book. That is, the step of "determining the historical reading-following error rate corresponding to the first electronic sounding book according to the first teaching information" may specifically include:
(31) extracting reading-following information corresponding to the first electronic sounding book from the first teaching information;
(32) acquiring an audio text corresponding to a first electronic sounding book;
(33) extracting characters corresponding to a preset time point from the audio text to obtain a plurality of audio single characters;
(34) extracting the pitch value of each audio single character to obtain a plurality of pitch values;
(35) and determining the historical follow-up reading error rate corresponding to the first electronic sounding book according to the pitch value and the follow-up reading information.
It should be noted that, in a reading-following scene under the current approach, the user clicks a position in the sounding book with a point-reading pen, the device outputs the sound corresponding to the clicked content, the user reads after that sound, and the intelligent sounding device then judges from the user's speech whether the followed pronunciation is correct, so as to output a corresponding reading-following accuracy or error rate.

However, this approach considers only the pronunciation of single words. For a polyphonic character, the pronunciation of the character alone may be correct while the pronunciation of the phrase composed with it is wrong. For example, in the Chinese phrase 的卢马 (dí lú mǎ, the name of a famous horse), the character 的 may be pronounced "de" or "dí"; if the user clicks 的, 卢, and 马 separately with the point-reading pen and reads "de", "lu", and "ma", the reading-following accuracy may be reported as 100%, although 的卢马 should actually be read "dí", "lú", "mǎ". For another example, for the definite article "the", the pronunciation rule is "thuh" before a consonant and "thee" before a vowel; so for the phrase "the apple", if the user clicks "the" and "apple" separately and the followed sounds are "thuh" and "æpl", a reading-following accuracy of 100% may again be reported even though "the apple" should be read "thee æpl". That is, the accuracy of the current reading-following accuracy or error rate is poor.
A single character often has more than one pronunciation, and different pronunciations are distinguished by relative pitch: the vocal cords are adjusted continuously during pronunciation, producing pitch changes that form different tones. In other words, a single character usually corresponds to multiple pitch values; for example, in an audio text such as 刘备的的卢马 ("Liu Bei's Dilu horse"), the pitch value of the first 的 differs from that of the second 的, i.e., the pitch value changes between the two. Therefore, in some embodiments of the present application, the historical reading-following error rate corresponding to the first electronic sounding book is determined using the pitch values together with the reading-following information, where the reading-following information carries the followed pitch values, the reading-following speed, the reading-following duration, and the like for the sounding contents in the first electronic sounding book. In this embodiment, the audio text corresponding to the first electronic sounding book is obtained first; the audio single characters corresponding to each preset time point are then obtained and the pitch value of each audio single character is extracted; next, the difference between each pitch value and the corresponding followed pitch value is compared, and audio single characters whose difference is larger than a set difference value are determined to be incorrectly followed characters; the historical reading-following error rate corresponding to the first electronic sounding book is determined accordingly.
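A minimal sketch of the pitch-comparison step described above, assuming pitch values have already been extracted per audio single character; the data layout and the difference threshold are illustrative assumptions.

```python
# Reading-following error rate from pitch differences (threshold is illustrative).
def follow_read_error_rate(ref_pitches, follow_pitches, max_diff=20.0):
    """ref_pitches/follow_pitches: one pitch value (e.g., Hz) per audio single character."""
    wrong = sum(1 for r, f in zip(ref_pitches, follow_pitches)
                if abs(r - f) > max_diff)
    return wrong / len(ref_pitches)

# E.g., a polyphonic character read with the wrong tone shows a large pitch gap.
ref    = [210.0, 180.0, 160.0]   # reference pitches for three characters
follow = [212.0, 240.0, 158.0]   # second character followed with the wrong tone
print(follow_read_error_rate(ref, follow))   # 0.333...
```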
For the second reference sounding content, its matching degree is greater than the second preset value but smaller than the first preset value, so it can be used to assist in adjusting the sounding parameters of the first electronic sounding book. That is, optionally, in some embodiments, the step of "adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information, and the matching degree" may specifically include:
(41) determining second reference sounding content from the historical sounding content with the matching degree larger than a second preset value;
(42) acquiring a target sounding text corresponding to the first electronic sounding book and a reference sounding text corresponding to the second reference sounding content;
(43) extracting display distribution corresponding to each target sound-emitting object in the target sound-emitting text from the first teaching information, and extracting reference display distribution corresponding to each reference sound-emitting object in the reference sound-emitting text from the second teaching information;
(44) determining the influence degree of the reference display distribution on the display distribution based on the matching degree of the second reference sound production content;
(45) and adjusting the display duration and the display times of each target sound-producing object in the first electronic sound-producing book according to the influence degree and the display distribution.
Here, the second reference sounding content is determined from the historical sounding contents whose matching degree is greater than the second preset value; that is, this embodiment focuses on local sounding content (the second reference sounding content). Next, the display distribution corresponding to each target sounding object and the reference display distribution corresponding to each reference sounding object are determined, where a sounding object may be a word, a phrase, a short sentence, a long sentence, or the like; the present invention is not limited in this respect. Further, the degree to which the reference display distribution influences the display distribution is determined according to the matching degree of the second reference sounding content. For example, suppose reference sounding object a1 corresponds to target sounding object A1 with a matching degree of 90%, and reference sounding object a2 corresponds to target sounding object A2 with a matching degree of 70%; then a1 has the greater influence on its target object. The matching degree can therefore be converted into a weight coefficient; for example, the matching degree corresponding to reference sounding object a1 is converted into a weight coefficient of 0.9, and the display count of the corresponding target sounding object is increased according to that coefficient, the display count being calculated as x = (1 + y) · t, where x is the adjusted display count, y is the weight coefficient, and t is the display count before adjustment. Similarly, the display duration of the target sounding object can be increased according to the weight coefficient, the adjustment of the display duration being analogous to the adjustment of the display count.
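The display-count rule x = (1 + y) · t can be sketched as follows; the mapping from sounding objects to per-object matching degrees and the example numbers are assumptions.

```python
# Increase display count (and, analogously, duration) per x = (1 + y) * t.
def adjust_display(display, matches):
    """display: object -> (count t, duration secs); matches: object -> matching degree."""
    adjusted = {}
    for obj, (count, duration) in display.items():
        y = matches.get(obj, 0.0)          # matching degree as the weight coefficient
        adjusted[obj] = (round((1 + y) * count), (1 + y) * duration)
    return adjusted

display = {"a1": (2, 4.0), "a2": (2, 4.0)}
matches = {"a1": 0.9, "a2": 0.7}           # a1 influences its target object more
print(adjust_display(display, matches))    # {'a1': (4, 7.6), 'a2': (3, 6.8)}
```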
In step 103, when the user identifier is a child identifier, the point-reading mode is triggered. In some embodiments of the present application, users aged 5 to 8 are treated as children. Because users of this age recognize only a limited number of characters, and to make it easier for a child to point-read the second electronic sounding book in the point-reading mode, the switch of the point-reading function is bound to a gesture operation, so that the child can point-read the second electronic sounding book more conveniently and quickly. That is, optionally, in some embodiments, the step of "playing the second electronic sounding book based on the eye movement data corresponding to the child identifier in response to a gesture operation on the second electronic sounding book" may specifically include:
(51) in response to a gesture operation for the second electronic sounding book, identifying whether the gesture operation is a target gesture operation;
(52) when the gesture operation is recognized as a target gesture operation, acquiring eye movement data corresponding to the child identifier and coordinate data of each point-read content in the second electronic sounding book;
(53) determining target point-read content in the second electronic sounding book according to the eye movement data and the coordinate data;
(54) and playing the target point-read content, and outputting image information or a motion effect of the target point-read content after a preset time.
For example, a plurality of target gesture tracks may be stored in advance. When a gesture operation for the second electronic sounding book is received, the gesture track corresponding to the gesture operation is recognized; when that track matches any stored target gesture track, the eye movement data corresponding to the child identifier and the coordinate data of each point-read content in the second electronic sounding book are acquired. The child's eye movement coordinates on the second electronic sounding book are then determined from the eye movement data, and the target point-read content is determined from those coordinates and the coordinate data of each point-read content. For point-read contents that have an image representation, the image information or motion effect of the target point-read content is output after the target point-read content has been playing for a preset time.
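A minimal sketch of steps (51) to (54): gesture matching is reduced to membership in a set of stored target gestures and the gaze-to-content mapping to a nearest-center lookup; all names and data structures are illustrative assumptions rather than the patent's implementation.

```python
import math

TARGET_GESTURES = {"circle", "double_tap"}   # pre-stored target gesture tracks

def pick_target_content(gesture, gaze_xy, contents):
    """contents: list of (content_id, (center_x, center_y)) per point-read item."""
    if gesture not in TARGET_GESTURES:       # not a target gesture: do nothing
        return None
    # Choose the point-read content whose coordinates are closest to the gaze point.
    return min(contents, key=lambda c: math.dist(gaze_xy, c[1]))[0]

contents = [("word_apple", (120, 80)), ("word_horse", (320, 210))]
print(pick_target_content("circle", (310, 200), contents))   # word_horse
```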
Optionally, in some embodiments, the step of "acquiring, in the point-reading mode, a second electronic sounding book corresponding to the child identifier" may specifically include:
(61) acquiring data to be read and a teaching progress corresponding to the child identifier;
(62) and determining target data in the data to be read as a second electronic sounding book based on the teaching progress and preset configuration information.
Optionally, the preset configuration information carries the configured data to be read and the display order of each piece of data to be read. For a user who is using the intelligent sounding device for the first time, the second electronic sounding book can be output in the configured order; for a returning user, the piece of data to be read at the front of the current teaching progress can be determined as the second electronic sounding book according to the teaching progress of each piece of data to be read.
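A small sketch of steps (61) and (62); the configuration format and the progress representation are assumptions.

```python
# Pick the second electronic sounding book from configured order and progress.
def pick_second_book(configured_order, progress):
    """configured_order: book ids in preset display order;
    progress: book id -> fraction completed (empty for a first-time user)."""
    if not progress:                       # first-time user: follow preset order
        return configured_order[0]
    # Returning user: take the unfinished item furthest along in the progress.
    unfinished = [b for b in configured_order if progress.get(b, 0.0) < 1.0]
    return max(unfinished, key=lambda b: progress.get(b, 0.0))

order = ["book1", "book2", "book3"]
print(pick_second_book(order, {}))                             # book1
print(pick_second_book(order, {"book1": 1.0, "book2": 0.4}))   # book2
```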
Thus, the information interaction process of the embodiment is completed.
According to the information interaction method described above, in response to an interactive operation for the intelligent sounding device, the user identifier corresponding to the interactive operation is determined. When the user identifier is a student identifier, the intelligent sounding device is triggered to enter a teaching mode, in which historical teaching data and a first electronic sounding book corresponding to the student identifier are acquired; the sounding parameters of the first electronic sounding book are adjusted based on the historical teaching data; and, in response to a playing operation for the first electronic sounding book, the first electronic sounding book is played according to the adjusted sounding parameters. When the user identifier is a child identifier, the intelligent sounding device is triggered to enter a point-reading mode, in which a second electronic sounding book corresponding to the child identifier is acquired and, in response to a gesture operation on the second electronic sounding book, played based on the eye movement data corresponding to the child identifier. When interacting with the intelligent sounding device, the method can thus enter a corresponding electronic sounding mode according to different user identifiers: the first electronic sounding book can be adjusted using historical teaching data, and the second electronic sounding book can be played based on eye movement data, thereby enriching the forms of information interaction supported by the intelligent sounding device.
An embodiment of the present application further provides an information interaction method in which the information interaction apparatus is integrated in a server. Please refer to fig. 2; the specific flow is as follows:
step 201, a server responds to an interactive operation aiming at an intelligent sound production device and determines a user identifier corresponding to the interactive operation;
step 202, when the user identification is a student identification, the server triggers the intelligent sound-generating equipment to enter a teaching mode, and acquires historical teaching data and a first electronic sounding book corresponding to the student identification in the teaching mode; adjusts the sounding parameters of the first electronic sounding book based on the historical teaching data; and, in response to a playing operation for the first electronic sounding book, plays the first electronic sounding book according to the adjusted sounding parameters;
step 203, when the user identifier is a child identifier, the server triggers the intelligent sound-generating device to enter a click-to-read mode, and acquires a second electronic sounding book corresponding to the child identifier in the click-to-read mode; and, in response to a gesture operation for the second electronic sounding book, plays the second electronic sounding book based on the eye movement data corresponding to the child identifier.
As can be seen from the above, the server can enter the corresponding electronic sounding mode according to different user identifiers: it can not only adjust the first electronic sounding book using historical teaching data, but also play the second electronic sounding book based on eye movement data, thereby enriching the information interaction forms of the intelligent sound-emitting device.
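The role-based branching of steps 201-203 reduces to a small dispatch, sketched below; the roles lookup and the returned mode names are stand-ins for illustration, not interfaces defined by the patent.

```python
def handle_interaction(user_id: str, roles: dict) -> str:
    """Resolve the user identifier and return the mode the device enters."""
    role = roles.get(user_id)
    if role == "student":
        # step 202: teaching mode - load historical teaching data and the
        # first electronic sounding book, adjust sounding parameters, play
        return "teaching_mode"
    if role == "child":
        # step 203: click-to-read mode - load the second electronic sounding
        # book, drive playback from gesture and eye movement data
        return "click_to_read_mode"
    return "idle"

# Example: dispatching two known identifiers
print(handle_interaction("u1", {"u1": "student", "u2": "child"}))  # teaching_mode
```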
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of the information interaction apparatus of the present invention, and the information interaction apparatus of the present embodiment can be implemented by using the information interaction method. The information interaction device 30 of this embodiment includes a determining module 301, a first obtaining module 302, an adjusting module 303, a first playing module 304, a second obtaining module 305, and a second playing module 306, which are as follows:
a determining module 301, configured to determine, in response to an interactive operation for the smart sound emitting device, a user identifier corresponding to the interactive operation.
The first obtaining module 302 is configured to, when the user identifier is a student identifier, trigger the intelligent sound-emitting device to enter a teaching mode, and acquire historical teaching data and a first electronic sounding book corresponding to the student identifier in the teaching mode.
The adjusting module 303 is configured to adjust the sounding parameters of the first electronic sounding book based on the historical teaching data.
The first playing module 304 is configured to, in response to a playing operation for the first electronic sounding book, play the first electronic sounding book according to the adjusted sounding parameters.
The second obtaining module 305 is configured to, when the user identifier is a child identifier, trigger the intelligent sound-emitting device to enter a click-to-read mode, and acquire a second electronic sounding book corresponding to the child identifier in the click-to-read mode.
The second playing module 306 is configured to, in response to a gesture operation for the second electronic sounding book, play the second electronic sounding book based on the eye movement data corresponding to the child identifier.
Optionally, in some embodiments, please refer to fig. 4, where fig. 4 is a schematic structural diagram of an adjusting module of an embodiment of the information interaction apparatus of the present invention, and the adjusting module 303 may specifically include:
an extracting unit 3031, configured to extract, from the historical teaching data, first teaching information corresponding to the first electronic sounding book in a historical period, and to extract, from the historical teaching data, second teaching information corresponding to other historical sounding contents in the historical period;
a calculating unit 3032, configured to calculate the matching degree between the historical sounding content and the first electronic sounding book;
an adjusting unit 3033, configured to adjust the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree.
Optionally, in some embodiments, the adjusting unit 3033 is specifically configured to: determine the historical sounding content whose matching degree is greater than a first preset value as first reference sounding content; determine the historical sounding speed, the historical follow-reading speed and the historical follow-reading error rate corresponding to the first electronic sounding book according to the first teaching information; determine the reference sounding speed, the reference follow-reading speed and the reference follow-reading error rate of the first reference sounding content according to the second teaching information; convert the historical follow-reading speed into a first parameter-adjusting weight; output, based on the matching degree of the first reference sounding content, a second parameter-adjusting weight corresponding to the reference sounding speed and a third parameter-adjusting weight corresponding to the reference follow-reading speed; calculate the product of the historical sounding speed and the first, second and third parameter-adjusting weights to obtain a target sounding speed for playing the first electronic sounding book; and adjust the number of times the target sounding content in the first electronic sounding book is played, based on the historical follow-reading error rate and the reference follow-reading error rate.
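The speed adjustment amounts to multiplying the historical sounding speed by the parameter-adjusting weights. The text does not fix how each quantity is converted into a weight, so the mappings in this sketch are assumed monotone forms, clamped for stability:

```python
def clamp(w, lo=0.5, hi=1.5):
    """Keep a parameter-adjusting weight within a sane band (assumption)."""
    return max(lo, min(hi, w))

def target_sounding_speed(hist_speed, hist_follow_speed,
                          ref_speed, ref_follow_speed, match_degree):
    """Target speed = historical sounding speed x three weights."""
    # weight 1: derived from the historical follow-reading speed; a slower
    # follow-reading pulls the playback speed down
    w1 = clamp(hist_follow_speed / hist_speed)
    # weights 2 and 3: reference speeds, scaled by the matching degree of
    # the first reference sounding content
    w2 = clamp(1.0 + match_degree * (ref_speed / hist_speed - 1.0))
    w3 = clamp(1.0 + match_degree * (ref_follow_speed / hist_follow_speed - 1.0))
    return hist_speed * w1 * w2 * w3

# Example: a student who follow-reads slowly gets a reduced target speed
print(target_sounding_speed(1.0, 0.8, 1.1, 0.9, 0.7))
```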
Optionally, in some embodiments, the adjusting unit 3033 is specifically configured to: extract the follow-reading information corresponding to the first electronic sounding book from the first teaching information; acquire the audio text corresponding to the first electronic sounding book; extract the characters corresponding to each preset time point from the audio text to obtain a plurality of audio single characters; count, for each audio single character, the number of times the single character appears in the audio text and the number of times its corresponding pitch value appears in the audio text; calculate the product of the single-character count and the pitch count to determine the independent probability of that single character's pitch in the audio text; obtain the change parameter of each audio single character at the preset time point to obtain a plurality of change parameters; extract the pitch value of each audio single character to obtain a plurality of pitch values; calculate, based on the change parameters and the pitch values, the probability that a pitch value changes within adjacent preset time intervals to obtain the pitch change probability corresponding to each pitch value; and determine the historical follow-reading error rate corresponding to the first electronic sounding book according to the pitch change probabilities, the pitch values and the follow-reading information.
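One way to realize these statistics is sketched below. The normalization of the counts, the tolerance defining "a pitch change", and the final combination into an error rate are all assumptions; the patent only names the intermediate quantities:

```python
from collections import Counter

def historical_follow_error_rate(chars, pitches, follow_pitches, tol=1.0):
    """chars[i] is the audio single character at preset time point i,
    pitches[i] its pitch value in the reference audio, and
    follow_pitches[i] the pitch produced during follow-reading."""
    n = len(chars)
    char_count = Counter(chars)                       # single-character counts
    pitch_count = Counter(round(p) for p in pitches)  # pitch-value counts
    # independent probability of each (character, pitch) pair in the text
    indep = [char_count[c] * pitch_count[round(p)] / (n * n)
             for c, p in zip(chars, pitches)]
    # probability that the pitch changes across adjacent preset time points
    changes = sum(abs(b - a) > tol for a, b in zip(pitches, pitches[1:]))
    p_change = changes / max(n - 1, 1)
    # weight each mismatch by how expected that pitch is, and discount by
    # the overall pitch-change probability (an assumed combination)
    err = sum(w * (abs(r - f) > tol)
              for w, r, f in zip(indep, pitches, follow_pitches))
    return err / max(sum(indep), 1e-9) * (1.0 - p_change)
```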
Optionally, in some embodiments, the adjusting unit 3033 may be further configured to: determine second reference sounding content from the historical sounding content whose matching degree is greater than a second preset value; acquire the target sounding text corresponding to the first electronic sounding book and the reference sounding text corresponding to the second reference sounding content; extract the display distribution corresponding to each target sounding object in the target sounding text from the first teaching information, and extract the reference display distribution corresponding to each reference sounding object in the reference sounding text from the second teaching information; determine the degree of influence of the reference display distribution on the display distribution based on the matching degree of the second reference sounding content; and adjust the display duration and the display times of each target sounding object in the first electronic sounding book according to the degree of influence and the display distribution.
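Treating the degree of influence as a blend factor between the two distributions gives a simple realization; the tuple layout and the linear blend are illustrative assumptions:

```python
def adjust_display(display, reference, match_degree):
    """display / reference: object -> (display_duration_s, display_times).
    The influence degree is taken to grow with the matching degree."""
    influence = max(0.0, min(1.0, match_degree))
    adjusted = {}
    for obj, (dur, times) in display.items():
        ref_dur, ref_times = reference.get(obj, (dur, times))
        adjusted[obj] = (
            dur + influence * (ref_dur - dur),                        # blended duration
            max(1, round(times + influence * (ref_times - times))),   # blended count
        )
    return adjusted
```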
Optionally, in some embodiments, please refer to fig. 5, where fig. 5 is a schematic structural diagram of a second obtaining module of an embodiment of the information interaction apparatus of the present invention, and the second obtaining module 305 may specifically include:
the obtaining unit 3051 is configured to acquire the data to be point-read and the teaching progress corresponding to the child identifier;
the determining unit 3052 is configured to determine, based on the teaching progress and preset configuration information, target data in the data to be point-read as the second electronic sounding book.
Optionally, in some embodiments, please refer to fig. 6, where fig. 6 is a schematic structural diagram of a second playing module of an embodiment of the information interaction apparatus of the present invention, and the second playing module 306 may specifically include:
a recognition unit 3061, configured to recognize, in response to the gesture operation for the second electronic sounding book, whether the gesture operation is a target gesture operation;
an obtaining unit 3062, configured to, when the gesture operation is recognized as the target gesture operation, acquire the eye movement data corresponding to the child identifier and the coordinate data of each point-reading content in the second electronic sounding book;
a determination unit 3063, configured to determine, according to the eye movement data and the coordinate data, the target point-reading content in the second electronic sounding book;
a playing unit 3064, configured to play the target point-reading content and, after a preset time, output the image information or motion effect of the target point-reading content.
This completes the information interaction process between the information interaction apparatus 30 of the present embodiment and the user.
The specific working principle of the information interaction apparatus of this embodiment is the same as or similar to that described in the above embodiment of the information interaction method, and for details, refer to the detailed description in the above embodiment of the information interaction method.
After determining, in response to an interactive operation on the intelligent sound-emitting device, the user identifier corresponding to that operation, the information interaction apparatus of this embodiment triggers the device to enter a teaching mode when the identifier is a student identifier; in the teaching mode it acquires the historical teaching data and the first electronic sounding book corresponding to the student identifier, adjusts the sounding parameters of the first electronic sounding book based on the historical teaching data, and, in response to a playing operation, plays the first electronic sounding book according to the adjusted parameters. When the identifier is a child identifier, the apparatus triggers the device to enter a click-to-read mode, acquires the second electronic sounding book corresponding to the child identifier in that mode and, in response to a gesture operation, plays it based on the eye movement data corresponding to the child identifier. The apparatus thus enters the corresponding electronic sounding mode according to different user identifiers, can adjust the first electronic sounding book using historical teaching data, and can play the second electronic sounding book based on eye movement data, which enriches the information interaction forms of the intelligent sound-emitting device.
As used herein, the terms "component," "module," "system," "interface," "process," and the like are generally intended to refer to a computer-related entity: hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
FIG. 7 and the following discussion provide a brief, general description of an operating environment of an electronic device in which an information interaction device described herein may be implemented. The operating environment of FIG. 7 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example electronic devices 1012 include, but are not limited to, wearable devices, head-mounted devices, medical health platforms, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more electronic devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
FIG. 7 illustrates an example of an electronic device 1012 that includes one or more embodiments of the information interaction devices of the present invention. In one configuration, electronic device 1012 includes at least one processing unit 1016 and memory 1018. Depending on the exact configuration and type of electronic device, memory 1018 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This configuration is illustrated in FIG. 7 by dashed line 1014.
In other embodiments, electronic device 1012 may include additional features and/or functionality. For example, device 1012 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 7 by storage 1020. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 1020. Storage 1020 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 1018 for execution by processing unit 1016, for example.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 1018 and storage 1020 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by electronic device 1012. Any such computer storage media may be part of electronic device 1012.
Electronic device 1012 may also include communication connection(s) 1026 that allow electronic device 1012 to communicate with other devices. Communication connection(s) 1026 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting electronic device 1012 to other electronic devices. The communication connection 1026 may comprise a wired connection or a wireless connection. Communication connection(s) 1026 may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include signals that: one or more of the signal characteristics may be set or changed in such a manner as to encode information in the signal.
Electronic device 1012 may include input device(s) 1024 such as keyboard, mouse, pen, voice input device, touch input device, infrared camera, video input device, and/or any other input device. Output device(s) 1022 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 1012. Input device 1024 and output device 1022 may be connected to electronic device 1012 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another electronic device may be used as input device 1024 or output device 1022 for electronic device 1012.
The components of electronic device 1012 may be connected by various interconnects, such as a bus. Such interconnects may include Peripheral Component Interconnect (PCI), such as PCI Express, Universal Serial Bus (USB), FireWire (IEEE 1394), optical bus structures, and so forth. In another embodiment, components of electronic device 1012 may be interconnected by a network. For example, memory 1018 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, electronic device 1030 accessible via network 1028 may store computer readable instructions to implement one or more embodiments of the present invention. Electronic device 1012 may access electronic device 1030 and download a part or all of the computer readable instructions for execution. Alternatively, electronic device 1012 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at electronic device 1012 and some at electronic device 1030.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations may constitute computer readable instructions stored on one or more computer readable media, which when executed by an electronic device, will cause the computing device to perform the operations. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in each embodiment provided herein.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, etc. Each apparatus or system described above may perform the method in the corresponding method embodiment.
In summary, although the present invention has been disclosed in the foregoing embodiments, the serial numbers of the embodiments are used for convenience of description only and do not limit their order. Furthermore, the above embodiments are not intended to limit the present invention; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, and the scope of the present invention shall therefore be defined by the appended claims.

Claims (10)

1. An information interaction method, applied to an intelligent sound-emitting device, characterized by comprising:
responding to an interactive operation for the intelligent sound-emitting device, and determining a user identifier corresponding to the interactive operation;
when the user identifier is a student identifier, triggering the intelligent sound-emitting device to enter a teaching mode, and acquiring historical teaching data and a first electronic sounding book corresponding to the student identifier in the teaching mode; adjusting the sounding parameters of the first electronic sounding book based on the historical teaching data; and responding to a playing operation for the first electronic sounding book, and playing the first electronic sounding book according to the adjusted sounding parameters;
when the user identifier is a child identifier, triggering the intelligent sound-emitting device to enter a click-to-read mode, and acquiring a second electronic sounding book corresponding to the child identifier in the click-to-read mode; and responding to a gesture operation for the second electronic sounding book, and playing the second electronic sounding book based on the eye movement data corresponding to the child identifier.
2. The method of claim 1, wherein said adjusting the sounding parameters of the first electronic sounding book based on the historical teaching data comprises:
extracting first teaching information corresponding to the first electronic sounding book in a historical period from the historical teaching data;
extracting second teaching information corresponding to other historical sounding contents in the historical period from the historical teaching data;
calculating the matching degree between the historical sounding content and the first electronic sounding book;
and adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree.
3. The method of claim 2, wherein adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree comprises:
determining the historical sounding content with the matching degree larger than a first preset value as a first reference sounding content;
determining a historical sounding speed, a historical follow-reading speed and a historical follow-reading error rate corresponding to the first electronic sounding book according to the first teaching information;
determining a reference sounding speed, a reference follow-reading speed and a reference follow-reading error rate of the first reference sounding content according to the second teaching information;
converting the historical follow-reading error rate into a first parameter-adjusting weight, and converting the matching degree corresponding to the first reference sounding content into a second parameter-adjusting weight;
outputting a third parameter-adjusting weight based on the ratio between the historical sounding speed and the historical follow-reading speed, and outputting a fourth parameter-adjusting weight based on the ratio between the reference sounding speed and the reference follow-reading speed;
calculating the product of the historical sounding speed and the first, second, third and fourth parameter-adjusting weights to obtain a target sounding speed for playing the first electronic sounding book;
and adjusting the number of times the target sounding content in the first electronic sounding book is played, based on the historical follow-reading error rate and the reference follow-reading error rate.
4. The method of claim 3, wherein determining the historical follow-reading error rate corresponding to the first electronic sounding book comprises:
extracting the follow-reading information corresponding to the first electronic sounding book from the first teaching information;
acquiring an audio text corresponding to the first electronic sounding book;
extracting characters corresponding to a preset time point from the audio text to obtain a plurality of audio single characters;
extracting the pitch value of each audio single character to obtain a plurality of pitch values;
and determining the historical follow-reading error rate corresponding to the first electronic sounding book according to the pitch values and the follow-reading information.
5. The method of claim 2, wherein adjusting the sounding parameters of the first electronic sounding book based on the first teaching information, the second teaching information and the matching degree comprises:
determining second reference sounding content from the historical sounding content with the matching degree larger than a second preset value;
acquiring a target sounding text corresponding to the first electronic sounding book and a reference sounding text corresponding to the second reference sounding content;
extracting the display distribution corresponding to each target sounding object in the target sounding text from the first teaching information;
extracting the reference display distribution corresponding to each reference sounding object in the reference sounding text from the second teaching information;
determining the degree of influence of the reference display distribution on the display distribution based on the matching degree of the second reference sounding content;
and adjusting the display duration and the display times of each target sounding object in the first electronic sounding book according to the degree of influence and the display distribution.
6. The method of claim 1, wherein playing the second electronic sounding book based on the eye movement data corresponding to the child identifier in response to the gesture operation for the second electronic sounding book comprises:
responding to the gesture operation for the second electronic sounding book, and identifying whether the gesture operation is a target gesture operation;
when the gesture operation is identified as the target gesture operation, acquiring the eye movement data corresponding to the child identifier and the coordinate data of each point-reading content in the second electronic sounding book;
determining the target point-reading content in the second electronic sounding book according to the eye movement data and the coordinate data;
and playing the target point-reading content, and outputting image information or a motion effect of the target point-reading content after a preset time.
7. The method of claim 1, wherein acquiring, in the click-to-read mode, a second electronic sounding book corresponding to the child identifier comprises:
acquiring the data to be point-read and the teaching progress corresponding to the child identifier;
and determining target data in the data to be point-read as the second electronic sounding book based on the teaching progress and preset configuration information.
8. An information interaction apparatus, applied to an intelligent sound-emitting device, characterized by comprising:
a determining module, configured to respond to an interactive operation for the intelligent sound-emitting device and determine a user identifier corresponding to the interactive operation;
a first acquisition module, configured to trigger the intelligent sound-emitting device to enter a teaching mode when the user identifier is a student identifier, and acquire historical teaching data and a first electronic sounding book corresponding to the student identifier in the teaching mode; an adjusting module, configured to adjust the sounding parameters of the first electronic sounding book based on the historical teaching data; a first playing module, configured to respond to a playing operation for the first electronic sounding book and play the first electronic sounding book according to the adjusted sounding parameters;
a second acquisition module, configured to trigger the intelligent sound-emitting device to enter a click-to-read mode when the user identifier is a child identifier, and acquire a second electronic sounding book corresponding to the child identifier in the click-to-read mode; and a second playing module, configured to respond to a gesture operation for the second electronic sounding book and play the second electronic sounding book based on the eye movement data corresponding to the child identifier.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the information interaction method according to any one of claims 1 to 7 are implemented when the program is executed by the processor.
10. A storage medium having stored therein processor-executable instructions, the instructions being loaded by one or more processors to perform the method of information interaction of any of claims 1-7.
CN202211001562.9A 2022-08-19 2022-08-19 Information interaction method and device, electronic equipment and storage medium Active CN115083222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211001562.9A CN115083222B (en) 2022-08-19 2022-08-19 Information interaction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211001562.9A CN115083222B (en) 2022-08-19 2022-08-19 Information interaction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115083222A true CN115083222A (en) 2022-09-20
CN115083222B CN115083222B (en) 2022-11-11

Family

ID=83244027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211001562.9A Active CN115083222B (en) 2022-08-19 2022-08-19 Information interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115083222B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5421730A (en) * 1991-11-27 1995-06-06 National Education Training Group, Inc. Interactive learning system providing user feedback
US20110191674A1 (en) * 2004-08-06 2011-08-04 Sensable Technologies, Inc. Virtual musical interface in a haptic virtual environment
CN105825568A (en) * 2016-03-16 2016-08-03 广东威创视讯科技股份有限公司 Portable intelligent interactive equipment
CN107308657A (en) * 2017-07-31 2017-11-03 广州网嘉玩具科技开发有限公司 A kind of novel interactive intelligent toy system
CN108564943A (en) * 2018-04-27 2018-09-21 京东方科技集团股份有限公司 voice interactive method and system
CN109940627A (en) * 2019-01-29 2019-06-28 北京光年无限科技有限公司 It is a kind of towards the man-machine interaction method and system of drawing this reading machine people
CN111079495A (en) * 2019-06-09 2020-04-28 广东小天才科技有限公司 Point reading mode starting method and electronic equipment
CN111787387A (en) * 2020-06-30 2020-10-16 百度在线网络技术(北京)有限公司 Content display method, device, equipment and storage medium
CN213844755U (en) * 2020-12-11 2021-07-30 中山市秦奇电子科技有限公司 AI intelligence is painted originally and is read machine
CN114120324A (en) * 2021-11-25 2022-03-01 长沙师范学院 Intelligent object identification method and system based on big data analysis in click-to-read scene


Also Published As

Publication number Publication date
CN115083222B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
US8793118B2 (en) Adaptive multimodal communication assist system
CN109036464B (en) Pronunciation error detection method, apparatus, device and storage medium
TWI446257B (en) Automatic reading tutoring with parallel polarized language modeling
US10460731B2 (en) Apparatus, method, and non-transitory computer readable storage medium thereof for generating control instructions based on text
US10629192B1 (en) Intelligent personalized speech recognition
US20140324433A1 (en) Method and device for learning language and computer readable recording medium
US20200184958A1 (en) System and method for detection and correction of incorrectly pronounced words
CN109817201A (en) Language learning method and device, electronic equipment and readable storage medium
KR102101496B1 (en) Ar-based writing practice method and program
CN111653274B (en) Wake-up word recognition method, device and storage medium
CN111079423A (en) Method for generating dictation, reading and reporting audio, electronic equipment and storage medium
KR102225435B1 (en) Language learning-training system based on speech to text technology
KR20180012192A (en) Infant Learning Apparatus and Method Using The Same
CN115083222B (en) Information interaction method and device, electronic equipment and storage medium
JP6366179B2 (en) Utterance evaluation apparatus, utterance evaluation method, and program
KR102389153B1 (en) Method and device for providing voice responsive e-book
CN111652165B (en) Mouth shape evaluating method, mouth shape evaluating equipment and computer storage medium
CN111681676B (en) Method, system, device and readable storage medium for constructing audio frequency by video object identification
CN113990351A (en) Sound correction method, sound correction device and non-transient storage medium
CN110428668B (en) Data extraction method and device, computer system and readable storage medium
CN112560431A (en) Method, apparatus, device, storage medium, and computer program product for generating test question tutoring information
CN112951013A (en) Learning interaction method and device, electronic equipment and storage medium
CN111695777A (en) Teaching method, teaching device, electronic device and storage medium
CN111159433A (en) Content positioning method and electronic equipment
JP7355785B2 (en) Information provision method and system based on pointing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant