CN111028853A - Spoken language expressive force evaluation method and system
- Publication number: CN111028853A (application number CN201911164553.XA)
- Authority: CN (China)
- Prior art keywords: spoken language, index, subject, language expression, sentence
- Prior art date: 2019-11-25
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G - PHYSICS
- G10 - MUSICAL INSTRUMENTS; ACOUSTICS
- G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use
- G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208 - Noise filtering
Abstract
An embodiment of the invention provides a method and a system for evaluating spoken-language expressiveness. The method comprises: acquiring a spoken-language expression index set of a subject from the subject's raw voice data; and evaluating the subject's spoken-language expressiveness according to that index set. Compared with conventional approaches, the method reflects the subject's spoken-language expression more objectively, directly, scientifically, and in real time, consumes little manpower during evaluation, is simple to implement, and keeps the professional requirements on the implementer within what existing teaching and training institutions can handle.
Description
Technical Field
The invention relates to the technical field of intelligent teaching, and in particular to a method and a system for evaluating spoken-language expressiveness.
Background
Expression is the act of conveying the results of thinking in the form of language, voice, intonation, facial expression, behavior, and so on. Language falls into two categories, spoken and written, and expressiveness can likewise be divided into spoken expressiveness and written expressiveness.
Conventionally, spoken-language expressiveness is evaluated through on-site interviews by a teacher or examiner. Evaluating an individual this way takes a long time and much labor, and evaluating expressiveness during group communication is beyond what such interviews can manage. In addition, the teacher or examiner judges the subject's spoken-language expression by subjective impression, so the evaluation result is not objective enough.
Disclosure of Invention
To address these problems in the prior art, embodiments of the invention provide a method and a system for evaluating spoken-language expressiveness.
In a first aspect, an embodiment of the present invention provides a method for evaluating spoken-language expressiveness, comprising:
acquiring a spoken-language expression index set of a subject from the subject's raw voice data; and
evaluating the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein
the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and wherein
the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
Further, the method comprises:
providing the subject with a corresponding solution according to the subject's spoken-language expression index set, so as to improve the subject's spoken-language expressiveness.
Further, the method comprises:
acquiring the spoken-language expression index sets of all associated subjects; and
obtaining the spoken-language expressiveness of the group formed by all associated subjects according to their spoken-language expression index sets.
Further, the method comprises:
providing, according to the spoken-language expression index sets of all associated subjects, a corresponding solution to the teacher of the group they form, so as to improve the group's spoken-language expressiveness.
Further, acquiring the spoken-language expression index set of a subject from the subject's raw voice data comprises:
performing analog-to-digital conversion on the acquired raw voice data of the subject to obtain digitized voice data of the subject;
filtering and denoising the digitized voice data;
performing feature recognition on the filtered and denoised voice data to obtain the subject's voice feature values; and
deriving the subject's spoken-language expression index set from the voice feature values.
Further, the method comprises:
displaying the subject's spoken-language expression index set and spoken-language expressiveness.
Further, the method comprises:
displaying the spoken-language expression index sets of all associated subjects and the spoken-language expressiveness of the group they form.
In a second aspect, an embodiment of the present invention provides a spoken-language expressiveness evaluation system, comprising:
an index-set acquisition module, configured to acquire a spoken-language expression index set of a subject from the subject's raw voice data; and
an expressiveness evaluation module, configured to evaluate the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein
the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and wherein
the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
According to the method and system for evaluating spoken-language expressiveness provided above, the subject's spoken-language expression index set is obtained from the subject's raw voice data, and the subject's spoken-language expressiveness is then evaluated from that index set. Compared with conventional approaches, the method reflects the subject's spoken-language expression more objectively, directly, scientifically, and in real time, consumes little manpower during evaluation, is simple to implement, and keeps the professional requirements on the implementer within what existing teaching and training institutions can handle.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for evaluating spoken-language expressiveness according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a spoken-language expressiveness evaluation system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of a method for evaluating spoken-language expressiveness according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
101, acquiring a spoken-language expression index set of a subject from the subject's raw voice data;
102, evaluating the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein
the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and wherein
the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
To illustrate the method more clearly, it is described below as applied in a smart classroom. A smart classroom in the embodiments of the present invention is a classroom equipped with intelligent teaching devices, through which a teacher can teach intelligently and students can learn more effectively.
The intelligent teaching equipment comprises at least a number of devices capable of measuring and feeding back interpersonal-communication efficiency in real time, and a server.
In the smart classroom, each student (i.e., each subject) wears such a device. In an actual learning or social-communication environment, the device captures the raw voice data of the subject wearing it; note that this raw voice data is an analog signal.
After capturing the raw voice data, the device first performs analog-to-digital conversion to turn it into digitized voice data, and then sends the digitized data to the server over Wi-Fi.
On receiving the subject's digitized voice data, the server first obtains, through its interpersonal-communication analysis software, the waveform of the subject's voice over time; it then filters and denoises the waveform, and finally converts the filtered, denoised waveform through feature recognition into the subject's voice feature values, i.e., data describing how the subject's voice changes over time.
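The patent does not spell out a concrete signal-processing chain; the following minimal Python sketch shows one way the filtering and feature-recognition steps could look. The band-pass range, frame sizes, and all function names are illustrative assumptions, and short-time energy merely stands in for whatever feature values the analysis software actually computes.

```python
# Hypothetical preprocessing sketch; assumes 16 kHz mono PCM input.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def denoise(pcm: np.ndarray, fs: int = 16000) -> np.ndarray:
    """Band-pass the digitized waveform to the speech band (300-3400 Hz)."""
    sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, pcm)

def frame_energy(pcm: np.ndarray, fs: int = 16000,
                 frame_ms: int = 25, hop_ms: int = 10) -> np.ndarray:
    """Short-time energy per frame: a stand-in 'voice over time' feature."""
    frame, hop = fs * frame_ms // 1000, fs * hop_ms // 1000
    n = 1 + max(0, (len(pcm) - frame) // hop)
    return np.array([np.sum(pcm[i * hop:i * hop + frame] ** 2)
                     for i in range(n)])
```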
Once the server has this time-varying voice data, it can compute from it a set of indexes, the spoken-language expression index set, that reflects the subject's spoken-language expression in the actual learning or social-communication environment.
The spoken-language expression index set is introduced below.
The spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration. A small computational sketch of the sentence-related subset follows the index definitions below.
Influence index: reflects the subject's ability to change the behavioral state of others;
Centrality index: reflects how closely the subject is connected with his or her peers;
Peer-attention index: reflects the subject's ability to attract peers' attention and responses;
Activity index: reflects the subject's overall level of engagement in group activities;
Teacher-attention index: reflects the degree to which the subject attracts the teacher's attention;
Teaching-fit index: reflects how well the subject keeps up with the teaching activities.
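As a concrete illustration of the sentence-related index subset, the sketch below derives the indexes from utterance segments assumed to be produced by the feature-recognition stage as (start, end) pairs in seconds. Reading "medium", "next-longest", and "next-shortest" as median, second-longest, and second-shortest is our assumption; the patent does not define these terms formally.

```python
def sentence_indices(segments: list[tuple[float, float]]) -> dict[str, float]:
    """Sentence-related subset from utterance (start_s, end_s) pairs."""
    durations = sorted(end - start for start, end in segments)
    if not durations:
        return {}
    n = len(durations)
    return {
        "utterance_count": n,
        "total_utterance_duration": sum(durations),
        "shortest_sentence_duration": durations[0],
        "next_shortest_sentence_duration": durations[min(1, n - 1)],
        "medium_sentence_duration": durations[n // 2],  # median-ish
        "next_longest_sentence_duration": durations[max(n - 2, 0)],
        "longest_sentence_duration": durations[-1],
    }
```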
The subject's spoken-language expressiveness is then evaluated from the spoken-language expression index set. It can be understood that, since spoken-language expression reflects the results of the subject's thinking, the subject's spoken-language expressiveness can in turn indicate how well the subject has grasped the teaching content.
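The patent does not fix how the index set is combined into a single expressiveness score. One plausible reading is a normalized weighted sum over whatever indexes are available, sketched below; the weights are illustrative, and each index is assumed to be pre-normalized to [0, 1].

```python
# Illustrative weights; not specified by the patent.
WEIGHTS = {
    "influence": 0.20, "centrality": 0.15, "peer_attention": 0.15,
    "activity": 0.15, "teacher_attention": 0.15, "teaching_fit": 0.20,
}

def expressiveness_score(indices: dict[str, float]) -> float:
    """Weighted mean over the indexes present, renormalizing the weights."""
    used = {k: w for k, w in WEIGHTS.items() if k in indices}
    total = sum(used.values())
    return sum(indices[k] * w for k, w in used.items()) / total if total else 0.0
```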
The method provided by the embodiment of the present invention obtains the subject's spoken-language expression index set from the subject's raw voice data, and then evaluates the subject's spoken-language expressiveness from that index set. Compared with conventional approaches, it reflects the subject's spoken-language expression more objectively, directly, scientifically, and in real time, consumes little manpower during evaluation, is simple to implement, and keeps the professional requirements on the implementer within what existing teaching and training institutions can handle.
Based on any of the above embodiments, the method provided by the embodiment of the present invention further includes:
and providing a corresponding solution for the testee according to the spoken language expression index set of the testee so as to improve the spoken language expression of the testee.
Specifically, the server derives the subject's communication characteristics, such as spoken-language expressiveness, from the subject's spoken-language expression index set, assembles a personalized solution from those characteristics, and pushes it to the subject. The solution may be a set of measures for improving particular dimensions of the subject (one or more of influence, centrality, peer attention, activity, teacher attention, and teaching fit); it may be a suitable online course matched to the subject's characteristics, which the server displays in the subject's account; or it may be a suitable offline course, for which the server periodically sends the lecturer information, course information, and the providing institution to the subject's account. A rule-based sketch of such matching is given below.
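A minimal rule-based sketch of that matching, assuming indexes normalized to [0, 1]; the threshold and the course catalogue are hypothetical and only illustrate the dimension-to-remedy mapping.

```python
# Hypothetical catalogue mapping a weak dimension to a remedial course.
COURSES = {
    "influence": "persuasive-speaking course (online)",
    "centrality": "group-discussion workshop (offline)",
    "activity": "class-participation drills (online)",
}

def recommend(indices: dict[str, float], threshold: float = 0.4) -> list[str]:
    """Push a course for every dimension scoring below the threshold."""
    weak = [k for k, v in indices.items() if v < threshold]
    return [COURSES[k] for k in weak if k in COURSES]
```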
Note that the subject can also access the server with a browser and, subject to permissions, retrieve his or her own data (one or more of the spoken-language expression index set, the expressiveness evaluation, and the solution).
The method provided by the embodiment of the present invention pushes the corresponding solution to the subject in a targeted manner according to the subject's spoken-language expression index set, so as to improve the subject's spoken-language expressiveness. The pushing consumes little manpower, is simple to implement, and keeps the professional requirements on the implementer within what existing teaching and training institutions can handle. Meanwhile, children and parents can receive advice and training without leaving home, saving the family's time and energy; and a solution based on accurate evaluation points parents without professional knowledge in the right direction, such as what knowledge to supplement or which training class to enroll in, sparing them the time and effort of learning, groping, and trial and error.
Based on any of the above embodiments, the method provided by the embodiment of the present invention further includes:
acquiring the spoken-language expression index sets of all associated subjects; and
obtaining the spoken-language expressiveness of the group formed by all associated subjects according to their spoken-language expression index sets.
Specifically, the smart classroom of the embodiment of the present invention contains multiple subjects, each wearing a device that measures and feeds back interpersonal-communication efficiency in real time. Each device performs analog-to-digital conversion on the raw voice data of the subject wearing it and sends the converted data to the server over Wi-Fi.
The server can thus receive digitized voice data from multiple devices at the same time.
The server gathers and analyzes the digitized voice data of the multiple subjects. First, the interpersonal-communication analysis software in the server obtains each subject's voice waveform over time and filters and denoises it; the server then converts each filtered, denoised waveform through feature recognition into that subject's voice feature values, i.e., each subject's time-varying voice data.
From each subject's time-varying voice data, the server computes the index sets (the spoken-language expression index sets) reflecting each subject's spoken-language expression in the actual learning or social-communication environment. The spoken-language expression index set has been described in detail above and is not repeated here.
The spoken-language expressiveness of the group formed by all associated subjects is obtained from their spoken-language expression index sets. Specifically, software in the server analyzes all subjects' indexes together and, drawing on application experience, integrates each subject's data into data reflecting the state of the whole class, i.e., the group's spoken-language expressiveness.
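The aggregation rule is likewise left open by the patent; the sketch below simply averages each index across the associated subjects, which is one straightforward choice.

```python
def group_indices(per_subject: list[dict[str, float]]) -> dict[str, float]:
    """Group-level index set as the per-index mean over all subjects."""
    if not per_subject:
        return {}
    keys = set().union(*per_subject)
    return {k: sum(d.get(k, 0.0) for d in per_subject) / len(per_subject)
            for k in keys}
```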
It can be understood that, since the group's spoken-language expressiveness reflects the results of the group's thinking, it indicates how well the group has absorbed and mastered the teaching content, helping the teacher grasp the class's learning situation and the classroom state promptly and accurately, and thereby supporting the teacher's teaching.
Based on any of the above embodiments, the method provided by the embodiment of the present invention further includes:
and providing a corresponding solution for a teacher corresponding to a group formed by all the associated testees according to the spoken language expression index set of all the associated testees so as to improve the spoken language expression of the group.
It should be noted that, at the teacher workstation, the teacher may access the server by using the browser, obtain the relevant data in the data server according to the authority, and make a judgment according to the teacher's experience, modify the relevant data, and add an intelligent push solution.
Based on any of the above embodiments, acquiring the spoken-language expression index set of a subject from the subject's raw voice data comprises:
performing analog-to-digital conversion on the acquired raw voice data of the subject to obtain digitized voice data of the subject;
filtering and denoising the digitized voice data;
performing feature recognition on the filtered and denoised voice data to obtain the subject's voice feature values; and
deriving the subject's spoken-language expression index set from the voice feature values.
Based on any of the above embodiments, the method provided by the embodiment of the present invention further includes:
and displaying the spoken language expression index set and the spoken language expression of the testee. It should be noted that the above contents may be displayed on a teacher's machine or a learning terminal held by a tester.
Based on any of the above embodiments, the method provided by the embodiment of the present invention further includes:
and displaying the spoken language expression index set of all the related testees and the spoken language expression force of a group consisting of all the related testees. It should be noted that the above contents may be displayed on a teacher's machine or a learning terminal held by a tester.
Based on any of the above embodiments, FIG. 2 is a schematic structural diagram of a spoken-language expressiveness evaluation system provided by an embodiment of the present invention. As shown in FIG. 2, the system includes:
an index-set acquisition module 201, configured to acquire a spoken-language expression index set of a subject from the subject's raw voice data; and an expressiveness evaluation module 202, configured to evaluate the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
Specifically, the system provided by the embodiment of the present invention executes the method embodiments described above, which are not repeated here. The system obtains the subject's spoken-language expression index set from the subject's raw voice data and then evaluates the subject's spoken-language expressiveness from that index set. Compared with conventional approaches, it reflects the subject's spoken-language expression more objectively, directly, scientifically, and in real time, consumes little manpower during evaluation, is simple to implement, and keeps the professional requirements on the implementer within what existing teaching and training institutions can handle.
In summary, in the method and system provided by the embodiments of the present invention, devices that measure and feed back interpersonal-communication efficiency in real time capture the subjects' voice data, from which each subject's sentence-related indexes and his or her influence, centrality, peer attention, activity, teacher attention, and teaching fit are computed in a timely manner. The teacher giving the class can obtain the group result from the system and thus grasp the classroom state promptly and accurately. Compared with the conventional practice of evaluating spoken-language expression only by questionnaire, the method and system reflect these individual or group indexes more objectively, directly, and in real time. By collecting data across a series of classes, it is easy to compare the performance of students in different groups, or the effects of different courses, and to use the results to guide teaching.
In addition, the solution-pushing algorithm can recommend a customized training scheme for each individual's situation. Children and parents can receive advice and training without leaving home, saving the family's time and energy. A guidance scheme based on accurate evaluation points parents without professional knowledge in the right direction: what knowledge to supplement, which training class to enroll in. The time and effort of learning, groping, and trial and error are saved.
FIG. 3 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention. As shown in FIG. 3, the electronic device may include: a processor 301, a communication interface 302, a memory 303, and a communication bus 304, through which the processor 301, the communication interface 302, and the memory 303 communicate with one another. The processor 301 may invoke a computer program stored in the memory 303 and executable on the processor 301 to perform the methods provided by the embodiments described above, for example: acquiring a spoken-language expression index set of a subject from the subject's raw voice data; and evaluating the subject's spoken-language expressiveness according to that index set; wherein the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
In addition, the logic instructions in the memory 303 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. On this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied as a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the method provided by the above embodiments, for example: acquiring a spoken-language expression index set of a subject from the subject's raw voice data; and evaluating the subject's spoken-language expressiveness according to that index set, with the index set and its sentence-related subset as defined above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for evaluating spoken-language expressiveness, comprising:
acquiring a spoken-language expression index set of a subject from the subject's raw voice data; and
evaluating the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein
the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and wherein
the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
2. The method of claim 1, further comprising:
providing the subject with a corresponding solution according to the subject's spoken-language expression index set, so as to improve the subject's spoken-language expressiveness.
3. The method of claim 1, further comprising:
acquiring the spoken-language expression index sets of all associated subjects; and
obtaining the spoken-language expressiveness of the group formed by all associated subjects according to their spoken-language expression index sets.
4. The method of claim 3, further comprising:
providing, according to the spoken-language expression index sets of all associated subjects, a corresponding solution to the teacher of the group they form, so as to improve the group's spoken-language expressiveness.
5. The method of claim 1, wherein acquiring a spoken-language expression index set of a subject from the subject's raw voice data comprises:
performing analog-to-digital conversion on the acquired raw voice data of the subject to obtain digitized voice data of the subject;
filtering and denoising the digitized voice data;
performing feature recognition on the filtered and denoised voice data to obtain the subject's voice feature values; and
deriving the subject's spoken-language expression index set from the voice feature values.
6. The method of claim 1, further comprising:
displaying the subject's spoken-language expression index set and spoken-language expressiveness.
7. The method of claim 3, further comprising:
displaying the spoken-language expression index sets of all associated subjects and the spoken-language expressiveness of the group formed by all associated subjects.
8. A spoken-language expressiveness evaluation system, comprising:
an index-set acquisition module, configured to acquire a spoken-language expression index set of a subject from the subject's raw voice data; and
an expressiveness evaluation module, configured to evaluate the subject's spoken-language expressiveness according to the subject's spoken-language expression index set; wherein
the spoken-language expression index set comprises any one or more of: a sentence-related index subset, an influence index, a centrality index, a peer-attention index, an activity index, a teacher-attention index, and a teaching-fit index; and wherein
the sentence-related index subset comprises any one or more of: medium sentence duration, next-longest sentence duration, next-shortest sentence duration, longest sentence duration, shortest sentence duration, utterance count, and total utterance duration.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the processor executes the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911164553.XA (CN111028853A) | 2019-11-25 | 2019-11-25 | Spoken language expressive force evaluation method and system |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911164553.XA (CN111028853A) | 2019-11-25 | 2019-11-25 | Spoken language expressive force evaluation method and system |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN111028853A | 2020-04-17 |
Family
ID=70206486
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911164553.XA (pending) | Spoken language expressive force evaluation method and system | 2019-11-25 | 2019-11-25 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN111028853A (en) |
Patent Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101782941A (en) * | 2009-01-16 | 2010-07-21 | 国际商业机器公司 | Method and system for evaluating spoken language skill |
| CN106851216A (en) * | 2017-03-10 | 2017-06-13 | 山东师范大学 | A kind of classroom behavior monitoring system and method based on face and speech recognition |
| CN108154304A (en) * | 2017-12-26 | 2018-06-12 | 重庆大争科技有限公司 | There is the server of Teaching Quality Assessment |
| CN109493968A (en) * | 2018-11-27 | 2019-03-19 | 科大讯飞股份有限公司 | A kind of cognition appraisal procedure and device |
| CN109800663A (en) * | 2018-12-28 | 2019-05-24 | 华中科技大学鄂州工业技术研究院 | Teachers ' teaching appraisal procedure and equipment based on voice and video feature |
| CN110111011A (en) * | 2019-05-09 | 2019-08-09 | 成都终身成长科技有限公司 | A kind of quality of instruction monitoring and managing method, device and electronic equipment |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116343824A (en) * | 2023-05-29 | 2023-06-27 | 新励成教育科技股份有限公司 | Comprehensive evaluation and solution method, system, device and medium for talent expression capability |
| CN116343824B (en) * | 2023-05-29 | 2023-08-15 | 新励成教育科技股份有限公司 | Comprehensive evaluation and solution method, system, device and medium for talent expression capability |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200417 |