CN111369975A - University music scoring method, device, equipment and storage medium based on artificial intelligence - Google Patents
University music scoring method, device, equipment and storage medium based on artificial intelligence
- Publication number
- CN111369975A (application CN202010184698.2A)
- Authority
- CN
- China
- Prior art keywords
- evaluation
- sound
- preset
- music
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Abstract
The embodiments of the application disclose a university music scoring method, device, equipment and storage medium based on artificial intelligence, belonging to the technical field of artificial intelligence. The method comprises: receiving a music evaluation instruction and starting a sound acquisition function to acquire an evaluation sound; recognizing the evaluation sound to obtain character information and music information; acquiring the sounding time of each character and the interval time between characters in the character information, judging whether the sounding time of each character meets a preset time threshold and whether the interval time between characters meets a preset interval time, and adding distinguishing identifiers as marks; obtaining result information after preliminary evaluation based on a preset algorithm model, judging whether the counted result information meets a preset threshold, and if not, directly giving the preliminary evaluation result as the final evaluation. The method and device help to improve the accuracy of music evaluation while making it convenient for users to take music tests.
Description
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a university music scoring method, device, equipment and storage medium based on artificial intelligence.
Background
University examinations take two forms: a conventional written examination and a live (audition) examination. Different subjects use different forms: at present, mainstream subjects such as chemistry and physics are examined in writing, while music and spoken language are mostly examined live, with the candidate and the examiner paired one-to-one. In a music test, for example, the student sings, the examiner listens, and a score is finally given; an oral English examination is likewise a one-to-one assessment in which the examiner gives the score.
University music examination evaluation is an important assessment in a university music major. At present, university music examinations are scored manually; this consumes manpower, inevitably introduces subjective judgement, and makes the evaluation take too long. The prior art therefore suffers from excessive manpower consumption and inaccurate evaluation when university music evaluation is carried out.
Disclosure of Invention
The embodiments of the application aim to provide a university music scoring method, device, equipment and storage medium based on artificial intelligence, so as to solve the problems of excessive manpower consumption and inaccurate music evaluation in the prior art.
In order to solve the above technical problem, an embodiment of the present application provides a university music scoring method based on artificial intelligence, which adopts the following technical solutions:
an artificial intelligence based college music scoring method comprises the following steps:
receiving a music evaluation instruction, and simultaneously starting a sound acquisition function to acquire evaluation sound;
based on a preset sound recognition packet, performing recognition processing on the evaluation sound to acquire character information and music information;
acquiring the sounding time of each character and the interval time between each character in the character information based on a preset music score packet, judging whether the sounding time of each character meets a preset time threshold value based on a preset sounding time table, counting, judging whether the interval time between each character meets the preset interval time based on the preset interval time, and adding a distinguishing identifier for marking;
based on a preset algorithm model, carrying out preliminary evaluation on different marking information, obtaining result information after the preliminary evaluation, judging whether the counted result information meets a preset threshold value, and if not, directly giving out a preliminary evaluation result as a final evaluation;
acquiring vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judging whether each vocalization in the vocal music information reaches a preset standard value based on a decibel reference table, marking the vocalizations that reach the preset standard value, differentially marking those that do not, and counting the vocalization results;
and scoring the evaluation sound based on the sound production result statistic value to obtain a scoring result.
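The two-stage flow above (timing-based preliminary evaluation, then loudness-based secondary evaluation) can be sketched as follows. All field names, the 50 ms tolerance, the 60 dB standard, and the 0.8 pass ratio are illustrative assumptions, not values taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """One sung character with its measured timing and loudness (illustrative)."""
    char: str
    duration_ms: int   # sounding time of the character
    gap_ms: int        # interval time before the next character
    decibel: float     # loudness of the vocalization

def preliminary_score(utterances, ref_durations_ms, ref_gap_ms,
                      tolerance_ms=50, pass_ratio=0.8):
    """First stage: count characters whose sounding time (n) and interval
    time (m) match the preset tables, then check both against a threshold."""
    n = sum(1 for u, ref in zip(utterances, ref_durations_ms)
            if abs(u.duration_ms - ref) <= tolerance_ms)
    m = sum(1 for u in utterances
            if abs(u.gap_ms - ref_gap_ms) <= tolerance_ms)
    total = len(utterances)
    passed = n >= pass_ratio * total and m >= pass_ratio * total
    return n, m, passed

def secondary_score(utterances, standard_db=60.0):
    """Second stage: percentage of vocalizations reaching the loudness standard."""
    hits = sum(1 for u in utterances if u.decibel >= standard_db)
    return round(100.0 * hits / len(utterances), 1)
```

A preliminary failure would short-circuit to a final score; otherwise the secondary score is used.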
Further, in the university music scoring method based on artificial intelligence, starting the sound acquisition function comprises:
after receiving a music evaluation instruction, starting a preset trigger, and directly acquiring sound based on a voice recognition function in artificial intelligence.
Further, the university music scoring method based on artificial intelligence, wherein the identifying and processing of the evaluation sound based on a preset sound identification package comprises:
the preset sound recognition packet is a pre-sorted set containing different language categories and sounds;
identifying the language category of the evaluation sound, and directly converting the evaluation sound into language characters based on speech-to-text conversion; simultaneously, performing noise reduction processing to obtain the accompaniment information in the evaluation sound and the decibel information in the evaluation sound.
Further, in the university music scoring method based on artificial intelligence, acquiring the character information and the music information comprises:
acquiring each language character and the duration of each language character;
acquiring the accompaniment information in the evaluation sound and the decibel information in the evaluation sound.
Further, in the university music scoring method based on artificial intelligence, determining whether the sounding time of each character satisfies the preset time threshold comprises:
obtaining the character information in the evaluation sound and the times in the sounding timetable, comparing them, and judging whether the sounding time of each character in the evaluation sound corresponds to the time in the sounding timetable; if they are consistent, the sounding of the evaluation sound is judged to meet the evaluation index.
Further, in the university music scoring method based on artificial intelligence, preliminarily evaluating the different marking information based on the preset algorithm model, obtaining the result information after the preliminary evaluation, and judging whether the counted result information meets the preset threshold comprises:
counting the number n of the sound production time of each character in the overall evaluation sound reaching the evaluation index;
counting the number m of inter-character interval times in the overall evaluation sound that carry the consistent mark;
and judging whether n and m simultaneously satisfy the set threshold: the threshold is met only if both n and m satisfy it; if either one does not, the set threshold is not met.
Further, in the university music scoring method based on artificial intelligence, judging, based on the decibel reference table, whether each vocalization in the vocal music information reaches the preset standard value comprises:
the decibel reference table contains the decibel value required for each vocalization, and this value is used as the preset standard value; the decibel value of each vocalization in the vocal music information is acquired, compared against it and judged, and the number of vocalizations in the whole vocal music information that reach the preset standard value is obtained.
In order to solve the technical problem, an embodiment of the present application further provides a university music scoring device based on artificial intelligence, which adopts the following technical solutions:
an artificial intelligence based university music scoring device, comprising:
the evaluation sound acquisition module is used for receiving the music evaluation instruction, and simultaneously starting a sound acquisition function to acquire an evaluation sound;
the evaluation sound processing module is used for identifying and processing the evaluation sound based on a preset sound identification packet to acquire character information and music information;
the preliminary evaluation module is used for acquiring the sounding time of each character and the interval time between each character in the character information based on a preset music score package, judging whether the sounding time of each character meets a preset time threshold value based on a preset sounding timetable, counting, judging whether the interval time between each character meets the preset interval time based on the preset interval time, and adding a distinguishing identifier for marking;
the preliminary evaluation result acquisition module is used for preliminarily evaluating different marking information based on a preset algorithm model, acquiring result information after preliminary evaluation, judging whether the counted result information meets a preset threshold value or not, and directly giving a preliminary evaluation result as final evaluation if the counted result information does not meet the preset threshold value;
the secondary evaluation module is used for acquiring vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judging whether each vocalization in the vocal music information reaches a preset standard value based on a decibel reference table, marking the vocalizations reaching the preset standard value, differentially marking those not reaching it, and counting the vocalization results;
and the secondary evaluation result acquisition module is used for scoring the evaluation sound based on the sound production result statistic value to acquire a scoring result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the artificial intelligence based university music scoring method set forth in the embodiments of the present application.
In order to solve the above technical problem, an embodiment of the present application further provides a nonvolatile computer-readable storage medium, which adopts the following technical solutions:
a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of an artificial intelligence based university music scoring method as set forth in an embodiment of the present application.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the embodiments of the application disclose a university music scoring method, device, equipment and storage medium based on artificial intelligence, in which: a music evaluation instruction is received, a sound acquisition function is started, and an evaluation sound is acquired; the evaluation sound is recognized to obtain character information and music information; the sounding time of each character and the interval time between characters in the character information are acquired, it is judged whether the sounding time of each character meets a preset time threshold and whether the interval time between characters meets the preset interval time, and distinguishing identifiers are added as marks; result information after preliminary evaluation is obtained based on a preset algorithm model, it is judged whether the counted result information meets a preset threshold, and if not, the preliminary evaluation result is directly given as the final evaluation; the vocal music information of the evaluation sound is acquired, it is judged whether each vocalization in the vocal music information reaches a preset standard value, and distinguishing marks are applied; finally, the evaluation sound is scored to obtain a scoring result. The method and device help to improve the accuracy of music evaluation while making it convenient for users to take music tests.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a diagram of an exemplary system architecture to which embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of an artificial intelligence based university music scoring method as described in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a university music scoring device based on artificial intelligence according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an evaluation sound acquisition module in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an evaluation sound processing module according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a preliminary evaluation module in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an embodiment of a computer device in an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the university music scoring method based on artificial intelligence provided in the embodiments of the present application is generally executed by a server/terminal device, and accordingly, the university music scoring apparatus based on artificial intelligence is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continuing reference to figure 2, a flowchart of one embodiment of an artificial intelligence based university music scoring method of the present application is shown, the artificial intelligence based university music scoring method comprising the steps of:
Step 201, receiving a music evaluation instruction, and simultaneously starting a sound acquisition function to acquire an evaluation sound.
In this embodiment, starting the sound acquisition function comprises: after receiving the music evaluation instruction, starting a preset trigger, and directly acquiring sound based on a voice recognition function in artificial intelligence.
The specific implementation is as follows: when a student takes a music evaluation, the student first clicks the music evaluation function; after the server receives the instruction to start the evaluation, it starts a preset trigger, the trigger starts the voice recognition function, and the student's voice is acquired and stored.
Step 202, recognizing the evaluation sound based on a preset sound recognition packet to acquire character information and music information.
In this embodiment, recognizing the evaluation sound based on the preset sound recognition packet comprises: using a pre-sorted set containing different language categories and sounds; identifying the language category of the evaluation sound, and directly converting the evaluation sound into language characters based on speech-to-text conversion; and simultaneously performing noise reduction to obtain the accompaniment information and the decibel information in the evaluation sound.
In this embodiment, acquiring the character information and the music information comprises: acquiring each language character and the duration of each language character; and acquiring the accompaniment information and the decibel information in the evaluation sound.
Wherein the preset sound recognition packet comprises a pre-sorted set containing different language categories and sounds. The recognition processing of the evaluation sound comprises: identifying the language category of the evaluation sound, converting the evaluation sound directly into language characters based on speech-to-text conversion, and simultaneously performing noise reduction to obtain the accompaniment information and the decibel information in the evaluation sound. The character information comprises each language character and its duration; the vocal music information comprises the accompaniment information and the decibel information in the evaluation sound.
Explanation: when students take a music evaluation, the music tracks used for evaluation are integrated into the sound recognition packet in advance; the tracks may be in Chinese or in other languages. After the server acquires the evaluation sound, it sends the sound to the recognition packet for recognition: the language category is identified, the sound is converted into characters, the singing time and duration of each character in the evaluation are acquired, and the pauses in the evaluation sound are taken as the interval times between characters. During recognition, the accompaniment information in the evaluation sound and the decibel information of the student's singing are also identified.
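The recognizer output described in this explanation, per-character text information plus separated accompaniment and loudness information, could be represented with two plain records. Every field name here is a hypothetical choice for illustration, not part of the claimed method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CharInfo:
    """Character (text) information: one recognized character and its timing."""
    char: str
    onset_ms: int        # when singing of the character starts
    duration_ms: int     # how long the character is held
    gap_after_ms: int    # pause before the next character (interval time)

@dataclass
class MusicInfo:
    """Vocal music information separated out during noise reduction."""
    language: str              # detected language category
    accompaniment_path: str    # e.g. a path to the extracted accompaniment
    decibels: List[float]      # loudness of each vocalization, in order

def interval_times(chars: List[CharInfo]) -> List[int]:
    """Derive the inter-character interval times from the recognized sequence."""
    return [c.gap_after_ms for c in chars[:-1]]
```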
Step 203, acquiring the sounding time of each character and the interval time between characters in the character information based on the preset music score package, judging whether the sounding time of each character meets the preset time threshold based on the preset sounding timetable, judging whether the interval time between characters meets the preset interval time, and adding distinguishing identifiers for marking.
In an embodiment of the present application, determining whether the sounding time of each character satisfies the preset time threshold comprises: obtaining the character information in the evaluation sound and the times in the sounding timetable, comparing them, and judging whether the sounding time of each character in the evaluation sound corresponds to the time in the timetable; if they are consistent, the sounding of the evaluation sound is judged to meet the evaluation index.
Explanation: the sounding timetable contains the singing time of each character in the test track, expressed in milliseconds in this embodiment. For example, in the line "fifty-six constellations, fifty-six flowers" from the track "Love My China", the first "five" is sung more briefly than the second "five", "six" is sung briefly, and the characters for "constellation" and "flower" are held longer. The millisecond value represents the singing time of each character, i.e. the character time. The character time of each character obtained from the evaluation sound is compared with the preset character time; if the two are consistent, the character meets the evaluation index, and if not, it does not.
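The millisecond comparison above can be sketched directly. Since measured timings never match a table exactly, a tolerance is assumed here rather than the strict equality the text implies; the 80 ms value is an illustrative assumption.

```python
def meets_timing_index(measured_ms, reference_ms, tolerance_ms=80):
    """Compare each character's measured singing time against the preset
    sounding timetable; True means the character meets the evaluation index."""
    return [abs(m - r) <= tolerance_ms
            for m, r in zip(measured_ms, reference_ms)]
```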
In an embodiment of the present application, the preset music score package includes: the method comprises the steps of obtaining music scores corresponding to music tracks used for evaluation in advance, obtaining character information in each music score, and generating a music score package;
in an embodiment of the present application, adding the distinguishing identifier comprises: if the result is consistent, representing it with a specific identifier, and otherwise with a different identifier; different identifier symbols may also be set to represent consistency with different interval times.
Explanation: after the evaluation, results that meet the evaluation index are represented with a fixed identifier. For example, a colour identifier may be used, green representing that the evaluation index is met and red that it is not; or a distinguishing character may be used, "A" representing that the index is met and "B" that it is not. The representation format is not fixed and may be any other identifier format.
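A marking helper matching the colour and letter schemes described above might look like this; the scheme names and labels follow the examples in the text and are otherwise arbitrary.

```python
def add_marks(meets_index, scheme="letter"):
    """Attach a distinguishing identifier to each comparison result:
    'A'/'B' characters or 'green'/'red' colour labels, as in the examples."""
    yes, no = ("A", "B") if scheme == "letter" else ("green", "red")
    return [yes if ok else no for ok in meets_index]
```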
Step 204, performing preliminary evaluation on the different marking information based on a preset algorithm model, acquiring the result information after the preliminary evaluation, judging whether the counted result information meets a preset threshold, and if not, directly giving the preliminary evaluation result as the final evaluation.
In some embodiments of the present application, preliminarily evaluating the different marking information based on the preset algorithm model, obtaining the result information after the preliminary evaluation, and judging whether the counted result information meets the preset threshold comprises: counting the number n of characters in the overall evaluation sound whose sounding time reaches the evaluation index; counting the number m of inter-character interval times in the overall evaluation sound that carry the consistent mark; and judging whether n and m simultaneously satisfy the set threshold: the threshold is met only if both satisfy it, and if either does not, it is not met.
Wherein preliminarily evaluating the different marking information based on the preset algorithm model and obtaining the result information after the preliminary evaluation comprises: counting the number n of characters in the overall evaluation sound whose sounding time reaches the evaluation index, and counting the number m of inter-character interval times in the overall evaluation sound that carry the consistent mark;
wherein judging whether the counted result information meets the preset threshold comprises: judging, based on n and m, whether both simultaneously satisfy the set threshold; the threshold is met only if both satisfy it, and if either does not, it is not met;
in some embodiments of the present application, if the threshold is not met, directly giving the preliminary evaluation result as the final score comprises: scoring the results that do not meet the set threshold; the score may be a percentage based on n and m, which is then converted into a grade.
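The short-circuit described here, giving the preliminary result directly as the final score when either count misses the threshold, can be sketched as follows. The 0.8 threshold and the percentage formula over n and m are assumptions for illustration.

```python
def preliminary_final(n, m, total, threshold=0.8):
    """Return None when both counts meet the set threshold (evaluation then
    continues to the secondary stage); otherwise return a percentage score
    over n and m directly as the final result."""
    if n >= threshold * total and m >= threshold * total:
        return None
    return round(100.0 * (n + m) / (2 * total), 1)
```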
Step 205, acquiring vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judging whether each vocalization in the vocal music information reaches a preset standard value based on the decibel reference table, marking the vocalizations reaching the preset standard value, differentially marking those not reaching it, and counting the vocalization results.
In some embodiments of the present application, judging, based on the decibel reference table, whether each vocalization in the vocal music information reaches the preset standard value comprises: the decibel reference table contains the decibel value required for each vocalization, which is used as the preset standard value; the decibel value of each vocalization in the vocal music information is acquired, compared and judged, and the number of vocalizations in the whole vocal music information reaching the preset standard value is obtained.
For example, the decibel recognition table contains the decibel value required for each utterance, which serves as the preset standard value; the decibel value of each utterance in the vocal music information is acquired, compared, and judged, yielding the number of utterances in the whole vocal music information that reach the preset standard value. Utterances reaching the preset standard value are marked green, utterances failing to reach it are marked red, and finally the number of green utterances is counted.
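The decibel comparison and green/red marking can be sketched as below; the tolerance used to decide whether an utterance "reaches" the preset standard value is an assumption for illustration, since the application does not state how the comparison is quantified.

```python
def mark_utterances(measured_db, standard_db, tolerance=3.0):
    """Compare each utterance's measured decibel value with the preset
    standard value from the decibel recognition table; mark passes green
    and failures red, then count the green marks."""
    marks = []
    for got, want in zip(measured_db, standard_db):
        marks.append("green" if abs(got - want) <= tolerance else "red")
    green_count = sum(1 for m in marks if m == "green")
    return marks, green_count
```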
Step 206: score the evaluation sound based on the utterance result statistic to obtain a scoring result.
In some embodiments of the present application, scoring the evaluation sound based on the utterance result statistic to obtain the scoring result includes: counting the number of utterance marks that reach the preset standard value, converting the counted result into a score or a grade value, and obtaining the scoring result.
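Converting the count of marks reaching the preset standard value into a score or a grade value might look like this; the percentage scale and grade labels are illustrative assumptions rather than values given by the application.

```python
def utterance_score(green_count, total_utterances):
    """Turn the number of utterances reaching the preset standard value
    into a percentage score and a grade value (assumed cutoffs)."""
    score = round(100.0 * green_count / total_utterances)
    grades = [(90, "Excellent"), (75, "Good"), (60, "Pass")]
    for cutoff, label in grades:
        if score >= cutoff:
            return score, label
    return score, "Fail"
```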
According to the artificial-intelligence-based university music scoring method described above, a music evaluation instruction is received and a sound acquisition function is started to acquire an evaluation sound; the evaluation sound is recognized to obtain character information and music information; the utterance time of each character and the interval time between characters are acquired from the character information, whether each utterance time meets a preset time threshold and whether each interval time meets the preset interval time are judged, and distinguishing identifiers are added as marks; result information after a preliminary evaluation is obtained based on a preset algorithm model, whether the counted result information satisfies a preset threshold is judged, and if not, the preliminary evaluation result is directly given as the final evaluation; the vocal music information of the evaluation sound is acquired, whether each utterance in the vocal music information reaches a preset standard value is judged, and the utterances are differentially marked; finally, the evaluation sound is scored to obtain a scoring result. This helps to improve the accuracy of music evaluation and makes it convenient for a user to take a music test.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing associated hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an artificial intelligence based university music scoring apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the university music scoring device 3 based on artificial intelligence according to the present embodiment includes: an evaluation sound acquisition module 301, an evaluation sound processing module 302, a primary evaluation module 303, a primary evaluation result acquisition module 304, a secondary evaluation module 305, and a secondary evaluation result acquisition module 306. Wherein:
the evaluation sound acquisition module 301 is configured to receive a music evaluation instruction, and start a sound acquisition function to acquire an evaluation sound;
the evaluation sound processing module 302 is configured to perform recognition processing on the evaluation sound based on a preset voice recognition package to obtain text information and music information;
the preliminary evaluation module 303 is configured to acquire, based on a preset music score package, the utterance time of each character and the interval time between characters in the text information; to judge, based on a preset sounding schedule, whether the utterance time of each character meets a preset time threshold, and perform statistics; to judge whether the interval time between characters meets the preset interval time; and to add distinguishing identifiers for marking;
a preliminary evaluation result obtaining module 304, configured to perform preliminary evaluation on different marking information based on a preset algorithm model, obtain result information after the preliminary evaluation, and determine whether the counted result information satisfies a preset threshold, and if not, directly give a preliminary evaluation result as a final evaluation;
the secondary evaluation module 305 is configured to acquire vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judge, based on a decibel recognition table, whether each utterance in the vocal music information reaches a preset standard value, mark the utterances reaching the preset standard value, differentially mark those not reaching it, and count the utterance results;
and the secondary evaluation result acquisition module 306 is configured to score the evaluation sound based on the utterance result statistic to obtain a scoring result.
In some embodiments of the present application, as shown in fig. 4, fig. 4 is a schematic structural diagram of an evaluation sound obtaining module in an embodiment of the present application, where the evaluation sound obtaining module 301 includes an evaluation instruction obtaining unit 301a, a trigger unit 301b, and an evaluation sound obtaining unit 301 c.
In some embodiments of the present application, the evaluation instruction obtaining unit 301a is configured to receive an evaluation request from a user.
In some embodiments of the present application, the trigger unit 301b is configured to trigger the evaluation sound acquiring unit 301c after the server receives an evaluation request from the user.
In some embodiments of the present application, the evaluation sound acquiring unit 301c is configured to record and store the sound sung by the user.
In some embodiments of the present application, as shown in fig. 5, fig. 5 is a schematic structural diagram of an evaluation sound processing module in the embodiments of the present application, and the evaluation sound processing module 302 includes an evaluation sound recognition unit 302a, a text information obtaining unit 302b, and a vocal music information obtaining unit 302 c.
In some embodiments of the present application, the evaluation sound recognition unit 302a is configured to recognize the language type to which the evaluation sound belongs based on a preset speech recognition package and directly convert the evaluation sound into language characters through speech-to-text conversion; noise reduction processing is performed at the same time, and the accompaniment information in the evaluation sound and the decibel information in the evaluation sound are obtained.
In some embodiments of the present application, the text information obtaining unit 302b is configured to obtain each language character, the duration of each language character, and the interval time between characters.
In some embodiments of the present application, the vocal music information obtaining unit 302c is configured to obtain the accompaniment information in the evaluation sound and the decibel information in the evaluation sound.
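Taken together, the three units of the evaluation sound processing module produce recognized characters, per-character timing, accompaniment, and decibel data. A minimal container for these outputs could be sketched as follows; all field names are assumptions for illustration, not identifiers from the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationSound:
    """Outputs of the evaluation sound processing module: recognized
    characters with timing, plus accompaniment and decibel information."""
    language: str                 # language type recognized from the sound
    characters: List[str]         # each recognized language character
    durations: List[float]        # utterance time of each character (seconds)
    intervals: List[float]        # interval time between adjacent characters (seconds)
    decibels: List[float] = field(default_factory=list)  # per-utterance decibel values
    accompaniment: bytes = b""    # raw accompaniment information
```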
In some embodiments of the present application, as shown in fig. 6, fig. 6 is a schematic structural diagram of a preliminary evaluation module in the embodiments of the present application, where the preliminary evaluation module 303 includes a text duration evaluation unit 303a and a text interval evaluation unit 303 b.
In some embodiments of the present application, the text duration evaluation unit 303a is configured to obtain the text information in the evaluation sound, compare it with the times in the sounding schedule, and judge whether the utterance time of each character in the evaluation sound corresponds to the time in the sounding schedule; if the times are consistent, the utterance of the evaluation sound is judged to meet the evaluation index.
In some embodiments of the present application, the text interval time evaluation unit 303b is configured to obtain the text information in the evaluation sound, compare it with the preset interval times, and judge whether the interval time between adjacent characters in the evaluation sound corresponds to the preset interval time; if the times are consistent, the utterance of the evaluation sound is judged to meet the evaluation index.
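Both evaluation units perform the same kind of comparison: a measured time is judged "consistent" with a scheduled time. A sketch of that judgment, assuming a fixed tolerance defines consistency (the application does not specify one):

```python
def meets_schedule(measured, scheduled, tolerance=0.1):
    """Judge, per character, whether a measured time (utterance duration
    or inter-character interval, in seconds) corresponds to the schedule
    entry within an assumed tolerance."""
    return [abs(got - want) <= tolerance for got, want in zip(measured, scheduled)]
```

The resulting boolean lists are exactly the per-character marks that the preliminary evaluation counts as n and m.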
In some embodiments of the present application, the preliminary evaluation result obtaining module 304 is configured to judge whether the number n of character durations meeting the index and the number m of character intervals meeting the index simultaneously satisfy the preset threshold; if so, the preliminary evaluation passes; otherwise, the preliminary evaluation result is converted into the final evaluation and the subsequent vocal music testing part is not performed.
According to the artificial-intelligence-based university music scoring device described above, a music evaluation instruction is received and a sound acquisition function is started to acquire an evaluation sound; the evaluation sound is recognized to obtain character information and music information; the utterance time of each character and the interval time between characters are acquired from the character information, whether each utterance time meets a preset time threshold and whether each interval time meets the preset interval time are judged, and distinguishing identifiers are added as marks; result information after a preliminary evaluation is obtained based on a preset algorithm model, whether the counted result information satisfies a preset threshold is judged, and if not, the preliminary evaluation result is directly given as the final evaluation; the vocal music information of the evaluation sound is acquired, whether each utterance in the vocal music information reaches a preset standard value is judged, and the utterances are differentially marked; finally, the evaluation sound is scored to obtain a scoring result. This helps to improve the accuracy of music evaluation and makes it convenient for a user to take a music test.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 7, fig. 7 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 7 comprises a memory 7a, a processor 7b, and a network interface 7c, which are communicatively connected to each other via a system bus. It is noted that only a computer device 7 having components 7a-7c is shown in the figure, but it should be understood that not all of the shown components need be implemented; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 7a includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 7a may be an internal storage unit of the computer device 7, such as a hard disk or memory of the computer device 7. In other embodiments, the memory 7a may also be an external storage device of the computer device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 7. Of course, the memory 7a may also comprise both an internal storage unit of the computer device 7 and an external storage device thereof. In this embodiment, the memory 7a is generally used for storing the operating system installed on the computer device 7 and various types of application software, such as the program code of the artificial-intelligence-based university music scoring method. Further, the memory 7a may also be used to temporarily store various types of data that have been output or are to be output.
The processor 7b may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 7b is typically used to control the overall operation of the computer device 7. In this embodiment, the processor 7b is configured to execute the program code stored in the memory 7a or process data, such as the program code of the artificial intelligence based university music scoring method.
The network interface 7c may comprise a wireless network interface or a wired network interface, and the network interface 7c is typically used for establishing a communication connection between the computer device 7 and other electronic devices.
The present application further provides a non-transitory computer-readable storage medium storing an artificial-intelligence-based university music scoring program, which is executable by at least one processor to cause the at least one processor to perform the steps of the artificial-intelligence-based university music scoring method described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely some, and not all, of the embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the patent. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their features may be replaced with equivalents. All equivalent structures made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.
Claims (10)
1. An artificial intelligence based college music scoring method is characterized by comprising the following steps:
receiving a music evaluation instruction, and simultaneously starting a sound acquisition function to acquire evaluation sound;
based on a preset voice recognition packet, performing recognition processing on the evaluation voice to acquire character information and music information;
acquiring, based on a preset music score package, the utterance time of each character and the interval time between characters in the character information; judging, based on a preset sounding schedule, whether the utterance time of each character meets a preset time threshold, and performing statistics; judging whether the interval time between characters meets the preset interval time; and adding distinguishing identifiers for marking;
based on a preset algorithm model, carrying out preliminary evaluation on different marking information, obtaining result information after the preliminary evaluation, judging whether the counted result information meets a preset threshold value, and if not, directly giving out a preliminary evaluation result as a final evaluation;
acquiring vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judging, based on a decibel recognition table, whether each utterance in the vocal music information reaches a preset standard value, marking the utterances reaching the preset standard value, differentially marking the utterances not reaching the preset standard value, and counting the utterance results;
and scoring the evaluation sound based on the utterance result statistic to obtain a scoring result.
2. The artificial intelligence based university music scoring method as recited in claim 1, wherein the starting a sound acquisition function comprises: after receiving the music evaluation instruction, starting a preset trigger, and directly acquiring sound based on a voice recognition function in artificial intelligence.
3. The artificial intelligence based university music scoring method according to claim 2, wherein the identifying process of the scoring voice based on a preset voice recognition package comprises:
providing a pre-sorted set containing different language categories and corresponding sounds;
identifying the language type of the evaluation sound, and directly converting the evaluation sound into language characters through speech-to-text conversion; and simultaneously performing noise reduction processing to obtain accompaniment information in the evaluation sound and decibel information in the evaluation sound.
4. The artificial intelligence based university music scoring method as recited in claim 3, wherein the obtaining textual information and music information comprises:
acquiring each language character and the duration of each language character;
acquiring accompaniment information in the evaluation sound and decibel information in the evaluation sound.
5. The artificial intelligence based university music scoring method according to claim 4, wherein the determining whether the utterance time of each word meets a preset time threshold comprises:
and obtaining character information in the evaluation sound and time in the sounding time table for comparison, judging whether the sounding time of each character in the evaluation sound corresponds to the time in the sounding time table, and if the sounding time is consistent, judging that the sounding of the evaluation sound meets the evaluation index.
6. The university music scoring method based on artificial intelligence according to claim 5, wherein the preliminary evaluation of different labeled information based on a preset algorithm model, obtaining result information after the preliminary evaluation, and determining whether the result information after statistics meets a preset threshold comprises:
counting the number n of characters in the overall evaluation sound whose utterance time reaches the evaluation index;
counting the number m of inter-character interval times in the overall evaluation sound that bear the conformity mark;
and judging, based on n and m, whether n and m simultaneously satisfy the set threshold; if both do, the threshold is satisfied, and if either one does not, the set threshold is not satisfied.
7. The artificial intelligence based university music scoring method according to claim 6, wherein the judging, based on the decibel recognition table, whether each utterance in the vocal music information reaches a preset standard value comprises:
the decibel recognition table contains the decibel value required for each utterance, which serves as the preset standard value; the decibel value of each utterance in the vocal music information is acquired, compared, and judged, yielding the number of utterances in the whole vocal music information that reach the preset standard value.
8. An artificial intelligence based university music scoring device, comprising:
the evaluation sound acquisition module is used for receiving the music evaluation instruction, and simultaneously starting a sound acquisition function to acquire an evaluation sound;
the evaluation sound processing module is used for identifying and processing the evaluation sound based on a preset sound identification packet to acquire character information and music information;
the preliminary evaluation module is used for acquiring, based on a preset music score package, the utterance time of each character and the interval time between characters in the character information, judging, based on a preset sounding schedule, whether the utterance time of each character meets a preset time threshold and performing statistics, judging whether the interval time between characters meets the preset interval time, and adding distinguishing identifiers for marking;
the preliminary evaluation result acquisition module is used for preliminarily evaluating different marking information based on a preset algorithm model, acquiring result information after preliminary evaluation, judging whether the counted result information meets a preset threshold value or not, and directly giving a preliminary evaluation result as final evaluation if the counted result information does not meet the preset threshold value;
the secondary evaluation module is used for acquiring vocal music information corresponding to the evaluation sound based on the preliminary evaluation result, judging, based on a decibel recognition table, whether each utterance in the vocal music information reaches a preset standard value, marking the utterances reaching the preset standard value, differentially marking those not reaching it, and counting the utterance results;
and the secondary evaluation result acquisition module is used for scoring the evaluation sound based on the utterance result statistic to acquire a scoring result.
9. A computer device comprising a memory having a computer program stored therein and a processor which, when executing the computer program, carries out the steps of the artificial intelligence based university music scoring method as claimed in any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the artificial intelligence based university music scoring method as recited in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010184698.2A CN111369975A (en) | 2020-03-17 | 2020-03-17 | University music scoring method, device, equipment and storage medium based on artificial intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111369975A true CN111369975A (en) | 2020-07-03 |
Family
ID=71211232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010184698.2A Pending CN111369975A (en) | 2020-03-17 | 2020-03-17 | University music scoring method, device, equipment and storage medium based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111369975A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112201100A (en) * | 2020-10-27 | 2021-01-08 | 暨南大学 | Music singing scoring system and method for evaluating artistic quality of primary and secondary schools |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101894552A (en) * | 2010-07-16 | 2010-11-24 | 安徽科大讯飞信息科技股份有限公司 | Speech spectrum segmentation based singing evaluating system |
CN102110435A (en) * | 2009-12-23 | 2011-06-29 | 康佳集团股份有限公司 | Method and system for karaoke scoring |
JP2013195738A (en) * | 2012-03-21 | 2013-09-30 | Yamaha Corp | Singing evaluation device |
CN107103915A (en) * | 2016-02-18 | 2017-08-29 | 广州酷狗计算机科技有限公司 | A kind of audio data processing method and device |
CN107978322A (en) * | 2017-11-27 | 2018-05-01 | 北京酷我科技有限公司 | A kind of K songs marking algorithm |
CN108122561A (en) * | 2017-12-19 | 2018-06-05 | 广东小天才科技有限公司 | A kind of spoken voice assessment method and electronic equipment based on electronic equipment |
CN109448754A (en) * | 2018-09-07 | 2019-03-08 | 南京光辉互动网络科技股份有限公司 | A kind of various dimensions singing marking system |
CN109903778A (en) * | 2019-01-08 | 2019-06-18 | 北京雷石天地电子技术有限公司 | The method and system of real-time singing marking |
CN109979485A (en) * | 2019-04-29 | 2019-07-05 | 北京小唱科技有限公司 | Audio evaluation method and device |
CN110660383A (en) * | 2019-09-20 | 2020-01-07 | 华南理工大学 | Singing scoring method based on lyric and singing alignment |
CN110688556A (en) * | 2019-09-21 | 2020-01-14 | 郑州工程技术学院 | Remote Japanese teaching interaction system and interaction method based on big data analysis |
CN110718239A (en) * | 2019-10-15 | 2020-01-21 | 北京达佳互联信息技术有限公司 | Audio processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20200703 |