CN113993034A - Directional sound propagation method and system for microphone - Google Patents

Directional sound propagation method and system for microphone

Info

Publication number
CN113993034A
CN113993034A (application CN202111370975.XA)
Authority
CN
China
Prior art keywords: propagation, information, directional, library, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111370975.XA
Other languages
Chinese (zh)
Other versions
CN113993034B (en)
Inventor
王刚 (Wang Gang)
毕馨月 (Bi Xinyue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202111370975.XA
Publication of CN113993034A
Application granted
Publication of CN113993034B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 Details of transducers, loudspeakers or microphones
    • H04R 1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R 1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R 1/34 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
    • H04R 1/345 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means for loudspeakers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 15/08 Speech classification or search
    • G10L 15/16 Speech classification or search using artificial neural networks
    • G10L 15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Otolaryngology (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention discloses a directional sound propagation method and system for a microphone. The method comprises: obtaining first collected language information; constructing a directional semantic recognition library; performing semantic recognition and, if the first collected language information triggers the first language pointing feature, capturing the motion of the first teacher user with an inertial sensor to obtain a first real-time motion feature; obtaining a first propagation space feature of the first teacher user; inputting the first real-time motion feature and the first propagation space feature into a propagation parameter training model as input information and obtaining first output information from the model; obtaining a second propagation parameter; and realizing directional propagation of the microphone according to the second propagation parameter. This solves the technical problems that directional sound propagation recognizes the teacher's language inaccurately and that the accuracy and scientific soundness of directional sound propagation are poor.

Description

Directional sound propagation method and system for microphone
Technical Field
The invention relates to the field of teaching, in particular to a directional sound propagation method and system for a microphone.
Background
While a teacher is lecturing, students may behave in ways that violate classroom discipline, which seriously affects teaching effect and quality; the teacher's voice is quiet and poorly directed, so it is difficult to remind the students and maintain discipline. The teacher therefore needs to wear a microphone connected to a directional sound device, also called a directional speaker or directional loudspeaker. By using directional sound technology, a comparatively effective mode of sound propagation is provided that lets sound propagate directionally, and the propagation range of the sound can be controlled. Directional sound can be projected directly into a specific area, so that people inside the area hear the sound while people outside the area hear almost nothing; an independent audio space is thus created with high precision, the surrounding environment is not disturbed, and an immersive experience is produced.
However, in the process of implementing the technical solution of the present application, the inventors found that the above technology has at least the following technical problems:
directional sound propagation recognizes the teacher's language inaccurately, and the accuracy and scientific soundness of directional propagation are poor, so the application effect of the microphone and the directional sound device in the classroom is not ideal.
Disclosure of Invention
Aiming at the defects in the prior art, the directional sound propagation method and system for a microphone provided herein aim to solve the technical problems that directional sound propagation recognizes the teacher's language inaccurately and that the accuracy and scientific soundness of directional sound propagation are poor. The technical effects of improving teaching quality are achieved by collecting the teacher's classroom language and recognizing its semantics, which improves the sentence recognition accuracy of directional propagation, and by capturing the teacher's motion and analyzing the propagation space characteristics, which improves the accuracy and scientific soundness of directional sound propagation.
In a first aspect, an embodiment of the present application provides a directional sound propagation method for a microphone, where the method includes: obtaining first collected language information by collecting and entering the classroom language of a first teacher user in real time; constructing a directional semantic recognition library; inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether a first language pointing feature is triggered; if the first language pointing feature is triggered by the first collected language information, capturing motion features of the first teacher user with the inertial sensor to obtain a first real-time motion feature; obtaining a first propagation space feature of the first teacher user; inputting the first real-time motion feature and the first propagation space feature into a propagation parameter training model as input information, and obtaining first output information from the propagation parameter training model, wherein the first output information is a first propagation parameter; and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound device, and realizing directional propagation of the microphone according to the second propagation parameter.
In another aspect, the present application also provides a directional sound propagation system for a microphone, the system comprising: a first obtaining unit, configured to collect and enter the classroom language of a first teacher user in real time to obtain first collected language information; a first construction unit, configured to construct a directional semantic recognition library; a first judgment unit, configured to input the first collected language information into the directional semantic recognition library for semantic recognition and judge whether a first language pointing feature is triggered; a second obtaining unit, configured to capture motion features of the first teacher user with an inertial sensor to obtain a first real-time motion feature if the first language pointing feature is triggered by the first collected language information; a third obtaining unit, configured to obtain a first propagation space feature of the first teacher user; a fourth obtaining unit, configured to input the first real-time motion feature and the first propagation space feature into a propagation parameter training model as input information and obtain first output information from the model, wherein the first output information is a first propagation parameter; and a fifth obtaining unit, configured to obtain a second propagation parameter according to the first propagation parameter and the first propagation sound device, and to realize directional propagation of the microphone according to the second propagation parameter.
In another aspect, the present invention provides a directional sound propagation system for a microphone, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the method comprises the steps that a first teacher user is subjected to classroom language real-time input and collection to obtain first collected language information; constructing a directional semantic recognition library; inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether to trigger a first language directional feature; if the first language pointing feature is triggered by the first collected language information, capturing motion features of the first teacher user according to the inertial sensor to obtain a first real-time motion feature; obtaining a first propagation space characteristic of the first teacher user; inputting the first real-time action characteristic and the first propagation space characteristic into a propagation parameter training model as input information, and training the model according to the propagation parameter to obtain first output information, wherein the first output information is a first propagation parameter; and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound, and realizing the directional propagation of the microphone according to the second propagation parameter. The directional sound propagation method and system for the microphone have the advantages that the language acquisition and semantic recognition of teachers are achieved, the sentence recognition accuracy of directional propagation is improved, the motion of teachers is captured and the spatial characteristic analysis is propagated, the directional propagation accuracy and the scientificity of sound are improved, and accordingly the teaching quality is improved.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application can be understood more clearly and implemented according to the content of the description, and in order that the above and other objects, features and advantages of the present application become more readily apparent, the detailed description of the present application is given below.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic flow chart of a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a process of constructing a directional semantic recognition library in a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a first keyword library obtained in a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating obtaining a first propagation space characteristic in a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating adjustment of a first sound wave divergence angle in a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 6 is a schematic flowchart of obtaining a first relative distance in a directional sound propagation method for a microphone according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a directional sound propagation system for a microphone according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a first constructing unit 12, a first judging unit 13, a second obtaining unit 14, a third obtaining unit 15, a fourth obtaining unit 16, a fifth obtaining unit 17, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiment of the application provides a directional sound propagation method and system for a microphone, which solve the technical problems in the prior art that directional sound propagation recognizes the teacher's language inaccurately and that the accuracy and scientific soundness of directional sound propagation are poor. The technical effects of improving teaching quality are achieved by collecting the teacher's classroom language and recognizing its semantics, which improves the sentence recognition accuracy of directional propagation, and by capturing the teacher's motion and analyzing the propagation space characteristics, which improves the accuracy and scientific soundness of directional sound propagation.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
While a teacher is lecturing, students may behave in ways that violate classroom discipline, which seriously affects teaching effect and quality; the teacher's voice is quiet and poorly directed, so it is difficult to remind the students and maintain discipline. The teacher therefore needs to wear a microphone connected to a directional sound device. By using directional sound technology, the directional sound device provides a comparatively effective mode of sound propagation that lets sound propagate directionally, allows the propagation range of the sound to be controlled, does not disturb the surrounding environment, and produces an immersive experience. However, the prior art has the technical problems that directional sound propagation recognizes the teacher's language inaccurately and that the accuracy and scientific soundness of directional sound propagation are poor.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a directional sound propagation method for a microphone, and the method comprises the following steps: acquiring first acquired language information by inputting and acquiring classroom languages of a first teacher user in real time; constructing a directional semantic recognition library; inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether to trigger a first language directional feature; if the first language pointing feature is triggered by the first collected language information, capturing motion features of the first teacher user according to the inertial sensor to obtain a first real-time motion feature; obtaining a first propagation space characteristic of the first teacher user; inputting the first real-time action characteristic and the first propagation space characteristic into a propagation parameter training model as input information, and training the model according to the propagation parameter to obtain first output information, wherein the first output information is a first propagation parameter; and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound, and realizing directional propagation of the microphone according to the second propagation parameter.
For better understanding of the above technical solutions, the following detailed descriptions will be provided in conjunction with the drawings and the detailed description of the embodiments.
Example one
As shown in fig. 1, the present application provides a directional sound propagation method for a microphone, wherein the method is applied to a microphone sound system, the system is communicatively connected to an inertial sensor, and the method includes:
step S100: acquiring first acquired language information by inputting and acquiring classroom languages of a first teacher user in real time;
specifically, many schools currently equip teachers with microphones in class rooms. The microphone acoustic system comprises a microphone acoustic system, a microphone, a speaker, a microphone, a first teacher user, a second teacher user, a microphone acoustic system and a microphone acoustic system, wherein the first teacher user is any teacher who uses the microphone to conduct teaching, collects classroom language of the first teacher user in real time, and inputs the classroom language, the classroom language comprises classroom interaction with students, and the students are guided to pay attention to and listen to and speak, give lessons to the students and a series of words. And acquiring the first acquisition language information through real-time acquisition and entry. The first collected voice information is real and close to actual teaching, and a foundation can be laid for building a subsequent semantic recognition library.
Step S200: constructing a directional semantic recognition library;
further, as shown in fig. 2, the building of the semantic recognition library further includes:
step S210: constructing a directional keyword library, wherein the directional keyword library comprises a first keyword library and a second keyword library;
step S220: generating a first recognition sentence library according to the pointing keyword library;
step S230: obtaining a second recognition sentence library according to the first recognition sentence library, wherein the second recognition sentence library is a similar meaning sentence of the first recognition sentence library;
step S240: and constructing the pointed semantic recognition library according to the first recognition sentence library and the second recognition sentence library.
Specifically, constructing the directional semantic recognition library first requires constructing the directional keyword library. The first keyword library is built by collecting and entering information about the students in the classes taught by the first teacher user, including names, whether a student is a class officer, and so on. The second keyword library is built by collecting and entering the negative sentences the first teacher user commonly uses in class, which are typically used to remind students to concentrate, to observe classroom discipline, and the like. The directional keyword library comprises the first keyword library and the second keyword library. The first keyword library and the second keyword library are combined into the first recognition sentence library, for example by pairing a student's name with "pay attention to discipline" or with "stop whispering", thereby generating the first recognition sentence library. Because a keyword library has certain limitations, in order to build a comprehensive directional semantic recognition library, similar-meaning sentences of the first recognition sentence library are obtained from the first recognition sentence library and big data to form the second recognition sentence library. For example, synonyms of "whispering" include "talking" and "chatting", and the same meaning can be expressed with different sentence patterns such as declarative sentences, rhetorical questions, interrogative sentences and double negatives. The directional semantic recognition library is then constructed from the first recognition sentence library and the second recognition sentence library, achieving the goal of a comprehensive and highly targeted directional semantic recognition library.
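As a non-limiting illustration of the library construction described above (the student names, reminder phrases, synonym table and function names below are assumptions made for the example, not content taken from the disclosure), the pairing of the two keyword libraries and the similar-meaning expansion could be sketched in Python as follows:

```python
from itertools import product

# First keyword library: names of students in the classes taught (hypothetical examples).
FIRST_KEYWORDS = ["Zhang San", "Li Si"]
# Second keyword library: negative sentences the teacher commonly uses in class.
SECOND_KEYWORDS = ["pay attention to discipline", "stop whispering"]
# Stand-in for the big-data expansion that yields similar-meaning sentences.
SYNONYMS = {"stop whispering": ["stop talking", "stop chatting"]}

def build_recognition_library() -> list[str]:
    """Pair the two keyword libraries into the first recognition sentence library,
    then expand it with similar-meaning variants to form the second library."""
    first_library = [f"{name}, {phrase}" for name, phrase in product(FIRST_KEYWORDS, SECOND_KEYWORDS)]
    second_library = [
        sentence.replace(phrase, variant)
        for sentence in first_library
        for phrase, variants in SYNONYMS.items() if phrase in sentence
        for variant in variants
    ]
    # The directional semantic recognition library is the union of both sentence libraries.
    return first_library + second_library

if __name__ == "__main__":
    print(build_recognition_library())
```

In an actual deployment the similar-meaning expansion would come from big-data resources rather than a fixed synonym table.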
Step S300: inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether to trigger a first language directional feature;
step S400: if the first language pointing feature is triggered by the first collected language information, capturing motion features of the first teacher user according to the inertial sensor to obtain a first real-time motion feature;
specifically, the collected language information is input into the directional semantic recognition library, a semantic recognition result can be obtained through a semantic recognition technology, such as a semantic analysis technology, an NLP technology, and the like, the first language directional feature is triggered if the semantic recognition result is a matching success, and the first language directional feature cannot be triggered if the semantic recognition result is a matching failure. The language pointing feature is a feature having language directionality in language information, and the language directionality can be understood as a phenomenon that a language points to a certain student or a certain class. And if the first language pointing feature is triggered by the first collected language information, capturing the motion feature of the first teacher user according to the inertial sensor. The inertial sensor is worn on the body of the first teacher user, and can capture the motion characteristics of the first teacher user by detecting and measuring the motion acceleration, the inclination, the impact, the rotation and the multi-degree-of-freedom motion of the first teacher user to obtain real-time first real-time motion characteristics, wherein the first real-time motion characteristics comprise various motions, motion angles, motion speeds, motion amplitudes and the like of the first teacher user. By capturing the real-time actions of the first teacher user, the real-time state of the teacher can be accurately mastered.
Step S500: obtaining a first propagation space characteristic of the first teacher user;
further, as shown in fig. 4, after obtaining the first propagation space characteristic of the first instructor user, step S500 in this embodiment of the present application further includes:
step S510: obtaining first geometric space information of the first teacher user;
step S520: obtaining feature point distribution information according to the first geometric space information, wherein the feature point distribution information comprises seat distribution information and podium distribution information;
step S530: determining a space limit threshold value of microphone propagation according to the feature point distribution information;
step S540: taking a spatial restriction threshold for the microphone propagation as the first propagation spatial feature.
Specifically, the first geometric space information is the spatial information of the room in which the first teacher user teaches, such as the area and layout of the classroom, lecture hall or auditorium. Feature point distribution information is obtained from the first geometric space information; it includes seat distribution information and podium distribution information, such as the classroom area, the spacing between desks and chairs, and the geometric parameters of the podium (length, width, height and shape). The feature point distribution information affecting sound propagation comprises the seat distribution information and the podium distribution information. From the feature point distribution information, the seat distribution of the first and last rows is obtained so as to determine the maximum and minimum loudness of the sound propagated by the microphone; this loudness range is the spatial limitation threshold of microphone propagation, which is taken as the first propagation space feature. The first propagation space feature constrains microphone propagation and thus lays a foundation for directional sound propagation that does not disturb the teaching of neighbouring classes.
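As a non-limiting illustration, the spatial limitation threshold could be approximated from the first-row and last-row seat distances with a free-field attenuation model; the point-source spreading law and all numbers below are assumptions made for the sketch, not values from the disclosure:

```python
import math

def propagation_space_feature(first_row_m: float, last_row_m: float,
                              target_spl_db: float = 60.0) -> dict:
    """Loudness window for the microphone: the farthest seat must still reach the
    target SPL, while the nearest seat sets the lower bound of useful output."""
    def source_level(distance_m: float) -> float:
        # Free-field spherical spreading: level falls by 20*log10(d) relative to 1 m.
        return target_spl_db + 20.0 * math.log10(max(distance_m, 1.0))
    return {"min_level_db": source_level(first_row_m), "max_level_db": source_level(last_row_m)}

print(propagation_space_feature(first_row_m=2.0, last_row_m=10.0))
# -> roughly {'min_level_db': 66.0, 'max_level_db': 80.0}
```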
Step S600: inputting the first real-time motion feature and the first propagation space feature into a propagation parameter training model as input information, and obtaining first output information from the propagation parameter training model, wherein the first output information is a first propagation parameter;
step S700: and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound, and realizing directional propagation of the microphone according to the second propagation parameter.
Specifically, the propagation parameter training model is a Neural network model, which is a Neural network model in machine learning, and a Neural Network (NN) is a complex Neural network system formed by widely connecting a large number of simple processing units (called neurons), which reflects many basic features of human brain functions, and is a highly complex nonlinear dynamical learning system. Neural network models are described based on mathematical models of neurons. Artificial Neural Networks (ANN), is a description of the first-order properties of the human brain system. Briefly, it is a mathematical model. And inputting the first real-time action characteristic and the first propagation space characteristic into a propagation parameter training model as input information through the training of a large amount of training data to convergence, wherein the first propagation space characteristic is a space limit threshold value propagated by a microphone, and the first real-time action characteristic comprises various actions, action angles, action speeds, action amplitudes and the like of a first teacher user. And obtaining first output information according to the propagation parameter training model, wherein the first output information is a first propagation parameter, and the first propagation parameter is the direction and the sound loudness of sound waves. The first propagation sound is used by the first teacher user, the first propagation sound has different pointing types due to different sound directivity angles, and in order to ensure that the sound propagation parameters are suitable for the type of sound attributes, the first propagation sound is adjusted according to the sound adjustment data and the output sound wave data, namely the first propagation sound is optimized and adjusted, so that the second propagation parameters are obtained. And further realizing directional propagation of the microphone according to the second propagation parameter. The technical effects of scientifically predicting the sound propagation parameters through real-time action characteristics and first space propagation characteristics of a teacher, adjusting the sound propagation parameters through a first propagation sound box, realizing directional propagation of sound and improving teaching quality are achieved.
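The disclosure only states that the propagation parameter training model is a neural network trained to convergence; the following non-limiting sketch assumes a tiny feedforward network, toy training data, NumPy and numerical-gradient descent purely for illustration of the mapping from the two feature groups to a direction and loudness:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(n_in=4, n_hidden=8, n_out=2):
    """Tiny feedforward net: [motion features, space features] -> [direction_deg, loudness_db]."""
    return {"W1": rng.normal(0, 0.1, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.1, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def forward(p, x):
    return np.tanh(x @ p["W1"] + p["b1"]) @ p["W2"] + p["b2"]

def loss(p, x, y):
    return float(np.mean((forward(p, x) - y) ** 2))

def train(p, x, y, lr=0.05, steps=500, eps=1e-5):
    """Plain gradient descent with numerical gradients, kept tiny for clarity."""
    for _ in range(steps):
        for key, w in p.items():
            grad = np.zeros_like(w)
            for idx in np.ndindex(w.shape):
                w[idx] += eps; up = loss(p, x, y)
                w[idx] -= 2 * eps; down = loss(p, x, y)
                w[idx] += eps
                grad[idx] = (up - down) / (2 * eps)
            w -= lr * grad
    return p

# Toy samples: (heading_deg, angular_speed, min_level_db, max_level_db) -> (direction_deg, loudness_db)
x = np.array([[35.0, 12.0, 66.0, 80.0], [-20.0, 3.0, 66.0, 80.0]])
y = np.array([[35.0, 74.0], [-20.0, 70.0]])
model = train(init_model(), x, y)
print(np.round(forward(model, x), 1))
```

A production model would of course be trained on a large data set with a standard framework; the sketch only shows the shape of the input/output mapping.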
Further, as shown in fig. 3, the embodiment of the present application further includes:
step S241: obtaining class information of the first teacher user;
step S242: constructing a class keyword library according to the class information of the lectures, wherein each class in the classes of the lectures correspondingly comprises a class keyword library;
step S243: obtaining a real-time class keyword library according to the real-time class information of the first teacher user;
step S244: and taking the real-time class keyword library as the first keyword library.
Specifically, the class information of the classes taught by the first teacher user is obtained through information collection; it includes the names of the students in each class, their positions (such as subject representative or class officer), and so on. Based on this information, a class keyword library is built for each class taught, with one keyword library per class. The real-time class information of the first teacher user is then obtained, for example, but not limited to, by real-time positioning or real-time entry, such as positioning information shown by an app connected to the microphone (e.g., over Bluetooth), or manual or voice input before class. The real-time class keyword library is obtained from this real-time class information, i.e., the class currently being taught is matched to its keyword library. For example, if the next lesson is for Class 5, the keyword library built from the names of the students in Class 5 is selected. The real-time class keyword library is then taken as the first keyword library, so that the first keyword library always matches the first teacher user's real-time teaching information, which improves the flexibility and accuracy of constructing the directional semantic recognition library.
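A non-limiting sketch of matching the real-time class information to its keyword library (the class identifiers and rosters below are hypothetical):

```python
# Hypothetical per-class keyword libraries keyed by class identifier.
CLASS_KEYWORD_LIBRARIES = {
    "class_5": ["Zhang San", "Li Si", "Wang Wu"],
    "class_6": ["Zhao Liu", "Qian Qi"],
}

def first_keyword_library(realtime_class_id: str) -> list[str]:
    """Match the class currently being taught to its keyword library."""
    try:
        return CLASS_KEYWORD_LIBRARIES[realtime_class_id]
    except KeyError as err:
        raise ValueError(f"no keyword library registered for {realtime_class_id!r}") from err

# The class id could come from the app paired with the microphone, or from manual entry before class.
print(first_keyword_library("class_5"))
```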
Further, as shown in fig. 5, the embodiment of the present application includes:
step S710: obtaining first connection sound information of the first teacher user;
step S720: determining the pointing type of the sound according to the first connection sound information;
step S730: determining a first sound wave diffusion angle according to the sound direction type, wherein the first sound wave diffusion angle is an initial sound wave diffusion angle;
step S740: adjusting the first acoustic divergence angle based on the first propagation parameter.
Specifically, the microphone used by the first teacher user is connected to a matching sound device (speaker) that amplifies and diffuses the sound. The first connection sound information is obtained from the microphone's connection relationship and includes the pointing type of the sound device. Sound devices with different directivity angles have different pointing types; once the pointing type of the sound device is determined, the coverage angle of directional sound propagation can be determined, and thus the first sound wave divergence angle, which is the initial divergence angle. The first propagation parameter is the direction and loudness of the sound wave. To ensure that the propagated signal remains directional and travels within a small included angle, the first sound wave divergence angle is adjusted; in a non-limiting example, diffraction analysis is performed on the first divergence angle using the first propagation parameter, and if the angle is too large it is reduced to limit diffraction. This improves the directional transmission of sound.
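As a non-limiting illustration, the adjustment could be sketched as clamping the initial divergence angle of the sound device's pointing type to the maximum angle allowed by the first propagation parameter; the pointing types, angles and clamping rule below are assumptions made for the example:

```python
# Hypothetical initial divergence angles per speaker pointing type (degrees).
INITIAL_DIVERGENCE_DEG = {"narrow_beam": 15.0, "standard": 30.0, "wide_coverage": 60.0}

def adjust_divergence_angle(pointing_type: str, max_angle_deg_from_params: float) -> float:
    """Start from the pointing type's initial divergence angle and tighten it when the
    propagation parameters call for a narrower beam (to limit diffraction and spill-over)."""
    return min(INITIAL_DIVERGENCE_DEG[pointing_type], max_angle_deg_from_params)

print(adjust_divergence_angle("standard", max_angle_deg_from_params=20.0))  # -> 20.0
```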
Further, as shown in fig. 6, the embodiment of the present application further includes:
step S250: if the first language pointing feature is triggered by the first collected language information, judging whether the first collected language information triggers the first keyword library;
step S260: if the first collected language information triggers the first keyword library, a first real-time distance pointing to a user is obtained;
step S270: generating a first relative distance according to the real-time distance of a first pointing user, wherein the first relative distance is the relative distance between the first teacher user and the first pointing user;
step S280: adding the first relative distance to the first propagation spatial feature.
Specifically, if the first collected language information triggers the first language pointing feature, it is determined whether the first collected language information also triggers the first keyword library, i.e., whether it contains a student's name. A language pointing feature is a feature of language information that has directionality, for example, language that points at a particular student or part of the class. If the first keyword library is triggered, the first real-time distance of the first pointing user is obtained. The first pointing user is a student; the first real-time distance is obtained from the student's seat in the class seating chart, and the first relative distance, namely the relative distance between the first teacher user and the first pointing user, is generated from it. The first relative distance is added to the first propagation space feature, so that the position at the first relative distance can be located precisely within the first propagation space feature. If the first collected language information does not trigger the first keyword library, i.e., no student name is recognized, only rough pointing based on the teacher's orientation and facing direction is needed. Combining precise positioning with rough positioning improves the directivity of the microphone, so that the student notices his or her behaviour in time.
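A non-limiting sketch of generating the first relative distance from a class seating chart (the coordinate convention and seat positions are hypothetical):

```python
import math

# Hypothetical seating chart: (x, y) seat coordinates in metres, podium at the origin.
SEAT_POSITIONS_M = {"Zhang San": (1.5, 3.0), "Li Si": (-2.0, 6.5)}

def first_relative_distance(teacher_pos_m: tuple[float, float], student_name: str) -> float:
    """Euclidean distance between the teacher and the pointed-to student's seat."""
    sx, sy = SEAT_POSITIONS_M[student_name]
    return math.hypot(sx - teacher_pos_m[0], sy - teacher_pos_m[1])

print(round(first_relative_distance((0.0, 0.0), "Li Si"), 2))  # -> 6.8
```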
Further, step S400 of the embodiment of the present application, in which motion features of the first teacher user are captured according to the inertial sensor to obtain a first real-time motion feature if the first language pointing feature is triggered by the first collected language information, further includes:
step S410: obtaining a first sensing action and a second sensing action according to the inertial sensor, wherein the first sensing action is taken as a head orientation action, and the second sensing action is taken as a trunk orientation action;
step S420: generating first identification data by performing orientation feature identification on the first sensing action and the second sensing action, wherein the priority of the first sensing action is greater than that of the second sensing action;
step S430: and obtaining the first real-time action characteristic according to the first identification data.
Specifically, because the inertial sensor can capture turning motions, turning angles, speeds and so on, a first sensing action and a second sensing action are obtained from it: the first sensing action is the head orientation action and the second sensing action is the trunk orientation action. The first teacher user's motion is thus divided into head motion and trunk motion, which fits classroom practice, where a teacher may keep the trunk toward the blackboard or sideways to a student and only turn the head. Orientation feature recognition is performed on the first and second sensing actions by recognizing the angles of the actions with the sensor, and the recognition result is the first identification data. From the first identification data, the teacher's comprehensive real-time motion features, i.e., the first real-time motion feature, are obtained. Capturing the first and second sensing actions further refines the first teacher user's motion and thereby improves the accuracy of motion capture.
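The head-over-trunk priority described above could be sketched as follows; the agreement threshold and field names are illustrative assumptions, not values from the disclosure:

```python
def fuse_orientations(head_heading_deg: float, trunk_heading_deg: float,
                      agreement_threshold_deg: float = 25.0) -> dict:
    """Combine the two sensed actions; the head orientation takes priority, the trunk
    orientation only qualifies how confident the pointing direction is."""
    divergence = abs(head_heading_deg - trunk_heading_deg)
    return {
        "pointing_heading_deg": head_heading_deg,  # head action has the higher priority
        "trunk_agrees": divergence <= agreement_threshold_deg,
        "divergence_deg": divergence,
    }

# Teacher keeps the trunk toward the blackboard and only turns the head toward a student.
print(fuse_orientations(head_heading_deg=80.0, trunk_heading_deg=10.0))
```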
Compared with the prior art, the invention has the following beneficial effects:
1. The classroom language of a first teacher user is collected and entered in real time to obtain first collected language information; a directional semantic recognition library is constructed; the first collected language information is input into the directional semantic recognition library for semantic recognition, and whether a first language pointing feature is triggered is judged; if the first language pointing feature is triggered by the first collected language information, the motion of the first teacher user is captured with the inertial sensor to obtain a first real-time motion feature; a first propagation space feature of the first teacher user is obtained; the first real-time motion feature and the first propagation space feature are input into a propagation parameter training model as input information, and first output information, which is a first propagation parameter, is obtained from the model; and a second propagation parameter is obtained according to the first propagation parameter and the first propagation sound device, and directional propagation of the microphone is realized according to the second propagation parameter. The directional sound propagation method and system for a microphone thus collect the teacher's language and recognize its semantics, which improves the sentence recognition accuracy of directional propagation, and capture the teacher's motion and analyze the propagation space characteristics, which improves the accuracy and scientific soundness of directional sound propagation and thereby improves teaching quality.
2. Because the motion is captured through the first sensing action and the second sensing action, the motion of the first teacher user can be further refined and the teacher's comprehensive real-time motion features can be obtained, which improves the accuracy of motion capture.
Example two
Based on the same inventive concept as the directional sound propagation method for a microphone in the foregoing embodiment, the present invention also provides a directional sound propagation system for a microphone, as shown in fig. 7, wherein the system includes:
the first obtaining unit 11 is used for performing classroom language real-time input and collection on a first teacher user to obtain first collected language information;
a first construction unit 12, wherein the first construction unit 12 is used for constructing a directional semantic recognition library;
a first judging unit 13, where the first judging unit 13 is configured to input the first collected language information into the directional semantic recognition library for semantic recognition, and judge whether to trigger a first language directional feature;
a second obtaining unit 14, where the second obtaining unit 14 is configured to capture motion features of the first teacher user according to an inertial sensor to obtain a first real-time motion feature if the first language pointing feature is triggered by the first collected language information;
a third obtaining unit 15, the third obtaining unit 15 being configured to obtain a first propagation space characteristic of the first instructor user;
a fourth obtaining unit 16, where the fourth obtaining unit 16 is configured to obtain first output information according to a propagation parameter training model by inputting the first real-time motion feature and the first propagation space feature as input information into the propagation parameter training model, where the first output information is a first propagation parameter;
a fifth obtaining unit 17, where the fifth obtaining unit 17 is configured to obtain a second propagation parameter according to the first propagation parameter and the first propagation sound device, and implement directional propagation of a microphone according to the second propagation parameter.
Further, the system further comprises:
the second construction unit is used for constructing a directional keyword library, and the directional keyword library comprises a first keyword library and a second keyword library;
the first generating unit is used for generating a first recognition sentence library according to the directional keyword library;
a sixth obtaining unit, configured to obtain a second recognized sentence library according to the first recognized sentence library, where the second recognized sentence library is a similar sentence of the first recognized sentence library;
and the third construction unit is used for constructing the directional semantic recognition library according to the first recognition sentence library and the second recognition sentence library.
Further, the system further comprises:
a seventh obtaining unit, configured to obtain class information of the first teacher user;
the fourth construction unit is used for constructing a class keyword library according to the teaching class information, wherein each class in the teaching classes correspondingly comprises a class keyword library;
an eighth obtaining unit, configured to obtain a real-time class keyword library according to the real-time class information of the first teacher user;
the first execution unit is used for taking the real-time class keyword library as the first keyword library.
Further, the system further comprises:
a ninth obtaining unit configured to obtain first geometric space information of the first teacher user;
a tenth obtaining unit, configured to obtain feature point distribution information according to the first geometric spatial information, where the feature point distribution information includes seat distribution information and podium distribution information;
a second execution unit, configured to determine a spatial limitation threshold for microphone propagation according to the feature point distribution information;
a third execution unit, configured to take the spatial limitation threshold of microphone propagation as the first propagation space feature.
Further, the system further comprises:
an eleventh obtaining unit, configured to obtain first connection microphone information of the first teacher user;
a fourth execution unit, configured to determine a microphone pointing type according to the first microphone connection information;
a fifth execution unit, configured to determine a first sound wave divergence angle according to the microphone pointing type, where the first sound wave divergence angle is an initial sound wave divergence angle;
a sixth execution unit configured to adjust the first sound wave divergence angle based on the first propagation parameter.
Further, the system further comprises:
a second judging unit, configured to judge whether the first collected language information triggers the first keyword library if the first collected language information triggers the first language directional feature;
a twelfth obtaining unit, configured to obtain a first real-time distance of a first pointing user if the first keyword library is triggered by the first collected language information;
a second generating unit, configured to generate a first relative distance according to a real-time distance of a first pointing user, where the first relative distance is a relative distance between the first instructor user and the first pointing user;
a seventh execution unit to add the first relative distance to the first propagation space feature.
Further, the system further comprises:
a thirteenth obtaining unit configured to obtain a first sensing motion and a second sensing motion according to the inertial sensor, wherein the first sensing motion is a head-facing motion, and the second sensing motion is a trunk-facing motion;
a third generation unit, configured to generate first identification data by performing orientation feature identification on the first sensing motion and the second sensing motion, where a priority of the first sensing motion is greater than a priority of the second sensing motion;
a fourteenth obtaining unit, configured to obtain the first real-time action feature according to the first identification data.
Various modifications and specific examples of the directional sound propagation method for a microphone in the first embodiment of fig. 1 are also applicable to the directional sound propagation system for a microphone of the present embodiment. From the foregoing detailed description of the directional sound propagation method for a microphone, those skilled in the art can clearly know how the directional sound propagation system for a microphone of this embodiment is implemented, so for brevity of the description, details are not repeated here.
EXAMPLE III
The electronic apparatus of the embodiment of the present application is described below with reference to fig. 8.
Fig. 8 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept as the directional sound propagation method for a microphone in the foregoing embodiments, the present invention also provides a directional sound propagation system for a microphone on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the directional sound propagation methods for a microphone described above.
Where in fig. 8 a bus architecture (represented by bus 300), bus 300 may include any number of interconnected buses and bridges, bus 300 linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium. The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The embodiment of the application provides a directional sound propagation method for a microphone, and the method comprises the following steps: obtaining first collected language information by collecting and entering the classroom language of a first teacher user in real time; constructing a directional semantic recognition library; inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether a first language pointing feature is triggered; if the first language pointing feature is triggered by the first collected language information, capturing motion features of the first teacher user with the inertial sensor to obtain a first real-time motion feature; obtaining a first propagation space feature of the first teacher user; inputting the first real-time motion feature and the first propagation space feature into a propagation parameter training model as input information, and obtaining first output information from the propagation parameter training model, wherein the first output information is a first propagation parameter; and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound device, and realizing directional propagation of the microphone according to the second propagation parameter.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A directional sound propagation method for a microphone, the method being applied to a microphone sound system, the system being communicatively connected to an inertial sensor, the method comprising:
acquiring first acquired language information by inputting and acquiring classroom languages of a first teacher user in real time;
constructing a directional semantic recognition library;
inputting the first collected language information into the directional semantic recognition library for semantic recognition, and judging whether a first language directional feature is triggered;
if the first language directional feature is triggered by the first collected language information, capturing action features of the first teacher user according to the inertial sensor to obtain a first real-time action feature;
obtaining a first propagation space feature of the first teacher user;
inputting the first real-time action feature and the first propagation space feature as input information into a propagation parameter training model, and obtaining first output information from the propagation parameter training model, wherein the first output information is a first propagation parameter;
and obtaining a second propagation parameter according to the first propagation parameter and the first propagation sound, and realizing directional propagation of the microphone according to the second propagation parameter.
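Illustrative sketch (not part of the claims): a minimal, self-contained rendering of the claim 1 flow in Python. Every function name, phrase list, threshold, and the stand-in "training model" below is an assumption made for illustration only; the claims do not specify an implementation.

```python
# Hypothetical sketch of the claim 1 pipeline; all names and numbers are invented.
from dataclasses import dataclass


@dataclass
class PropagationParams:
    azimuth_deg: float  # horizontal steering angle for the microphone output
    gain: float         # relative output gain


# Stand-in for the directional semantic recognition library (claims 1-2).
DIRECTIONAL_PHRASES = {"please answer", "back row", "by the window"}


def triggers_directional_feature(text: str) -> bool:
    """Rough stand-in for semantic recognition against the directional library."""
    return any(phrase in text.lower() for phrase in DIRECTIONAL_PHRASES)


def first_propagation_params(action_yaw_deg: float, room_depth_m: float) -> PropagationParams:
    """Stand-in for the propagation parameter training model."""
    gain = min(1.0, 0.5 + room_depth_m / 20.0)  # deeper rooms get more gain
    return PropagationParams(azimuth_deg=action_yaw_deg, gain=gain)


def second_propagation_params(first: PropagationParams, source_level_db: float) -> PropagationParams:
    """Combine the first propagation parameter with the first propagation sound."""
    return PropagationParams(first.azimuth_deg, first.gain * source_level_db / 60.0)


if __name__ == "__main__":
    text = "The student in the back row, please answer this question."
    if triggers_directional_feature(text):
        p1 = first_propagation_params(action_yaw_deg=35.0, room_depth_m=12.0)  # action + space features
        p2 = second_propagation_params(p1, source_level_db=65.0)               # second propagation parameter
        print(f"steer to {p2.azimuth_deg:.0f} deg, gain {p2.gain:.2f}")
```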
2. The method of claim 1, wherein the constructing of the directional semantic recognition library further comprises:
constructing a directional keyword library, wherein the directional keyword library comprises a first keyword library and a second keyword library;
generating a first recognition sentence library according to the directional keyword library;
obtaining a second recognition sentence library according to the first recognition sentence library, wherein the second recognition sentence library comprises sentences with meanings similar to those in the first recognition sentence library;
and constructing the directional semantic recognition library according to the first recognition sentence library and the second recognition sentence library.
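Illustrative sketch (not part of the claims): one way the directional semantic recognition library of claim 2 could be assembled from a keyword library, a first recognition sentence library, and synonymous variants. The keywords, sentence templates, and synonym table are invented examples.

```python
# Hypothetical construction of the directional semantic recognition library (claim 2).
from itertools import product

first_keywords = {"answer", "read aloud"}        # e.g. class-related action keywords
second_keywords = {"back row", "by the window"}  # e.g. location keywords


def build_recognition_library() -> set[str]:
    # First recognition sentence library: direct keyword combinations.
    first_sentences = {f"{place} student, please {action}"
                       for action, place in product(first_keywords, second_keywords)}
    # Second recognition sentence library: simple near-synonym variants.
    synonyms = {"please": "could you", "student": "classmate"}
    second_sentences = {s.replace(old, new)
                        for s in first_sentences
                        for old, new in synonyms.items()}
    # Directional semantic recognition library: union of both sentence libraries.
    return first_sentences | second_sentences


if __name__ == "__main__":
    for sentence in sorted(build_recognition_library()):
        print(sentence)
```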
3. The method of claim 2, wherein the method further comprises:
obtaining lecture class information of the first teacher user;
constructing class keyword libraries according to the lecture class information, wherein each lecture class has a corresponding class keyword library;
obtaining a real-time class keyword library according to the real-time class information of the first teacher user;
and taking the real-time class keyword library as the first keyword library.
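Illustrative sketch (not part of the claims): selecting the first keyword library from per-class keyword libraries using the teacher's real-time class, as in claim 3. The subjects and keywords are placeholders.

```python
# Hypothetical per-class keyword libraries keyed by the taught class (claim 3).
CLASS_KEYWORD_LIBRARIES = {
    "english": {"read the passage", "translate", "spell"},
    "physics": {"derive", "measure", "plot the curve"},
    "music":   {"sing", "clap the rhythm"},
}


def first_keyword_library(real_time_class: str) -> set[str]:
    """The real-time class keyword library is used as the first keyword library."""
    return CLASS_KEYWORD_LIBRARIES.get(real_time_class.lower(), set())


if __name__ == "__main__":
    print(first_keyword_library("Physics"))
```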
4. The method of claim 1, wherein, after the first propagation space feature of the first teacher user is obtained, the method further comprises:
obtaining first geometric space information of the first teacher user;
obtaining feature point distribution information according to the first geometric space information, wherein the feature point distribution information comprises seat distribution information and podium distribution information;
determining a spatial restriction threshold for microphone propagation according to the feature point distribution information;
taking the spatial restriction threshold for microphone propagation as the first propagation space feature.
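Illustrative sketch (not part of the claims): deriving a spatial restriction threshold from the feature point distribution of claim 4, here taken as the largest podium-to-seat distance. The geometry and the max-distance rule are assumptions.

```python
# Hypothetical spatial restriction threshold from seat and podium positions (claim 4).
import math


def spatial_restriction_threshold(podium_xy: tuple[float, float],
                                  seat_xys: list[tuple[float, float]]) -> float:
    """Largest podium-to-seat distance, used as the propagation limit in metres."""
    px, py = podium_xy
    return max(math.hypot(sx - px, sy - py) for sx, sy in seat_xys)


if __name__ == "__main__":
    podium = (0.0, 0.0)
    seats = [(2.0, 3.0), (4.0, 8.0), (-3.0, 9.5)]
    print(f"propagation limited to {spatial_restriction_threshold(podium, seats):.1f} m")
```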
5. The method of claim 1, wherein the method further comprises:
obtaining first connected microphone information of the first teacher user;
determining a microphone pointing type according to the first connected microphone information;
determining a first sound wave divergence angle according to the microphone pointing type, wherein the first sound wave divergence angle is an initial sound wave divergence angle;
adjusting the first sound wave divergence angle according to the first propagation parameter.
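Illustrative sketch (not part of the claims): mapping a microphone pointing type to an initial sound wave divergence angle and adjusting it with the first propagation parameter, as in claim 5. The angle table and the scaling rule are assumptions; the claims state only that an adjustment happens.

```python
# Hypothetical initial divergence angles per microphone pointing type (claim 5).
INITIAL_DIVERGENCE_DEG = {
    "cardioid": 130.0,
    "supercardioid": 115.0,
    "shotgun": 90.0,
}


def adjusted_divergence(pointing_type: str, first_propagation_param: float) -> float:
    """Scale the initial divergence angle; a parameter < 1 narrows the beam."""
    base = INITIAL_DIVERGENCE_DEG.get(pointing_type, 130.0)
    return max(30.0, min(180.0, base * first_propagation_param))


if __name__ == "__main__":
    print(adjusted_divergence("shotgun", 0.8))  # narrowed beam for a distant target
```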
6. The method of claim 2, wherein the method further comprises:
if the first language directional feature is triggered by the first collected language information, judging whether the first collected language information triggers the first keyword library;
if the first collected language information triggers the first keyword library, obtaining a real-time distance of a first pointed user;
generating a first relative distance according to the real-time distance of the first pointed user, wherein the first relative distance is the relative distance between the first teacher user and the first pointed user;
adding the first relative distance to the first propagation space feature.
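Illustrative sketch (not part of the claims): computing the first relative distance of claim 6 and appending it to the propagation space feature. The coordinates and the feature layout are invented.

```python
# Hypothetical relative-distance step (claim 6).
import math


def relative_distance(teacher_xy: tuple[float, float],
                      pointed_user_xy: tuple[float, float]) -> float:
    tx, ty = teacher_xy
    ux, uy = pointed_user_xy
    return math.hypot(ux - tx, uy - ty)


if __name__ == "__main__":
    propagation_space_feature = {"restriction_threshold_m": 10.5}  # e.g. from claim 4
    d = relative_distance((0.0, 0.0), (4.0, 6.0))
    propagation_space_feature["first_relative_distance_m"] = d     # claim 6 addition
    print(propagation_space_feature)
```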
7. The method of claim 1, wherein, if the first collected language information triggers the first language directional feature, the capturing of action features of the first teacher user according to the inertial sensor to obtain the first real-time action feature further comprises:
obtaining a first sensing action and a second sensing action according to the inertial sensor, wherein the first sensing action is a head orientation action and the second sensing action is a torso orientation action;
generating first identification data by performing orientation feature recognition on the first sensing action and the second sensing action, wherein the priority of the first sensing action is higher than that of the second sensing action;
and obtaining the first real-time action feature according to the first identification data.
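Illustrative sketch (not part of the claims): fusing the two sensing actions of claim 7 with head orientation taking priority over torso orientation. The fallback rule is an assumption; the claims state only the priority ordering.

```python
# Hypothetical fusion of head and torso orientation readings (claim 7).
from typing import Optional


def fused_orientation(head_yaw_deg: Optional[float],
                      torso_yaw_deg: Optional[float]) -> Optional[float]:
    """First real-time action feature: the head reading wins when available."""
    if head_yaw_deg is not None:
        return head_yaw_deg
    return torso_yaw_deg


if __name__ == "__main__":
    print(fused_orientation(28.0, 10.0))   # -> 28.0 (head orientation has priority)
    print(fused_orientation(None, 10.0))   # -> 10.0 (fall back to torso orientation)
```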
8. A directional sound propagation system for a microphone, the system comprising:
a first obtaining unit, configured to capture, in real time, classroom speech of a first teacher user to obtain first collected language information;
a first construction unit, configured to construct a directional semantic recognition library;
a first judgment unit, configured to input the first collected language information into the directional semantic recognition library for semantic recognition and judge whether a first language directional feature is triggered;
a second obtaining unit, configured to, if the first language directional feature is triggered by the first collected language information, capture action features of the first teacher user according to an inertial sensor to obtain a first real-time action feature;
a third obtaining unit, configured to obtain a first propagation space feature of the first teacher user;
a fourth obtaining unit, configured to input the first real-time action feature and the first propagation space feature as input information into a propagation parameter training model and obtain first output information, wherein the first output information is a first propagation parameter;
and a fifth obtaining unit, configured to obtain a second propagation parameter according to the first propagation parameter and the first propagation sound, and realize directional propagation of the microphone according to the second propagation parameter.
9. A directional sound propagation system for a microphone, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1-7 when executing the program.
CN202111370975.XA 2021-11-18 2021-11-18 Directional sound propagation method and system for microphone Active CN113993034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111370975.XA CN113993034B (en) 2021-11-18 2021-11-18 Directional sound propagation method and system for microphone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111370975.XA CN113993034B (en) 2021-11-18 2021-11-18 Directional sound propagation method and system for microphone

Publications (2)

Publication Number Publication Date
CN113993034A true CN113993034A (en) 2022-01-28
CN113993034B CN113993034B (en) 2023-04-07

Family

ID=79749356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111370975.XA Active CN113993034B (en) 2021-11-18 2021-11-18 Directional sound propagation method and system for microphone

Country Status (1)

Country Link
CN (1) CN113993034B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060190253A1 (en) * 2005-02-23 2006-08-24 At&T Corp. Unsupervised and active learning in automatic speech recognition for call classification
US20120316876A1 (en) * 2011-06-10 2012-12-13 Seokbok Jang Display Device, Method for Thereof and Voice Recognition System
US20150346845A1 (en) * 2014-06-03 2015-12-03 Harman International Industries, Incorporated Hands free device with directional interface
CN105244023A (en) * 2015-11-09 2016-01-13 上海语知义信息技术有限公司 System and method for reminding teacher emotion in classroom teaching
CN108320760A (en) * 2018-01-05 2018-07-24 广东小天才科技有限公司 The class offerings way of recording, device, equipment and storage medium based on microphone
CN110991381A (en) * 2019-12-12 2020-04-10 山东大学 Real-time classroom student state analysis and indication reminding system and method based on behavior and voice intelligent recognition

Also Published As

Publication number Publication date
CN113993034B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
EP3593958B1 (en) Data processing method and nursing robot device
Alaa et al. Assessment and ranking framework for the English skills of pre-service teachers based on fuzzy Delphi and TOPSIS methods
KR102114207B1 (en) Learning Support System And Method Using Augmented Reality And Virtual reality based on Artificial Intelligence
CN107025816A (en) A kind of interactive learning aid system of English
CN112908355B (en) System and method for quantitatively evaluating teaching skills of teacher and teacher
CN106297441A (en) A kind of art teaching system
Sidgi et al. The usefulness of automatic speech recognition (ASR) eyespeak software in improving Iraqi EFL students’ pronunciation
Howard et al. Using data mining and machine learning approaches to observe technology-enhanced learning
CN111428686A (en) Student interest preference evaluation method, device and system
CN113993034B (en) Directional sound propagation method and system for microphone
KR102355960B1 (en) System for providing qualification verification based korean language training service
CN106056503A (en) Intelligent music teaching platform and application method thereof
JP7096626B2 (en) Information extraction device
CN116362587A (en) College classroom teaching evaluation method and system based on artificial intelligence
JP2022075661A (en) Information extraction apparatus
TWM600908U (en) Learning state improvement management system
Murad et al. CHR vs. Human‐Computer Interaction Design for Emerging Technologies: Two Case Studies
Zhang Integration of English Teaching and Internet Distance Education Based on Computer-aided Teaching
WO2022091230A1 (en) Information extraction device
RU2020100190A (en) SOFTWARE AND HARDWARE OF THE TRAINING SYSTEM WITH AUTOMATIC EVALUATION OF THE STUDENT'S EMOTIONS
TWI731577B (en) Learning state improvement management system
Du et al. Application of multiple difference feature network and speech recognition in dance training system
CN111461153A (en) Crowd characteristic deep learning method
Ding Application of sensor network and speech recognition system in online english teaching
Noor et al. VRFlex: Towards the Design of a Virtual Reality Hyflex Class Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant