CN108305629B - Scene learning content acquisition method and device, learning equipment and storage medium - Google Patents

Scene learning content acquisition method and device, learning equipment and storage medium

Info

Publication number
CN108305629B
CN108305629B CN201711421836.9A
Authority
CN
China
Prior art keywords
scene
user
vocabulary
learning content
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711421836.9A
Other languages
Chinese (zh)
Other versions
CN108305629A (en)
Inventor
吴迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201711421836.9A priority Critical patent/CN108305629B/en
Publication of CN108305629A publication Critical patent/CN108305629A/en
Application granted granted Critical
Publication of CN108305629B publication Critical patent/CN108305629B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical field of computers and provides a scene learning content acquisition method, apparatus, learning device, and storage medium. The method comprises the following steps: when a microphone device receives a scene learning content acquisition request input by a user, it collects and recognizes sound in the scene where the user is located, and extracts vocabulary from the recognition result to obtain scene vocabulary corresponding to that scene; the scene vocabulary is sent to the user mobile device, which generates scene learning content corresponding to it. This effectively improves how closely the learning content matches the user's scene, provides learning resources to the user in a targeted manner, effectively improves the user's learning efficiency, and improves the user experience.

Description

Scene learning content acquisition method and device, learning equipment and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a scene learning content acquisition method and device, learning equipment and a storage medium.
Background
As the pace of life accelerates, it is no longer only students who need to acquire knowledge: office workers who have already entered the workforce also need to recharge themselves by learning, and under the influence of globalization, learning a new language has become the choice of many people.
However, even with more convenient learning methods and richer learning resources, learning a new language is still not easy. Whether in a physical classroom or an online classroom, course content and scheduling are poorly tailored to the individual user, and the content a user learns in class is not close to his or her living environment. As a result, even after mastering a certain vocabulary and reading ability, the user cannot apply the learned knowledge in real life.
Disclosure of Invention
The invention aims to provide a scene learning content acquisition method, apparatus, learning device, and storage medium, so as to solve the problems in the prior art that learning content recommended to a user lacks pertinence to that user and is not close to the user's real life, resulting in low learning efficiency and poor user experience.
In one aspect, the present invention provides a scene learning content acquisition method, including the following steps:
when a microphone device receives a scene learning content acquisition request input by a user, collecting and recognizing sound in the scene where the user is located;
extracting, by the microphone device, vocabulary from the recognition result of the sound to obtain scene vocabulary corresponding to the scene where the user is located, and sending the scene vocabulary to a user mobile device; and
generating, by the user mobile device, scene learning content corresponding to the scene vocabulary when the scene vocabulary is received.
In another aspect, the present invention provides a scene learning content acquisition apparatus, including:
a collection and recognition unit, configured to collect and recognize sound in the scene where a user is located when a microphone device receives a scene learning content acquisition request input by the user;
a vocabulary extraction unit, configured to extract, by the microphone device, vocabulary from the recognition result of the sound to obtain scene vocabulary corresponding to the scene where the user is located, and to send the scene vocabulary to a user mobile device; and
a learning content generation unit, configured to generate scene learning content corresponding to the scene vocabulary when the user mobile device receives the scene vocabulary.
In another aspect, the present invention further provides a learning device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the scene learning content acquisition method when executing the computer program.
In another aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the scene learning content acquisition method.
In the invention, when the microphone device receives a scene learning content acquisition request, it collects and recognizes sound in the scene where the user is located and extracts vocabulary from the recognition result to obtain scene vocabulary corresponding to that scene. The scene vocabulary is sent to the user mobile device, which generates scene learning content corresponding to it. This effectively improves how closely the learning content matches the user's scene, lets the user learn in combination with a real scene, improves the practicality of the learning content, effectively improves the user's learning efficiency, and improves the user experience.
Drawings
Fig. 1 is a flowchart illustrating an implementation of a scene learning content obtaining method according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of a scene learning content obtaining method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a scene learning content acquiring apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a scene learning content acquiring apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a preferred structure of a scene learning content acquiring apparatus according to a fourth embodiment of the present invention; and
fig. 6 is a schematic structural diagram of a learning device according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
fig. 1 shows an implementation flow of a scene learning content obtaining method provided in a first embodiment of the present invention, and for convenience of description, only parts related to the first embodiment of the present invention are shown, which are detailed as follows:
in step S101, when the microphone apparatus receives a scene learning content acquisition request input by a user, sound in a scene where the user is located is collected and identified.
In the embodiment of the invention, the microphone device can be a wireless microphone, or a mobile device that is convenient for the user to carry and is provided with a microphone, such as a smart watch, mobile phone, tablet computer, or learning machine. When a user wants to learn language knowledge related to the scene where he or she is located, for example English related to ordering and paying when dining out, or English related to bargaining when shopping, the user can send a scene learning content acquisition request to the microphone device.
Preferably, reminder times for scene learning content acquisition are preset, and when the current time is detected to be a reminder time, the user is reminded to send a scene learning content acquisition request, which effectively improves the user experience. For example, the user may set the times of the three daily meals as reminder times so as to acquire scene vocabulary while eating, or set a time one hour before a planned shopping trip so as to acquire scene vocabulary while shopping.
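The reminder-time check described above can be sketched as follows. This is a minimal illustration: the specific times, the one-minute tolerance, and the function names are assumptions, since the patent does not specify an implementation.

```python
from datetime import time

# Hypothetical reminder times the user has configured: three daily meals
# and one hour before a planned shopping trip.
reminder_times = [time(7, 30), time(12, 0), time(18, 30), time(14, 0)]

def should_remind(now, reminders, tolerance_minutes=1):
    """Return True if the current time matches any preset reminder time."""
    now_minutes = now.hour * 60 + now.minute
    for t in reminders:
        if abs(now_minutes - (t.hour * 60 + t.minute)) <= tolerance_minutes:
            return True
    return False

print(should_remind(time(12, 0), reminder_times))  # True at lunchtime
print(should_remind(time(9, 15), reminder_times))  # False otherwise
```

In practice the device would run this check periodically and, on a match, prompt the user to trigger the acquisition request.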
In the embodiment of the invention, when receiving a scene learning content acquisition request from the user, the microphone device collects the sound in the scene where the user is located and recognizes the collected sound, for example through a preset speech recognition algorithm or a speech recognition chip.
Preferably, since many sounds may exist in a daily scene, such as machine running noise, footsteps, and the collision of objects, the collected sound is denoised before being recognized, so as to improve the accuracy of subsequent recognition.
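As a rough illustration of denoising before recognition, the following sketch applies a simple energy-based noise gate to raw PCM samples: frames whose RMS energy falls below a multiple of the estimated noise floor are zeroed out. The frame size and threshold ratio are assumptions, and a real implementation would more likely use spectral methods; this only illustrates the idea.

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def noise_gate(samples, frame_size=160, floor_ratio=2.0):
    """Zero out frames whose energy is close to the estimated noise floor."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    energies = [rms(f) for f in frames if f]
    noise_floor = min(energies)  # quietest frame approximates the noise floor
    out = []
    for f in frames:
        out.extend(f if rms(f) > noise_floor * floor_ratio else [0] * len(f))
    return out

samples = [1] * 160 + [100] * 160  # one quiet frame, one loud frame
print(noise_gate(samples)[:3])     # quiet frame is zeroed -> [0, 0, 0]
```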
In step S102, the microphone device extracts words from the recognition result of the sound to obtain scene words corresponding to the scene where the user is located, and sends the scene words to the mobile device of the user.
In the embodiment of the invention, user vocabulary in a preset user dictionary library can be matched against the recognition result of the sound; the user vocabulary appearing in the recognition result is obtained from this matching, and user vocabulary whose frequency of occurrence in the recognition result exceeds a preset frequency threshold is set as the scene vocabulary corresponding to the scene where the user is located. Such scene vocabulary can be considered representative of the user's scene; for example, words such as 'good looking', 'discount', and 'fit' appear frequently in a shopping scene and clearly embody the characteristics of that scene.
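The dictionary-matching and frequency-threshold step can be sketched as follows. The dictionary contents, the threshold value, and the whitespace tokenization are assumptions for illustration; the patent does not fix any of them.

```python
from collections import Counter

def extract_scene_vocabulary(recognized_text, user_dictionary, freq_threshold=3):
    """Count dictionary words in the recognition result; keep frequent ones."""
    counts = Counter(w for w in recognized_text.split() if w in user_dictionary)
    return {w for w, n in counts.items() if n >= freq_threshold}

user_dict = {"discount", "fit", "size", "price"}
text = "any discount today discount big discount this fit is nice fit"
print(extract_scene_vocabulary(text, user_dict))  # {'discount'}
```

Here 'discount' appears three times and meets the threshold, while 'fit' appears only twice and is dropped.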
In the embodiment of the present invention, the user mobile device may be a mobile phone, tablet computer, learning machine, or the like. Preferably, after the microphone device extracts vocabulary from the recognition result of the sound, the user mobile device detects a wireless signal (a Bluetooth or Wi-Fi signal) of the microphone device; when the wireless signal is detected, the user mobile device sends a wireless connection request to the microphone device, and after the wireless connection succeeds, it sends a scene vocabulary acquisition request to the microphone device, thereby effectively improving the intelligence and efficiency of scene learning content acquisition.
Preferably, a scene vocabulary library corresponding to the scene where the user is located is established, the scene vocabulary extracted from the recognition result is stored in this library, and the scene vocabulary in the library is sent to the user mobile device. Establishing different scene vocabulary libraries for different scenes effectively improves the accuracy of subsequent scene learning content acquisition.
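A per-scene vocabulary library like the one described can be sketched as a mapping from scene name to a set of words. The class and method names are hypothetical; the patent only describes the behavior.

```python
class SceneVocabularyLibrary:
    """Stores extracted scene vocabulary, keyed by scene."""

    def __init__(self):
        self._scenes = {}

    def add(self, scene, words):
        """Add extracted words to the library for the given scene."""
        self._scenes.setdefault(scene, set()).update(words)

    def get(self, scene):
        """Return the vocabulary recorded for a scene (empty if unknown)."""
        return self._scenes.get(scene, set())

lib = SceneVocabularyLibrary()
lib.add("shopping", {"discount", "fit"})
lib.add("dining", {"menu", "order"})
print(sorted(lib.get("shopping")))  # ['discount', 'fit']
```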
Preferably, when the user dictionary library is constructed, the user's personal information, such as age, grade, occupation, hobbies, and language learning stage, is acquired, and words suitable for the user are collected according to this information to form the user dictionary library, which effectively improves the user experience. For example, when the user is a pupil who has just begun to learn English, simple words of interest to a pupil are collected, such as the names of various toys and fruits.
In step S103, the user mobile device generates scene learning content corresponding to the scene vocabulary when receiving the scene vocabulary.
In the embodiment of the invention, when receiving the scene vocabulary sent by the microphone device, the user mobile device uses the scene vocabulary as keywords, searches a preset language learning library or language learning platform for learning content associated with the keywords, and sets this learning content as the scene learning content corresponding to the scene vocabulary. Preferably, the learning content associated with a keyword includes the corresponding vocabulary translation, grammar and sentence patterns, scene dialogues, language courses, and the like, thereby providing rich learning resources for the user in a targeted manner.
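The keyword lookup against a language learning library might look like the following sketch. The library structure and its entries are invented for illustration; a real system would query a much larger store or an online platform.

```python
# Hypothetical in-memory learning library keyed by vocabulary word.
learning_library = {
    "discount": {
        "translation": "a reduction in price",
        "sentence": "Is there a discount on this item?",
    },
}

def find_learning_content(keywords, library):
    """Look up learning content for each scene-vocabulary keyword."""
    return {k: library[k] for k in keywords if k in library}

result = find_learning_content({"discount", "unknown"}, learning_library)
print(sorted(result))  # ['discount']; unmatched keywords are skipped
```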
Optionally, when the user mobile device and the microphone device are the same device (for example, the user mobile device and the microphone device are the same tablet computer), the scene learning content corresponding to the scene vocabulary is generated by the microphone device.
In the embodiment of the invention, the microphone device collects sound in the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from it, and sends the scene vocabulary to the user mobile device, which generates the corresponding scene learning content. Practical and rich scene learning content is thus provided to the user in combination with the user's scene, effectively improving how closely the learning content matches the scene, helping the user improve learning efficiency, and improving the user experience.
Example two:
fig. 2 shows an implementation flow of a scene learning content obtaining method provided by the second embodiment of the present invention, and for convenience of description, only the parts related to the second embodiment of the present invention are shown, which are detailed as follows:
in S201, when the microphone device receives a scene learning content acquisition request input by a user, sound in a scene where the user is located is collected and identified.
In the embodiment of the invention, when a user wants to learn language knowledge related to the scene where he or she is located, for example English related to ordering and paying when dining out, or English related to bargaining when shopping, the user can send a scene learning content acquisition request to the microphone device. When receiving such a request, the microphone device collects sound in the scene where the user is located and recognizes the collected sound.
In step S202, the microphone device extracts words from the recognition result of the sound to obtain scene words corresponding to the scene where the user is located, and sends the scene words to the mobile device of the user.
In the embodiment of the invention, user vocabulary in a preset user dictionary library can be matched against the recognition result of the sound; the user vocabulary appearing in the recognition result is obtained from this matching, and user vocabulary whose frequency of occurrence in the recognition result exceeds a preset frequency threshold is set as the scene vocabulary corresponding to the scene where the user is located. Such scene vocabulary can be considered representative of the user's scene; for example, words such as 'good looking', 'discount', and 'fit' appear frequently in a shopping scene and clearly embody the characteristics of that scene.
In the embodiment of the present invention, the user mobile device may be a mobile phone, tablet computer, learning machine, or the like. Preferably, after the microphone device extracts vocabulary from the recognition result of the sound, the user mobile device detects a wireless signal (a Bluetooth or Wi-Fi signal) of the microphone device; when the wireless signal is detected, the user mobile device may send a wireless connection request to the microphone device, and after the wireless connection succeeds, it sends a scene vocabulary acquisition request to the microphone device, thereby effectively improving the intelligence and efficiency of scene learning content acquisition.
In step S203, the user mobile device identifies a scene type corresponding to the scene vocabulary according to the scene vocabulary and a preset typical scene vocabulary library.
In the embodiment of the present invention, the user mobile device may match the scene vocabulary with the vocabulary in a preset typical scene vocabulary library to identify the scene type corresponding to the scene vocabulary. The scene type may be a dining scene, a shopping scene, a learning scene, or the like, and the typical scene vocabulary library stores vocabulary that users commonly say in different scene types, such as 'how much is it' and 'do you have a larger size' in a shopping scene.
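One simple way to implement the matching against a typical scene vocabulary library is to score each scene type by vocabulary overlap and pick the best, as in this sketch. The scene names and word lists are illustrative assumptions; the patent does not prescribe a particular matching algorithm.

```python
# Hypothetical typical-scene vocabulary library: scene type -> typical words.
typical_scenes = {
    "shopping": {"discount", "size", "price", "fit"},
    "dining":   {"menu", "order", "bill", "taste"},
}

def identify_scene_type(scene_words, typical):
    """Pick the scene type whose typical vocabulary overlaps most."""
    best, best_overlap = None, 0
    for scene, words in typical.items():
        overlap = len(scene_words & words)
        if overlap > best_overlap:
            best, best_overlap = scene, overlap
    return best  # None if no scene type matches at all

print(identify_scene_type({"discount", "fit", "menu"}, typical_scenes))  # shopping
```

Here the extracted vocabulary shares two words with the shopping scene but only one with the dining scene, so the shopping scene type is chosen.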
In step S204, learning content corresponding to the scene type is searched for in a preset language learning library or on a language learning platform, and the learning content corresponding to the scene type is set as the scene learning content corresponding to the scene vocabulary.
In the embodiment of the invention, after determining the scene type corresponding to the scene vocabulary, the user mobile device can use the scene type as a keyword to search the language learning library or language learning platform for learning content corresponding to that scene type, and set it as the scene learning content corresponding to the scene vocabulary. Preferably, the learning content corresponding to a scene type includes the corresponding vocabulary translations, word pronunciations, scene dialogues, grammar and sentence patterns, language courses, and the like, thereby providing rich learning resources for the user in a targeted manner.
Optionally, when the user mobile device and the microphone device are the same device, the microphone device generates scene learning content corresponding to the scene vocabulary.
In the embodiment of the invention, the microphone device collects sound in the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from it, and sends the scene vocabulary to the user mobile device, which generates the corresponding scene learning content. Practical and rich scene learning content is thus provided to the user in combination with the user's scene, effectively improving how closely the learning content matches the scene, helping the user improve learning efficiency, and improving the user experience.
Example three:
fig. 3 shows a structure of a scene learning content acquiring apparatus according to a third embodiment of the present invention, and for convenience of description, only a part related to the third embodiment of the present invention is shown, where the structure includes:
the collecting and identifying unit 31 is configured to collect and identify sound in a scene where the user is located when the microphone device receives a scene learning content acquisition request input by the user.
In the embodiment of the invention, the microphone device can be a wireless microphone, or a mobile device that is convenient for the user to carry and is provided with a microphone, such as a smart watch, mobile phone, tablet computer, or learning machine. When a user wants to learn language knowledge related to the scene where he or she is located, for example English related to ordering and paying when dining out, or English related to bargaining when shopping, the user can send a scene learning content acquisition request to the microphone device.
Preferably, reminder times for scene learning content acquisition are preset, and when the current time is detected to be a reminder time, the user is reminded to send a scene learning content acquisition request, which effectively improves the user experience. For example, the user may set the times of the three daily meals as reminder times so as to acquire scene vocabulary while eating, or set a time one hour before a planned shopping trip so as to acquire scene vocabulary while shopping.
In the embodiment of the invention, when receiving a scene learning content acquisition request from the user, the microphone device collects the sound in the scene where the user is located and recognizes the collected sound, for example through a preset speech recognition algorithm or a speech recognition chip.
Preferably, since many sounds may exist in a daily scene, such as machine running noise, footsteps, and the collision of objects, the collected sound is denoised before being recognized, so as to improve the accuracy of subsequent recognition.
The vocabulary extraction unit 32 is used by the microphone device to extract vocabulary from the recognition result of the sound to obtain scene vocabulary corresponding to the scene where the user is located, and to send the scene vocabulary to the user mobile device.
In the embodiment of the invention, user vocabulary in a preset user dictionary library can be matched against the recognition result of the sound; the user vocabulary appearing in the recognition result is obtained from this matching, and user vocabulary whose frequency of occurrence in the recognition result exceeds a preset frequency threshold is set as the scene vocabulary corresponding to the scene where the user is located. Such scene vocabulary can be considered representative of the user's scene; for example, words such as 'good looking', 'discount', and 'fit' appear frequently in a shopping scene and clearly embody the characteristics of that scene.
In the embodiment of the present invention, the user mobile device may be a mobile phone, tablet computer, learning machine, or the like. Preferably, after the microphone device extracts vocabulary from the recognition result of the sound, the user mobile device detects a wireless signal (a Bluetooth or Wi-Fi signal) of the microphone device; when the wireless signal is detected, the user mobile device sends a wireless connection request to the microphone device, and after the wireless connection succeeds, it sends a scene vocabulary acquisition request to the microphone device, thereby effectively improving the intelligence and efficiency of scene learning content acquisition.
Preferably, a scene vocabulary library corresponding to the scene where the user is located is established, the scene vocabulary extracted from the recognition result is stored in this library, and the scene vocabulary in the library is sent to the user mobile device. Establishing different scene vocabulary libraries for different scenes effectively improves the accuracy of subsequent scene learning content acquisition.
Preferably, when the user dictionary library is constructed, the user's personal information, such as age, grade, occupation, hobbies, and language learning stage, is acquired, and words suitable for the user are collected according to this information to form the user dictionary library, which effectively improves the user experience. For example, when the user is a pupil who has just begun to learn English, simple words of interest to a pupil are collected, such as the names of various toys and fruits.
The learning content generating unit 33 is configured to generate scene learning content corresponding to the scene vocabulary when the user mobile device receives the scene vocabulary.
In the embodiment of the invention, when receiving the scene vocabulary sent by the microphone device, the user mobile device uses the scene vocabulary as keywords, searches a preset language learning library or language learning platform for learning content associated with the keywords, and sets this learning content as the scene learning content corresponding to the scene vocabulary. Preferably, the learning content associated with a keyword includes the corresponding vocabulary translation, grammar and sentence patterns, scene dialogues, language courses, and the like, thereby providing rich learning resources for the user in a targeted manner.
Optionally, when the user mobile device and the microphone device are the same device (for example, the user mobile device and the microphone device are the same tablet computer), the scene learning content corresponding to the scene vocabulary is generated by the microphone device.
In the embodiment of the invention, the microphone device collects sound in the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from it, and sends the scene vocabulary to the user mobile device, which generates the corresponding scene learning content. Practical and rich scene learning content is thus provided to the user in combination with the user's scene, effectively improving how closely the learning content matches the scene, helping the user improve learning efficiency, and improving the user experience.
Example four:
fig. 4 shows a structure of a scene learning content acquiring apparatus according to a fourth embodiment of the present invention, and for convenience of description, only a part related to the embodiment of the present invention is shown, where the scene learning content acquiring apparatus includes:
and the acquisition and identification unit 41 is used for acquiring and identifying the sound in the scene where the user is located when the microphone device receives the scene learning content acquisition request input by the user.
In the embodiment of the invention, when a user wants to learn the language knowledge related to the scene where the user is, a scene learning content acquisition request can be sent to the microphone device. When receiving a scene learning content acquisition request of a user, the microphone device collects sound in a scene where the user is located and identifies the collected sound.
The vocabulary extraction unit 42 is used by the microphone device to extract vocabulary from the recognition result of the sound to obtain scene vocabulary corresponding to the scene where the user is located, and to send the scene vocabulary to the user mobile device.
In the embodiment of the invention, the user vocabulary in a preset user dictionary library can be matched against the recognition result of the sound, and the user vocabulary appearing in the recognition result is obtained from the matching result. The user vocabulary whose occurrence frequency in the recognition result exceeds a preset frequency threshold is then set as the scene vocabulary corresponding to the scene where the user is located; such vocabulary can be considered representative of that scene.
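As a minimal sketch of the matching-and-thresholding step described above (the patent does not specify an implementation; the function and variable names here are hypothetical, and tokenization by whitespace is a simplifying assumption):

```python
from collections import Counter

def extract_scene_vocabulary(recognition_result, user_dictionary, frequency_threshold=3):
    """Match user-dictionary words against the speech-recognition result and
    keep those whose occurrence frequency exceeds the preset threshold."""
    words = recognition_result.split()
    counts = Counter(w for w in words if w in user_dictionary)
    # Words that occur often enough are taken as representative of the scene.
    return {w for w, n in counts.items() if n > frequency_threshold}

scene = extract_scene_vocabulary(
    "how much money how much money size size size size discount",
    user_dictionary={"money", "size", "discount"},
    frequency_threshold=3,
)
```

Here only "size" clears the threshold, so it alone would be treated as scene vocabulary; a real system would tune the threshold to the length of the recording.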
In the embodiment of the present invention, the user mobile device may be a mobile phone, a tablet computer, a learning machine, or the like. Preferably, after the microphone device has extracted scene vocabulary from the recognition result of the sound, the user mobile device detects the wireless signal (a Bluetooth or Wi-Fi signal) of the microphone device. When the wireless signal is detected, the user mobile device may send a wireless connection request to the microphone device, and after the wireless connection between the two devices is established, it sends a scene vocabulary acquisition request to the microphone device, thereby effectively improving the intelligence and efficiency of scene learning content acquisition.
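The detect-connect-request flow above can be sketched as follows. This is a mock illustration only: the class and function names are hypothetical, and the actual Bluetooth/Wi-Fi transport is abstracted away.

```python
class MockMicrophoneDevice:
    """Stand-in for the microphone device; real transport is abstracted away."""

    def __init__(self, scene_vocabulary):
        self._scene_vocabulary = scene_vocabulary
        self._connected = False

    def accept_connection(self):
        # Corresponds to the microphone device accepting the wireless
        # connection request from the user mobile device.
        self._connected = True
        return True

    def get_scene_vocabulary(self):
        # Corresponds to answering the scene vocabulary acquisition request.
        if not self._connected:
            raise RuntimeError("wireless connection not established")
        return self._scene_vocabulary

def acquire_scene_vocabulary(signal_detected, microphone):
    """Sketch of the flow: detect the wireless signal, connect, then request."""
    if not signal_detected:
        return None
    if microphone.accept_connection():
        return microphone.get_scene_vocabulary()
    return None

mic = MockMicrophoneDevice({"size", "discount"})
vocab = acquire_scene_vocabulary(True, mic)
```

The point of the ordering is that the vocabulary request is only issued once the connection has succeeded, matching the sequence described in the text.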
The type recognition unit 43 is used for the user mobile device to recognize the scene type corresponding to the scene vocabulary according to the scene vocabulary and a preset typical scene vocabulary library.
In the embodiment of the present invention, the user mobile device may match the scene vocabulary against the vocabulary in a preset typical scene vocabulary library to identify the scene type corresponding to the scene vocabulary. The scene type can be an eating scene, a shopping scene, a learning scene, and the like, and the typical scene vocabulary library stores vocabulary commonly used by users in the different scene types, such as "how much is it" and "do you have a larger size" in the shopping scene.
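One simple way to realize this matching (a sketch under assumptions; the patent does not prescribe a scoring rule, and the names below are hypothetical) is to pick the scene type whose typical vocabulary overlaps most with the extracted scene vocabulary:

```python
def identify_scene_type(scene_vocabulary, typical_scene_library):
    """Pick the scene type whose typical vocabulary overlaps most with the
    extracted scene vocabulary; return None when nothing matches."""
    best_type, best_overlap = None, 0
    for scene_type, typical_words in typical_scene_library.items():
        overlap = len(scene_vocabulary & typical_words)
        if overlap > best_overlap:
            best_type, best_overlap = scene_type, overlap
    return best_type

library = {
    "shopping": {"how much is it", "larger size", "discount"},
    "eating": {"menu", "order", "bill"},
}
scene_type = identify_scene_type({"discount", "larger size"}, library)
```

Overlap counting is the crudest possible scorer; weighting words by how distinctive they are for a scene type would be a natural refinement.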
The content searching unit 44 is configured to search a preset language learning library or a language learning platform for learning content corresponding to the scene type, and to set that learning content as the scene learning content corresponding to the scene vocabulary.
In the embodiment of the invention, after determining the scene type corresponding to the scene vocabulary, the user mobile device can search the language learning library or the language learning platform for learning content corresponding to the scene type, using the scene type as the keyword, and set this learning content as the scene learning content corresponding to the scene vocabulary. Preferably, the learning content corresponding to the scene type includes vocabulary translation, word pronunciation, scene dialogue, grammar sentence patterns, language courses, and the like, so as to provide rich learning resources for the user in a targeted manner.
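The keyword lookup described above can be sketched as follows, with a hypothetical in-memory dictionary standing in for the preset language learning library or platform:

```python
def find_scene_learning_content(scene_type, learning_library):
    """Use the scene type as a keyword to look up learning content
    (vocabulary translation, pronunciation, dialogues, grammar, courses)."""
    return learning_library.get(scene_type, [])

learning_library = {
    "shopping": [
        "vocabulary: sizes and prices",
        "dialogue: at the checkout",
        "grammar: asking questions about price",
    ],
}
content = find_scene_learning_content("shopping", learning_library)
```

A real platform would be queried over the network instead, but the contract is the same: scene type in, a list of learning resources out, with an empty result for unknown scene types.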
Optionally, when the user mobile device and the microphone device are the same device, the microphone device generates scene learning content corresponding to the scene vocabulary.
Preferably, as shown in fig. 5, the vocabulary extracting unit 42 includes:
the vocabulary matching unit 521 is used for the microphone device to match the user vocabulary in a preset user dictionary library against the recognition result, so as to obtain the user vocabulary in the recognition result; and
the frequency comparing unit 522 is configured to set the user vocabulary whose occurrence frequency in the recognition result exceeds a preset frequency threshold as the scene vocabulary corresponding to the scene where the user is located.
In the embodiment of the invention, the microphone device collects the sound of the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from the recognition result, and sends the scene vocabulary to the user mobile device, which then generates the corresponding scene learning content. In this way, practical and rich scene learning content is provided to the user in combination with the user's scene, effectively bringing the learning content closer to the scene where the user is located, thereby helping the user improve learning efficiency and improving the user experience.
In the embodiment of the present invention, each unit of the scene learning content acquiring apparatus may be implemented by a corresponding hardware or software unit; the units may be independent software or hardware units, or may be integrated into a single software or hardware unit, which is not limited herein.
Example five:
Fig. 6 shows the structure of a learning apparatus provided in a fifth embodiment of the present invention. For convenience of explanation, only the parts related to the embodiment of the present invention are shown.
The learning device 6 of an embodiment of the present invention comprises a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the various method embodiments described above, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the units in the above-described device embodiments, such as the functions of the units 31 to 33 shown in fig. 3.
In the embodiment of the invention, the microphone device collects the sound of the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from the recognition result, and sends the scene vocabulary to the user mobile device, which then generates the corresponding scene learning content. In this way, practical and rich scene learning content is provided to the user in combination with the user's scene, effectively bringing the learning content closer to the scene where the user is located, thereby helping the user improve learning efficiency and improving the user experience.
Example six:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the various method embodiments described above, e.g., steps S101 to S103 shown in fig. 1. Alternatively, the computer program may be adapted to perform the functions of the units of the above-described device embodiments, such as the functions of the units 31 to 33 shown in fig. 3, when executed by the processor.
In the embodiment of the invention, the microphone device collects the sound of the scene where the user is located, recognizes the sound content, extracts the scene vocabulary corresponding to that scene from the recognition result, and sends the scene vocabulary to the user mobile device, which then generates the corresponding scene learning content. In this way, practical and rich scene learning content is provided to the user in combination with the user's scene, effectively bringing the learning content closer to the scene where the user is located, thereby helping the user improve learning efficiency and improving the user experience.
The computer readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, a recording medium, such as a ROM/RAM, a magnetic disk, an optical disk, a flash memory, or the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A scene learning content acquisition method, characterized by comprising the steps of:
when microphone equipment receives a scene learning content acquisition request input by a user, acquiring and identifying sound in a scene where the user is located;
the microphone equipment extracts vocabulary from the recognition result of the sound to obtain the scene vocabulary corresponding to the scene where the user is located, and sends the scene vocabulary to user mobile equipment;
when the user mobile equipment receives the scene vocabulary, generating scene learning content corresponding to the scene vocabulary;
wherein the microphone equipment extracting vocabulary from the recognition result of the sound to obtain the scene vocabulary corresponding to the scene where the user is located comprises the following steps:
the microphone equipment matches user words in a preset user dictionary library with the recognition result to obtain the user words in the recognition result;
and setting the user vocabulary with the occurrence frequency exceeding a preset frequency threshold value in the recognition result as the scene vocabulary corresponding to the scene where the user is located.
2. The method of claim 1, wherein the step of generating the scene learning content corresponding to the scene vocabulary when the scene vocabulary is received by the user mobile device comprises:
the user mobile equipment searches learning contents related to the keywords in a preset language learning library or a preset language learning platform by taking the scene vocabularies as the keywords;
and setting the learning content related to the keywords as scene learning content corresponding to the scene vocabulary.
3. The method of claim 1, wherein the step of the microphone equipment extracting vocabulary from the recognition result of the sound to obtain the scene vocabulary corresponding to the scene where the user is located, and sending the scene vocabulary to the user mobile equipment, further comprises:
when a wireless connection request of the user mobile equipment is received, the microphone equipment establishes wireless connection with the user mobile equipment;
and when the user mobile equipment successfully establishes the wireless connection with the microphone equipment, sending a scene vocabulary acquisition request to the microphone equipment.
4. The method of claim 1, wherein the step of generating the scene learning content corresponding to the scene vocabulary when the scene vocabulary is received by the user mobile device comprises:
the user mobile equipment identifies a scene type corresponding to the scene vocabulary according to the scene vocabulary and a preset typical scene vocabulary library;
and searching the learning content corresponding to the scene type in a preset language learning library or a language learning platform, and setting the learning content corresponding to the scene type as the scene learning content corresponding to the scene vocabulary.
5. A scene learning content acquisition apparatus, characterized in that the apparatus comprises:
the system comprises a collecting and identifying unit, a processing unit and a processing unit, wherein the collecting and identifying unit is used for collecting and identifying the sound in the scene of a user when a microphone device receives a scene learning content acquisition request input by the user;
the vocabulary extraction unit is used for extracting vocabularies of the recognition result of the voice by the microphone equipment to obtain scene vocabularies corresponding to the scene where the user is located and sending the scene vocabularies to the user mobile equipment; and
the learning content generating unit is used for generating scene learning content corresponding to the scene vocabulary when the user mobile equipment receives the scene vocabulary;
the vocabulary extraction unit includes:
the vocabulary matching unit is used for matching the user vocabulary in a preset user dictionary library with the recognition result by the microphone equipment so as to obtain the user vocabulary in the recognition result; and
and the frequency comparison unit is used for setting the user vocabulary with the occurrence frequency exceeding a preset frequency threshold value in the recognition result as the scene vocabulary corresponding to the scene where the user is located.
6. The apparatus of claim 5, wherein the learning content generating unit comprises:
the type identification unit is used for identifying the scene type corresponding to the scene vocabulary by the user mobile equipment according to the scene vocabulary and a preset typical scene vocabulary library; and
and the content searching unit is used for searching the learning content corresponding to the scene type in a preset language learning library or a language learning platform and setting the learning content corresponding to the scene type as the scene learning content corresponding to the scene vocabulary.
7. Learning device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 4 are implemented when the computer program is executed by the processor.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201711421836.9A 2017-12-25 2017-12-25 Scene learning content acquisition method and device, learning equipment and storage medium Active CN108305629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711421836.9A CN108305629B (en) 2017-12-25 2017-12-25 Scene learning content acquisition method and device, learning equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108305629A CN108305629A (en) 2018-07-20
CN108305629B (en) 2021-07-20

Family

ID=62870728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711421836.9A Active CN108305629B (en) 2017-12-25 2017-12-25 Scene learning content acquisition method and device, learning equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108305629B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696010A (en) * 2020-05-28 2020-09-22 深圳市元征科技股份有限公司 Scene-based training method, server, terminal device and storage medium
CN112908341B (en) * 2021-02-22 2023-01-03 哈尔滨工程大学 Language learner voiceprint recognition method based on multitask self-attention mechanism
CN113377937A (en) * 2021-06-22 2021-09-10 读书郎教育科技有限公司 System and method for instantly generating English dialogue training

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101105943A (en) * 2006-07-13 2008-01-16 英业达股份有限公司 Language aided expression system and its method
CN103680262A (en) * 2012-09-25 2014-03-26 南京大五教育科技有限公司 Situational vocabulary learning method and a system thereof
WO2016101577A1 (en) * 2014-12-24 2016-06-30 中兴通讯股份有限公司 Voice recognition method, client and terminal device
CN105913039A (en) * 2016-04-26 2016-08-31 北京光年无限科技有限公司 Visual-and-vocal sense based dialogue data interactive processing method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101427528B1 (en) * 2013-06-10 2014-08-07 이장호 Method of interactive language learning using foreign Video contents and Apparatus for it
CN104715009B (en) * 2014-12-30 2018-09-11 上海孩子国科教设备有限公司 Location finding obtains the method and system of scene knowledge
BR112018015114A2 (en) * 2016-01-25 2018-12-18 Wespeke Inc digital media content extraction system, lesson generation and presentation, digital media content extraction and lesson generation system, video transmission and associated audio or text channel analysis system and automatic exercise generation learning based on the data extracted from the channel and system for video streaming analysis and automatic generation of a lesson based on the data extracted from the video streaming


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant