WO2015090182A1 - Multi-information synchronization code learning device and method - Google Patents

Multi-information synchronization code learning device and method

Info

Publication number
WO2015090182A1
WO2015090182A1 (PCT/CN2014/093937)
Authority
WO
WIPO (PCT)
Prior art keywords
voice
unit
right channel
channel
language
Prior art date
Application number
PCT/CN2014/093937
Other languages
French (fr)
Chinese (zh)
Inventor
王佑夫
杨海
张灼坤
Original Assignee
深圳环球维尔安科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳环球维尔安科技有限公司
Publication of WO2015090182A1 publication Critical patent/WO2015090182A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/04: Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • G09B 19/06: Foreign languages

Definitions

  • the present invention relates to the field of language learning methods, and in particular, to a multi-information synchronous coding learning apparatus and method.
  • the image we perceive and store in the brain is an integration of the image information formed separately by the two eyes.
  • its visual field, sense of depth, and distance judgment are all more complete than those of a single-eye image.
  • by the same principle, the way the two ears of the human body receive and process acoustic information is the same as the way visual image information is received and processed.
  • the process of hearing can be viewed simply as mechanical vibration opening the sound-transmission channels of the two ear canals; the vibration is converted into electrical signals and integrated by the auditory center, which ultimately lets us hear it in the form of cicada calls, birdsong, human speech, and so on.
  • the object of the present invention is to overcome the deficiencies in the prior art and to provide a multi-information synchronous coding learning device and method for improving learning and memory effects.
  • Providing a multi-information synchronous coding learning device comprising a central processing unit, an audio input unit, an audio output unit, a storage unit and a control panel;
  • the audio input unit is configured to input two corresponding speech signals of a source language and a target language
  • the central processing unit includes an audio processing unit, and the audio processing unit is configured to align the midpoints of the two speech segments, record the source-language (or target-language) speech signal as the left-channel voice and the target-language (or source-language) speech signal as the right-channel voice, and transmit the two processed speech signals to the storage unit;
  • the audio output unit includes left and right channel earphones; the left-channel earphone receives the left-channel voice through the left channel, the right-channel earphone receives the right-channel voice through the right channel, and the two earphones output synchronously;
  • the operation panel is composed of an operation keyboard and/or an operation touch screen, and is connected to the central processor for controlling input and output of a source language and a target language.
  • the device further includes a video and picture input unit and a display unit, wherein the video and picture input unit and the display unit are connected to the central processor for inputting and displaying pictures, videos, and subtitles corresponding to the voice signal.
  • the device further includes a text input device, and the text input device is connected to the central processor for inputting text data;
  • the central processing unit further includes a text processing unit connected to the storage unit; the text processing unit compares the text data against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  • the text input device includes letter keys, a touch-screen handwriting pad, and a scanner.
  • the above device further includes a voice input device, wherein the voice input device is a microphone, and is connected to the central processor for inputting a voice signal in real time;
  • the central processing unit further includes a voice processing unit connected to the storage unit; the voice processing unit compares the voice signal recorded in real time against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  • the audio output unit further includes a target-language loudspeaker for separately amplifying and playing the target-language portion of the matched left and right channel voices; the input and output modes include wired and wireless transmission.
  • the learning device of the present invention can be applied to mobile phones and various playback and display devices to have learning, translation and conversation functions.
  • a multi-information synchronous coding learning method is also provided, which includes the following steps:
  • S1: the source-language speech signal and its corresponding target-language speech signal are recorded in segments through the audio input unit, and each source-language segment is entered in one-to-one correspondence with its target-language segment;
  • S2: the audio processing unit processes each pair of corresponding speech signals so that their midpoints are aligned for recording, records one of the signals as the left-channel voice and the other as the right-channel voice, and stores the two processed corresponding speech signals in the storage unit;
  • S3: the audio output unit retrieves the speech signals from the storage unit, feeds the left-channel voice through the left channel to the left-channel earphone and the right-channel voice through the right channel to the right-channel earphone, and finally the left and right channel earphones output synchronously.
  • between steps S2 and S3 the method further includes a step:
  • S201: the text processing unit or the voice processing unit compares the text data or the real-time voice signal against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  • the target language is a translation, interpretation or description of the source language.
  • the beneficial effects of the present invention are that the user's two ears act as two relatively independent systems that simultaneously receive two corresponding sets of voice data to be learned, which are integrated across different cortical areas of the brain into one piece of common information.
  • for example, the Chinese "早上好" and the English "good morning" are encoded and stored in the brain together, forming a common code in the user's memory.
  • when the Chinese or English "good morning" later enters the ear again and reaches the corresponding area of the cerebral cortex, the brain can recognize it directly as already-known information without going through a translation step between Chinese and English or an association step with already-known things, and the learning process is thereby completed.
  • Figure 1 is a schematic view of the structure of the present invention
  • Figure 2 is a schematic view 2 of the structure of the present invention.
  • FIG. 3 is a schematic diagram of the midpoint alignment of the source language and the target language according to the present invention
  • Figure 4 is a flow chart of the playback of the present invention.
  • a multi-information synchronous coding learning apparatus of the present invention comprises a central processing unit 1, an audio input unit 2, an audio output unit 3, a storage unit 4, and a control panel 9.
  • the audio input unit 2 includes a source language input unit 21 and a target language input unit 22 for inputting two corresponding speech signals, one in a source language and one in a target language; the target language is a translation, interpretation or explanation of the source language, and the source-language and target-language speech signals may be, respectively, a native language and a first foreign language, a first foreign language and a second foreign language, a telegraph code and its translation, a number and a name (for example a person or organization and its telephone number, or a password digit and the corresponding spoken form), or a calculation and its result (for example the square of 2 and the result 4); when applied in fields such as dance, gymnastics, or diving, the source-language speech signal may also be music and the target-language speech signal the description of the required movements.
  • the central processing unit 1 includes an audio processing unit 11; as shown in FIG. 3, the audio processing unit 11 aligns the midpoints of the two speech segments, records the source-language (or target-language) speech signal as the left-channel voice and the target-language (or source-language) speech signal as the right-channel voice, and transmits the two processed speech signals to the storage unit 4 for storage.
  • the audio output unit 3 includes left and right channel earphones 31 and 32; the left-channel earphone 31 receives the left-channel voice through the left channel and plays the source-language (or target-language) speech signal, the right-channel earphone 32 receives the right-channel voice through the right channel and plays the target-language (or source-language) speech signal, and the two earphones output synchronously.
  • the operation panel 9 is composed of an operation keyboard 91 and/or an operation touch screen 92 and is connected to the central processing unit; it controls the input and output of the source language and the target language and controls the various input and output devices described below; the input and output modes include wired and wireless transmission.
  • in a specific use case, a segment of Chinese speech and a segment of its English translation are input through the audio input unit 2; the audio processing unit 11 then aligns the midpoints of the two Chinese and English speech signals, records the Chinese speech as the left-channel voice and the English speech as the right-channel voice, and finally the Chinese speech is played through the left-channel earphone and the English speech through the right-channel earphone synchronously, so that the Chinese and English are received by the user's left and right ears respectively.
  • the above learning device treats the user's two ears as two relatively independent systems that simultaneously receive two corresponding sets of voice data to be learned, which are integrated across different cortical areas of the brain into one piece of common information (for example the Chinese "早上好" and the English "good morning"), encoded and stored in the brain to form a common code in the user's memory.
  • when the Chinese or English "good morning" later enters the ear again and reaches the corresponding area of the cerebral cortex, the brain can immediately recognize it as already-known information without going through a translation step between Chinese and English or an association step with already-known things; the learning process is thereby completed, which greatly improves our learning and memory of a great deal of knowledge.
  • the learning device of the present invention further includes a video and picture input unit 5 and a display unit 6, which are connected to the central processing unit 1 for inputting and displaying pictures, videos, and subtitles corresponding to the voice signal.
  • the present invention further includes a text input device 7, such as letter keys, a touch-screen handwriting pad, or a scanner, and a voice input device 8, such as a microphone; the text input device 7 and the voice input device 8 are connected to the central processing unit 1 for entering text data and recording voice signals in real time.
  • the central processing unit 1 further includes a text processing unit 12 and a voice processing unit 13, both connected to the storage unit 4; they compare the text data or the real-time voice signal against the left and right channel voices stored in the storage unit 4 (the corresponding source and target languages, including Chinese-foreign translation data) and send the matched pair of speech signals (the corresponding source-language and target-language speech, for example Chinese and a foreign language) to the audio output unit 3 for synchronous playback, so that the user's two ears receive the synchronously translated information.
  • when the user converses or uses handwriting input, text data or a real-time voice signal (the source language) is entered through the text input device or the voice input device and compared against the data in the storage unit; the matched pair of source-language and target-language speech signals is then played synchronously through the earphones, and the target language is additionally broadcast through the target-language loudspeaker 33, so that the device functions as a translator and conversation machine.
  • the invention also provides a multi-information synchronous coding learning method, which comprises the following steps
  • S1: the source-language speech signal and its corresponding target-language speech signal are recorded in segments through the audio input unit, and each source-language segment is entered in one-to-one correspondence with its target-language segment;
  • the target language is the translation, interpretation or description of the source language.
  • the source language and its corresponding target language may be, respectively, a native language and a first foreign language, a first foreign language and a second foreign language, a telegraph code and its translation, a number and a name (for example a person or organization and its telephone number, or a password digit and the corresponding spoken form), or a calculation and its result (for example the square of 2 and the result 4); when applied in fields such as dance, gymnastics, or diving, the source language may be the music of the event and the target language the description of the required movements, which enables learners to reach the standard requirements more quickly.
  • S2: the audio processing unit 11 processes each pair of corresponding speech signals so that their midpoints are aligned for recording, as shown in FIG. 3; one of the speech signals is recorded as the left-channel voice and the other as the right-channel voice, and the two processed corresponding speech signals are stored in the storage unit 4.
  • S3: the audio output unit 3 then retrieves the speech signals from the storage unit; the left-channel voice is fed through the left channel to the left-channel earphone 31 and the right-channel voice through the right channel to the right-channel earphone 32, and finally the left-channel earphone 31 and the right-channel earphone 32 output synchronously.
  • between steps S2 and S3 the method further includes a step S201: text data or a real-time voice signal is collected through the text input device or the voice input device; the text processing unit or the voice processing unit compares it against the left and right channel voices stored in the storage unit (the corresponding source and target languages, including Chinese-foreign translation data) and sends the matched pair of corresponding speech signals (for example two speech signals corresponding to Chinese and a foreign language) to the audio output unit for synchronous playback; the user's ears receive the original speech and its synchronous translation, so that the user can understand the meaning of the language spoken by the other party and the device serves as a translator.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A multi-information synchronization code learning device comprises: an audio input unit (2) for inputting two corresponding voice segments; an audio processing unit (11) for aligning the midpoints of the two voice segments and recording them as a left-channel voice and a right-channel voice respectively; and an audio output unit (3) comprising a left-channel earphone and a right-channel earphone that receive the left-channel voice and the right-channel voice respectively and output them synchronously. The present invention uses the user's two ears as two relatively independent systems to simultaneously receive two groups of voice data that need to be learned in correspondence, forming a common code in the user's memory; when the corresponding signals enter the brain again, they can be recognized directly as already-known information without a translation step between, for example, Chinese and English or an association step with already-known things, so that the learning process is completed and learning and memory of a great deal of knowledge are greatly improved. Also disclosed is a multi-information synchronization code learning method.

Description

Multi-information synchronous coding learning device and method
[Technical Field]
The present invention relates to the field of language learning methods, and in particular to a multi-information synchronous coding learning apparatus and method.
[Background Art]
As a doctor, while studying the principle of binocular imaging, the inventor noticed that the image signals received by the left and right eyeballs are different. Visual information about the outside world passes through the retina, optic nerve, and optic chiasm: it travels through the photoreceptor cells, bipolar cells, horizontal cells, and ganglion cells, is carried by the optic nerve in a "serial" information pattern to the lateral geniculate body, where it is decoded into a "dot-matrix" form, and is then transmitted by the optic radiation to the different functional areas of the primary visual cortex and finally to the corresponding specialized higher-order areas, being integrated across different cortical regions to produce a complete perception of the visual information. The image we perceive and store in the brain is therefore an integration of the image information formed separately by the two eyes, and its visual field, sense of depth, and distance judgment are all more complete than those of a single-eye image. By the same principle, the way the two ears of the human body receive and process acoustic information is the same as the way visual image information is received and processed. The process of hearing can be viewed simply as mechanical vibration opening the sound-transmission channels of the two ear canals; the vibration is converted into electrical signals and integrated by the auditory center, which ultimately lets us hear it in the form of cicada calls, birdsong, human speech, and so on.
At present, although there are modes that combine video with audio to strengthen foreign-language learning, none of them provides a mode in which the native language plus a foreign language, or two foreign languages, are input together with video images. Studies have shown that the human brain's recognition and memory of things relies on the sensory organs turning the form, sound, touch, and other stimuli of external things into information that is stored in the corresponding parts of the brain. When the information formed by a new stimulus is the same as the memory information already in the brain, the stimulus triggers a recall reflex, telling us that this is something we already know. In the process of growing and learning, human beings continually broaden and deepen their recognition of the things and knowledge around them.
[Summary of the Invention]
The object of the present invention is to overcome the deficiencies in the prior art and to provide a multi-information synchronous coding learning device and method for improving learning and memory.
The object of the invention is achieved as follows:
A multi-information synchronous coding learning device is provided, comprising a central processing unit, an audio input unit, an audio output unit, a storage unit, and a control panel;
The audio input unit is configured to input two corresponding speech signals of a source language and a target language;
The central processing unit includes an audio processing unit; the audio processing unit is configured to align the midpoints of the two speech segments, record the source-language (or target-language) speech signal as the left-channel voice and the target-language (or source-language) speech signal as the right-channel voice, and transmit the two processed speech signals to the storage unit;
The audio output unit includes left and right channel earphones; the left-channel earphone receives the left-channel voice through the left channel, the right-channel earphone receives the right-channel voice through the right channel, and the two earphones output synchronously;
The operation panel is composed of an operation keyboard and/or an operation touch screen, and is connected to the central processor for controlling the input and output of the source language and the target language.
In the above device, a video and picture input unit and a display unit are further included; the video and picture input unit and the display unit are connected to the central processor for inputting and displaying pictures, videos, and subtitles corresponding to the voice signal.
In the above device, a text input device is further included; the text input device is connected to the central processor for inputting text data;
The central processing unit further includes a text processing unit connected to the storage unit; the text processing unit compares the text data against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
In the above device, the text input device includes letter keys, a touch-screen handwriting pad, and a scanner.
In the above device, a voice input device is further included; the voice input device is a microphone connected to the central processor for recording a voice signal in real time;
The central processing unit further includes a voice processing unit connected to the storage unit; the voice processing unit compares the voice signal recorded in real time against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
In the above device, the audio output unit further includes a target-language loudspeaker for separately amplifying and playing the target-language portion of the matched left and right channel voices; the input and output modes include wired and wireless transmission.
The learning device of the present invention can be applied to mobile phones and to various playback and display devices so that they provide learning, translation, and conversation functions.
A multi-information synchronous coding learning method is also provided, which includes the following steps:
S1: the source-language speech signal and its corresponding target-language speech signal are recorded in segments through the audio input unit, and each source-language segment is entered in one-to-one correspondence with its target-language segment;
S2: the audio processing unit processes each pair of corresponding speech signals so that their midpoints are aligned for recording, records one of the signals as the left-channel voice and the other as the right-channel voice, and stores the two processed corresponding speech signals in the storage unit;
S3: the audio output unit then retrieves the speech signals from the storage unit; the left-channel voice is fed through the left channel to the left-channel earphone and the right-channel voice through the right channel to the right-channel earphone, and finally the left and right channel earphones output synchronously.
S4: the subtitles, videos, and pictures corresponding to the source language are output through the display unit in synchronization with the audio output unit.
Further, between steps S2 and S3 the method includes the following step:
S201: text data or a real-time voice signal is collected through the text input device or the voice input device; the text processing unit or the voice processing unit compares it against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
Further, the target language is a translation, interpretation or explanation of the source language.
Compared with the prior art, the beneficial effects of the present invention are as follows. The user's two ears act as two relatively independent systems that simultaneously receive two sets of corresponding voice data to be learned; these are integrated across different cortical areas of the brain into one piece of common information (for example the Chinese "早上好" and the English "good morning"), which is encoded and stored in the brain and forms a common code in the user's memory. When the Chinese or English "good morning" later enters the ear again and reaches the corresponding area of the cerebral cortex, the brain can immediately recognize it as already-known information without going through a translation step between Chinese and English or an association step with already-known things, and the learning process is thereby completed; this greatly improves our learning and memory of a great deal of knowledge. In addition, corresponding pictures, videos, and subtitles are added to form a multi-dimensional learning code that completes the learning process and further improves learning and memory. Moreover, text data can be entered through the text input device and voice signals recorded in real time through the voice input device and compared against the data in the storage unit, and the matched pair of corresponding speech signals can be played back synchronously, so that the device also functions as a translator and conversation machine.
[Description of the Drawings]
Figure 1 is a first schematic structural diagram of the present invention;
Figure 2 is a second schematic structural diagram of the present invention;
Figure 3 is a schematic diagram of the midpoint alignment of the source language and the target language according to the present invention;
Figure 4 is a playback flow chart of the present invention.
[Detailed Description]
The present invention is further described below in conjunction with the accompanying drawings and specific embodiments:
As shown in FIG. 1, a multi-information synchronous coding learning apparatus of the present invention comprises a central processing unit 1, an audio input unit 2, an audio output unit 3, a storage unit 4, and a control panel 9.
The audio input unit 2 includes a source language input unit 21 and a target language input unit 22 for inputting two corresponding speech signals of a source language and a target language. One of the speech signals is in the source language and the corresponding one is in the target language, the target language being a translation, interpretation or explanation of the source language. The source-language and target-language speech signals may be, respectively, a native language and a first foreign language, a first foreign language and a second foreign language, a telegraph code and its translation, a number and a name (for example a person or organization and its telephone number, or a password digit and the corresponding spoken form), or a calculation and its result (for example the square of 2 and the result 4); when the device is applied in fields such as dance, gymnastics, or diving, the source-language speech signal may also be music and the target-language speech signal the description of the required movements.
The central processing unit 1 includes an audio processing unit 11. As shown in FIG. 3, the audio processing unit 11 aligns the midpoints of the two speech segments, records the source-language (or target-language) speech signal as the left-channel voice and the target-language (or source-language) speech signal as the right-channel voice, and transmits the two processed speech signals to the storage unit 4 for storage.
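The midpoint alignment described above can be sketched in a few lines of Python. This is a minimal illustration and not the patent's own implementation: it assumes the two speech segments are available as mono WAV files and that the numpy and soundfile packages are installed (both are assumptions). The shorter segment is zero-padded equally at both ends so that the two midpoints coincide, and the result is written as a two-channel file with the source language in the left channel and the target language in the right channel.

```python
import numpy as np
import soundfile as sf  # assumed WAV I/O dependency; any equivalent library would do


def make_synchronized_stereo(source_wav, target_wav, out_wav):
    """Align the midpoints of two mono speech segments and store them as a
    left-channel (source language) / right-channel (target language) pair."""
    src, rate_src = sf.read(source_wav, dtype="float32")
    tgt, rate_tgt = sf.read(target_wav, dtype="float32")
    if rate_src != rate_tgt:
        raise ValueError("both segments must use the same sample rate")

    # Zero-pad the shorter segment equally at both ends so both midpoints line up.
    length = max(len(src), len(tgt))

    def center_pad(x):
        missing = length - len(x)
        return np.pad(x, (missing // 2, missing - missing // 2))

    src, tgt = center_pad(src), center_pad(tgt)

    # Column 0 becomes the left channel, column 1 the right channel.
    sf.write(out_wav, np.stack([src, tgt], axis=1), rate_src)
    return out_wav
```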
The audio output unit 3 includes left and right channel earphones 31 and 32. The left-channel earphone 31 receives the left-channel voice through the left channel and plays the source-language (or target-language) speech signal; the right-channel earphone 32 receives the right-channel voice through the right channel and plays the target-language (or source-language) speech signal, and the two earphones output synchronously.
The operation panel 9 is composed of an operation keyboard 91 and/or an operation touch screen 92 and is connected to the central processing unit; it controls the input and output of the source language and the target language and controls the various input and output devices described below. The input and output modes include wired and wireless transmission.
In a specific use case, a segment of Chinese speech and a segment of its English translation are input through the audio input unit 2. The audio processing unit 11 then aligns the midpoints of the two Chinese and English speech signals, records the Chinese speech as the left-channel voice and the English speech as the right-channel voice, and finally the Chinese speech is played through the left-channel earphone and the English speech through the right-channel earphone synchronously, so that the Chinese and English are received by the user's left and right ears respectively.
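Using the sketch above, the Chinese/English example in this paragraph would look roughly as follows; the file names are hypothetical placeholders for the recorded segments.

```python
# Chinese "早上好" on the left ear, English "good morning" on the right ear.
make_synchronized_stereo("zaoshanghao_cn.wav", "good_morning_en.wav",
                         "good_morning_pair.wav")
```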
The above learning device treats the user's two ears as two relatively independent systems that simultaneously receive two corresponding sets of voice data to be learned; these are integrated across different cortical areas of the brain into one piece of common information (for example the Chinese "早上好" and the English "good morning"), which is encoded and stored in the brain and forms a common code in the user's memory. When the Chinese or English "good morning" later enters the ear again and reaches the corresponding area of the cerebral cortex, the brain can immediately recognize it as already-known information without going through a translation step between Chinese and English or an association step with already-known things, and the learning process is thereby completed; this greatly improves our learning and memory of a great deal of knowledge.
The learning device of the present invention further includes a video and picture input unit 5 and a display unit 6, which are connected to the central processing unit 1 for inputting and displaying pictures, videos, and subtitles corresponding to the voice signal. As shown in FIG. 4, on the basis of the two ears forming relatively independent learning systems, the corresponding pictures, videos, and subtitles are added to form a multi-dimensional learning code that completes the learning process and further improves learning and memory.
Further, as shown in FIG. 2, the present invention also includes a text input device 7, such as letter keys, a touch-screen handwriting pad, or a scanner, and a voice input device 8, such as a microphone; the text input device 7 and the voice input device 8 are connected to the central processing unit 1 for entering text data and recording voice signals in real time. The central processing unit 1 further includes a text processing unit 12 and a voice processing unit 13, both connected to the storage unit 4. They compare the text data or the real-time voice signal against the left and right channel voices stored in the storage unit 4 (the corresponding source and target languages, including Chinese-foreign translation data) and send the matched pair of speech signals (the corresponding source-language and target-language speech, for example Chinese and a foreign language) to the audio output unit 3 for synchronous playback; the user's two ears receive the synchronously translated information, so that the user can understand the meaning of the language spoken by the other party and the device serves as a translator. When the user is conversing or using handwriting input, text data or a real-time voice signal (the source language) is entered through the text input device or the voice input device and compared against the data in the storage unit; the matched pair of source-language and target-language speech signals is played synchronously through the earphones, and the target language is additionally broadcast through the target-language loudspeaker 33, so that the device functions as a translator and conversation machine.
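One way to read the comparison step is as a lookup: the storage unit holds prepared left/right channel pairs keyed by their source-language content, and the entered text (or the text recognized from the real-time voice signal) selects which pair to play. The sketch below is an assumption-laden illustration rather than the patent's implementation; the keyed dictionary, the file names, and the use of the sounddevice package for playback are all hypothetical.

```python
import soundfile as sf
import sounddevice as sd  # assumed playback dependency

# Hypothetical store: source-language text -> (stereo pair file, target-language-only file).
STORED_PAIRS = {
    "早上好": ("good_morning_pair.wav", "good_morning_en.wav"),
}


def translate_and_play(source_text):
    """Look up the stored left/right channel pair for the entered (or
    speech-recognized) source text and play it, then play the target-language
    track alone, which on the device would go to the target-language speaker 33."""
    key = source_text.strip()
    if key not in STORED_PAIRS:
        raise KeyError(f"no stored pair for: {key!r}")
    pair_file, target_file = STORED_PAIRS[key]

    pair, rate = sf.read(pair_file, dtype="float32")
    sd.play(pair, rate)    # both channels start together: synchronized earphone output
    sd.wait()

    target, rate = sf.read(target_file, dtype="float32")
    sd.play(target, rate)  # stands in for the separate target-language loudspeaker
    sd.wait()
```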
The invention also provides a multi-information synchronous coding learning method, which comprises the following steps:
S1: the source-language speech signal and its corresponding target-language speech signal are recorded in segments through the audio input unit, and each source-language segment is entered in one-to-one correspondence with its target-language segment. The target language is a translation, interpretation or explanation of the source language; the source language and its corresponding target language may be, respectively, a native language and a first foreign language, a first foreign language and a second foreign language, a telegraph code and its translation, a number and a name (for example a person or organization and its telephone number, or a password digit and the corresponding spoken form), or a calculation and its result (for example the square of 2 and the result 4). When applied in fields such as dance, gymnastics, or diving, the source language may be the music of the event and the target language the description of the required movements, enabling learners to reach the standard requirements more quickly.
S2: the audio processing unit 11 processes each pair of corresponding speech signals so that their midpoints are aligned for recording, as shown in FIG. 3; one of the speech signals is recorded as the left-channel voice and the other as the right-channel voice, and the two processed corresponding speech signals are stored in the storage unit 4.
S3: the audio output unit 3 then retrieves the speech signals from the storage unit; the left-channel voice is fed through the left channel to the left-channel earphone 31 and the right-channel voice through the right channel to the right-channel earphone 32, and finally the left-channel earphone 31 and the right-channel earphone 32 output synchronously.
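A correspondingly minimal playback sketch for step S3, again assuming the soundfile and sounddevice packages: a two-channel file played as-is already routes column 0 to the left earphone and column 1 to the right earphone of a stereo output device, which is the synchronized left/right delivery the step describes.

```python
import soundfile as sf
import sounddevice as sd  # assumed playback dependency


def play_stereo(pair_wav):
    """Play a stored pair: channel 0 goes to the left earphone (left-channel
    voice), channel 1 to the right earphone (right-channel voice)."""
    data, rate = sf.read(pair_wav, dtype="float32")
    sd.play(data, rate)  # both channels start on the same sample, i.e. synchronously
    sd.wait()


play_stereo("good_morning_pair.wav")  # hypothetical file produced earlier
```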
S4: the subtitles, videos, and pictures corresponding to the source language are output through the display unit in synchronization with the audio signal of the audio output unit.
Further, when the method is applied to simultaneous translation or to a conversation machine, a step S201 is included between steps S2 and S3: text data or a real-time voice signal is collected through the text input device or the voice input device; the text processing unit or the voice processing unit compares it against the left and right channel voices stored in the storage unit (the corresponding source and target languages, including Chinese-foreign translation data) and sends the matched pair of corresponding speech signals (for example two speech signals corresponding to Chinese and a foreign language) to the audio output unit for synchronous playback. The user's ears receive the original speech and its synchronous translation, so that the user can understand the meaning of the language spoken by the other party and the device serves as a translator. When the user is conversing or using handwriting input, the text data or real-time voice signal (the source language) entered through the text input device or the voice input device is compared against the data in the storage unit, the matched pair of source-language and target-language speech signals is played synchronously through the earphones, and the target language is additionally broadcast through the target-language loudspeaker 33, so that the device functions as a translator and conversation machine.
In light of the disclosure and teaching of the above description, those skilled in the art may make appropriate changes and modifications to the above embodiments. Therefore, the invention is not limited to the specific embodiments disclosed and described above, and such modifications and variations are intended to fall within the scope of the claims of the invention. In addition, although some specific terms are used in this specification, they are used only for convenience of description and do not limit the invention in any way.

Claims (10)

  1. A multi-information synchronous coding learning device, characterized by comprising a central processing unit, an audio input unit, an audio output unit, a storage unit, and a control panel;
    The audio input unit is configured to input two corresponding speech signals of a source language and a target language;
    The central processing unit includes an audio processing unit; the audio processing unit is configured to align the midpoints of the two speech segments, record the source-language (or target-language) speech signal as the left-channel voice and the target-language (or source-language) speech signal as the right-channel voice, and transmit the two processed speech signals to the storage unit;
    The audio output unit includes left and right channel earphones; the left-channel earphone receives the left-channel voice through the left channel, the right-channel earphone receives the right-channel voice through the right channel, and the two earphones output synchronously;
    The control panel is composed of an operation keyboard and/or an operation touch screen, and is connected to the central processor for controlling the input and output of the source language and the target language.
  2. The multi-information synchronous coding learning device according to claim 1, further comprising a video and picture input unit and a display unit, wherein the video and picture input unit and the display unit are connected to the central processor for inputting and displaying pictures, videos, and subtitles corresponding to the voice signal.
  3. The multi-information synchronous coding learning device according to claim 1 or 2, further comprising a text input device, wherein the text input device is connected to the central processor for inputting text data;
    The central processing unit further includes a text processing unit connected to the storage unit; the text processing unit compares the text data against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  4. The multi-information synchronous coding learning device according to claim 3, wherein the text input device comprises letter keys, a touch-screen handwriting pad, and a scanner.
  5. The multi-information synchronous coding learning device according to claim 1 or 2, further comprising a voice input device, wherein the voice input device is a microphone connected to the central processor for recording a voice signal in real time;
    The central processing unit further includes a voice processing unit connected to the storage unit; the voice processing unit compares the voice signal recorded in real time against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  6. The multi-information synchronous coding learning device according to claim 5, wherein the audio output unit further comprises a target-language loudspeaker for separately amplifying and playing the target-language portion of the matched left and right channel voices; the input and output modes include wired and wireless transmission.
  7. A multi-information synchronous coding learning method, characterized by comprising the following steps:
    S1: the source-language speech signal and its corresponding target-language speech signal are recorded in segments through the audio input unit, and each source-language segment is entered in one-to-one correspondence with its target-language segment;
    S2: the audio processing unit processes each pair of corresponding speech signals so that their midpoints are aligned for recording, records one of the signals as the left-channel voice and the other as the right-channel voice, and stores the two processed corresponding speech signals in the storage unit;
    S3: the audio output unit then retrieves the speech signals from the storage unit; the left-channel voice is fed through the left channel to the left-channel earphone and the right-channel voice through the right channel to the right-channel earphone, and finally the left and right channel earphones output synchronously.
  8. The multi-information synchronous coding learning method according to claim 7, further comprising the step of:
    S4: outputting the subtitles, videos, and pictures corresponding to the source language through the display unit in synchronization with the audio output unit.
  9. The multi-information synchronous coding learning method according to claim 7 or 8, characterized in that between steps S2 and S3 the method further comprises a step:
    S201: collecting text data or a real-time voice signal through the text input device or the voice input device; the text processing unit or the voice processing unit compares it against the left and right channel voices (the corresponding source and target languages) stored in the storage unit and sends the matched pair of left and right channel voices (the corresponding source and target languages) to the audio output unit for output.
  10. The multi-information synchronous coding learning method according to claim 7, wherein the target language is a translation, interpretation or explanation of the source language.
PCT/CN2014/093937 2013-12-17 2014-12-16 Multi-information synchronization code learning device and method WO2015090182A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310693347.4 2013-12-17
CN201310693347.4A CN103680231B (en) 2013-12-17 2013-12-17 Multi information synchronous coding learning device and method

Publications (1)

Publication Number Publication Date
WO2015090182A1 true WO2015090182A1 (en) 2015-06-25

Family

ID=50317634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/093937 WO2015090182A1 (en) 2013-12-17 2014-12-16 Multi-information synchronization code learning device and method

Country Status (2)

Country Link
CN (1) CN103680231B (en)
WO (1) WO2015090182A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103680231B (en) * 2013-12-17 2015-12-30 深圳环球维尔安科技有限公司 Multi information synchronous coding learning device and method
CN107708006B (en) * 2017-08-23 2020-08-28 广东思派康电子科技有限公司 Computer-readable storage medium, real-time translation system
CN107656923A (en) * 2017-10-13 2018-02-02 深圳市沃特沃德股份有限公司 Voice translation method and device
CN109275057A (en) * 2018-08-31 2019-01-25 歌尔科技有限公司 A kind of translation earphone speech output method, system and translation earphone and storage medium
CN109634553A (en) * 2018-12-17 2019-04-16 聚好看科技股份有限公司 A kind of display methods, control device and display terminal for drawing this
CN111179657A (en) * 2020-02-22 2020-05-19 李孝龙 Multi-language intelligent learning machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2049006U (en) * 1989-06-05 1989-12-06 傅震生 Playback recorder for studying language
CN1114444A (en) * 1994-06-06 1996-01-03 袁鸣 Method for developing left and right brain to coordinatively study forein language utilizing sense of hearing and apparatus thereof
CN1367906A (en) * 1999-07-31 2002-09-04 朴奎珍 Study method and apparatus using digital audio and caption data
CN1802679A (en) * 2003-07-08 2006-07-12 I.P.投资有限公司 Knowledge acquisition system, apparatus and course
CN103680231A (en) * 2013-12-17 2014-03-26 深圳环球维尔安科技有限公司 Multi-information synchronous encoding and learning device and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001249679A (en) * 2000-03-03 2001-09-14 Rikogaku Shinkokai Foreign language self-study system
KR100568167B1 (en) * 2000-07-18 2006-04-05 한국과학기술원 Method of foreign language pronunciation speaking test using automatic pronunciation comparison method
JP2005107483A (en) * 2003-09-11 2005-04-21 Nippon Telegr & Teleph Corp <Ntt> Word learning method, word learning apparatus, word learning program, and recording medium with the program recorded thereon, and character string learning method, character string learning apparatus, character string learning program, and recording medium with the program recorded thereon
CN101136232A (en) * 2007-10-15 2008-03-05 殷亮 Double subtitling double track data media and player having parent language of foreign languages

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2049006U (en) * 1989-06-05 1989-12-06 傅震生 Playback recorder for studying language
CN1114444A (en) * 1994-06-06 1996-01-03 袁鸣 Method for developing left and right brain to coordinatively study forein language utilizing sense of hearing and apparatus thereof
CN1367906A (en) * 1999-07-31 2002-09-04 朴奎珍 Study method and apparatus using digital audio and caption data
CN1802679A (en) * 2003-07-08 2006-07-12 I.P.投资有限公司 Knowledge acquisition system, apparatus and course
CN103680231A (en) * 2013-12-17 2014-03-26 深圳环球维尔安科技有限公司 Multi-information synchronous encoding and learning device and method

Also Published As

Publication number Publication date
CN103680231A (en) 2014-03-26
CN103680231B (en) 2015-12-30

Similar Documents

Publication Publication Date Title
WO2015090182A1 (en) Multi-information synchronization code learning device and method
Hendrickx et al. Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis
JP4439740B2 (en) Voice conversion apparatus and method
Chern et al. A smartphone-based multi-functional hearing assistive system to facilitate speech recognition in the classroom
WO2018194710A1 (en) Wearable auditory feedback device
Vlaming et al. HearCom: Hearing in the communication society
CN107112026A (en) System, the method and apparatus for recognizing and handling for intelligent sound
CN113228029A (en) Natural language translation in AR
JP3670180B2 (en) hearing aid
US10791404B1 (en) Assisted hearing aid with synthetic substitution
Williges et al. Coherent coding of enhanced interaural cues improves sound localization in noise with bilateral cochlear implants
US11412341B2 (en) Electronic apparatus and controlling method thereof
Gick et al. The temporal window of audio-tactile integration in speech perception
Bicevskis et al. Visual-tactile integration in speech perception: Evidence for modality neutral speech primitives
US20170118571A1 (en) Electronic apparatus and sound signal adjustment method thereof
US20220329966A1 (en) Electronic apparatus and controlling method thereof
US20150049879A1 (en) Method of audio processing and audio-playing device
Brandenburg et al. Creating auditory illusions with binaural technology
He et al. Mandarin tone identification in cochlear implant users using exaggerated pitch contours
Rudmann et al. Bimodal displays improve speech comprehension in environments with multiple speakers
Siddig et al. Perception Deception: Audio-Visual Mismatch in Virtual Reality Using The McGurk Effect.
US9973853B2 (en) Fixed apparatus and audio collection apparatus
Zenke et al. Spatial release of masking in children and adults in non-individualized virtual environments
CN204204219U (en) Multi information synchronous coding learning device
Sheffield et al. The effect of sound localization on auditory-only and audiovisual speech recognition in a simulated multitalker environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14872430

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 23.11.2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14872430

Country of ref document: EP

Kind code of ref document: A1