CN110047488A - Voice translation method, device, equipment and control equipment - Google Patents
Voice translation method, device, equipment and control equipment
- Publication number
- CN110047488A (application CN201910154764.9A)
- Authority
- CN
- China
- Prior art keywords
- language text
- audio data
- modification rule
- text
- language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/109—Font handling; Temporal or kinetic typography
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
The present application provides a voice translation method, apparatus, device, and control device. The method includes: obtaining audio data and the language text that a speech processing device converts from the audio data; sending the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text against the audio data and, when the verification determines that the language text is wrong, generates a modification rule; and receiving the modification rule sent by the controlling terminal and calibrating the language text according to it. The voice translation method, apparatus, device, and control device provided by the present application improve the accuracy of the translated language text, and the language text can be calibrated in real time according to the modification rule without affecting the normal progress of a meeting.
Description
Technical field
This application relates to the field of computer technology, and in particular to a voice translation method, apparatus, device, and control device.
Background
With the development of speech recognition technology, people are no longer content with automated text translation; the demand for voice translation is also growing, and it is widely used to support translation at meetings.
Voice translation recognizes speech and translates it into text in a target language; that is, while the user speaks, the translation system directly produces a target-language translation of the user's speech. In the prior art, voice translation generally involves two systems: the user terminal performs voice input and display, and the speech processing device performs speech recognition and translation.
Because of the diversity of voice-input and display devices at the user terminal, current voice translation often suffers from problems such as inaccurate translation and poorly displayed text, so that participants cannot read the translation result normally, causing the meeting to be interrupted or disturbing the speaker.
Summary of the invention
The present application provides a voice translation method, apparatus, device, and control device, to solve the technical problem of inaccurate voice translation in the prior art.
In a first aspect, an embodiment of the invention provides a voice translation method, comprising:
obtaining audio data and the language text that a speech processing device converts from the audio data;
sending the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text against the audio data and, when the verification determines that the language text is wrong, generates a modification rule;
receiving the modification rule sent by the controlling terminal, and calibrating the language text according to the modification rule.
In a second aspect, an embodiment of the invention provides a voice translation method, comprising:
receiving audio data sent by a user terminal and the language text converted from the audio data;
verifying the language text against the audio data, and generating a modification rule when the verification determines that the language text is wrong;
sending the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
In a third aspect, an embodiment of the invention provides a speech translation apparatus, comprising:
an obtaining module, configured to obtain audio data and the language text that a speech processing device converts from the audio data;
a first sending module, configured to send the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text against the audio data and generates a modification rule when the verification determines that the language text is wrong;
a calibration module, configured to receive the modification rule sent by the controlling terminal and calibrate the language text according to the modification rule.
In a fourth aspect, an embodiment of the invention provides a speech translation apparatus, comprising:
a second receiving module, configured to receive audio data sent by a user terminal and the language text converted from the audio data;
a verification module, configured to verify the language text against the audio data and generate a modification rule when the verification determines that the language text is wrong;
a second sending module, configured to send the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
In a fifth aspect, an embodiment of the invention provides a speech translation device, comprising a memory and a processor;
the memory is configured to store instructions executable by the processor;
the processor is configured to execute the executable instructions to implement the method described in the first aspect.
In a sixth aspect, an embodiment of the invention provides a control device, comprising a memory and a processor;
the memory is configured to store instructions executable by the processor;
the processor is configured to execute the executable instructions to implement the method described in the second aspect.
In a seventh aspect, an embodiment of the invention provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the method described in the first aspect or the method described in the second aspect.
In an eighth aspect, an embodiment of the invention provides a speech translation system, comprising: a speech processing device, the speech translation device described in the fifth aspect, and the control device described in the sixth aspect.
The voice translation method and apparatus provided by the embodiments of the invention obtain audio data and the language text that a speech processing device converts from the audio data; send the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text against the audio data and generates a modification rule when the verification determines that the language text is wrong; and receive the modification rule sent by the controlling terminal and calibrate the language text according to it, obtaining a calibrated language text. This improves the accuracy of the translated language text, and because the modification rule can be sent and adjusted in real time, the language text can be calibrated in real time without affecting the normal progress of a meeting.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an architecture diagram of a speech translation system provided by an embodiment of the invention;
Fig. 2 is a flow diagram of a voice translation method provided by an embodiment of the invention;
Fig. 3 is a flow diagram of a voice translation method provided by another embodiment of the invention;
Fig. 4 is a flow diagram of a voice translation method provided by a further embodiment of the invention;
Fig. 5 is a flow diagram of a voice translation method provided by yet another embodiment of the invention;
Fig. 6 is an interaction signaling diagram of a voice translation method provided by an embodiment of the invention;
Fig. 7 is an interaction signaling diagram of a voice translation method provided by another embodiment of the invention;
Fig. 8 is a functional block diagram of a speech translation apparatus provided by an embodiment of the invention;
Fig. 9 is a functional block diagram of a speech translation apparatus provided by another embodiment of the invention;
Fig. 10 is a hardware structure diagram of a speech translation device provided by an embodiment of the invention;
Fig. 11 is a hardware structure diagram of a control device provided by an embodiment of the invention;
Fig. 12 is a structural diagram of a speech translation system provided by an embodiment of the invention.
The above drawings show specific embodiments of the disclosure, which are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Specific embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as recited in the appended claims.
In addition, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example.
The voice translation method provided by the present application is applicable to the architecture of the speech translation system shown in Fig. 1. Taking the speech translation system shown in Fig. 1 as an example, the system includes a user terminal 10, a speech processing device 20, and a controlling terminal 30. The user terminal 10 may be a terminal device such as a mobile phone, a computer, a vehicle-mounted terminal, a smart home device, or a robot, which is not limited here. Through the user terminal 10, a user can perform business operations such as specifying the translation language and reading the text displayed after voice translation. The user terminal 10 may have an audio collecting device for acquiring audio data, and includes at least one image or text display unit for showing the language text translated by the speech processing device 20.
The speech processing device 20 is used to recognize and translate the audio data.
The controlling terminal 30 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud computing device, which is not limited here. Through the controlling terminal 30, a user can perform business operations such as sending instructions and setting rules.
The controlling terminal 30 communicates with the user terminal 10 over a network. Optionally, the controlling terminal 30 may communicate with the speech processing device 20 over a network, and the speech processing device 20 communicates with the user terminal 10 over a network, thereby realizing indirect communication between the controlling terminal 30 and the user terminal 10. One controlling terminal 30 can communicate with multiple user terminals 10, and one speech processing device 20 can also communicate with multiple user terminals 10. The network mentioned above can be adapted to different network standards.
The user performs voice input at the user terminal 10 and sets the target language for the recognized and translated speech. The user terminal 10 sends the acquired audio data and target language to the speech processing device 20, which recognizes and translates the audio data, converts it into language text in the set language, and feeds the language text back to the user terminal 10. The user terminal 10 sends the language text and the audio data to the controlling terminal 30, which generates a modification rule or control instruction from the language text and the audio data and sends it to the user terminal 10. The user terminal 10 calibrates or adjusts the language text according to the modification rule or control instruction, obtaining a calibrated language text and improving the accuracy of the translation result. The controlling terminal 30 interacts with the user terminal 10 in real time, realizing real-time calibration of the language text; and one controlling terminal 30 can interact with multiple user terminals 10, which suits scenarios with multiple speakers or cooperation among multiple user terminals.
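The round trip just described — user terminal to speech processing device, back to the user terminal, on to the controlling terminal, and back again with a modification rule — can be sketched as a minimal message flow. All class and function names below are illustrative assumptions, not part of the patent; the recognizer/translator and the operator's check are stubbed out.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModificationRule:
    old: str   # erroneous fragment in the language text
    new: str   # operator-supplied replacement

def speech_processing_device(audio: bytes, target_lang: str) -> str:
    # Stub: recognize the audio and translate it into target-language text.
    return "helo world" if target_lang == "en" else "?"

def controlling_terminal(audio: bytes, text: str) -> Optional[ModificationRule]:
    # Stub for the operator verifying the text against the audio data.
    return ModificationRule("helo", "hello") if "helo" in text else None

def user_terminal(audio: bytes, target_lang: str = "en") -> str:
    text = speech_processing_device(audio, target_lang)  # recognition + translation
    rule = controlling_terminal(audio, text)             # verification
    if rule is not None:                                 # real-time calibration
        text = text.replace(rule.old, rule.new)
    return text

print(user_terminal(b"..."))  # -> hello world
```

The point of the sketch is the division of labor: the user terminal never verifies anything itself; it only applies whatever rule the controlling terminal returns.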
The technical solution of the present application, and how it solves the above technical problem, are described in detail below through specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 2 is a flow diagram of a voice translation method provided by an embodiment of the invention. The method of this embodiment is executed by the user terminal in the embodiment shown in Fig. 1. As shown in Fig. 2, the method comprises:
S201: obtaining audio data and the language text that the speech processing device converts from the audio data.
In this embodiment, the audio data come from either of the following: an audio collecting device carried by the user terminal, or an audio output unit carried by the user terminal.
In one embodiment, the user terminal includes at least one audio collecting device. The user terminal sends a first command signal to the audio collecting device it carries, so that the audio collecting device acquires audio data according to the first command signal. The first command signal may be specified by the user or sent by the controlling terminal. Optionally, the audio collecting device is a microphone.
To illustrate this embodiment clearly, consider a possible conference scenario: the user terminal is a computer placed at the lectern, with multiple microphones and a projector, and the controlling terminal is another computer at the meeting site, operated by an operator. The speech processing device communicates with the user terminal through a wireless networking arrangement.
When the meeting site is prepared, the wireless microphone used by the audience is labeled microphone No. 2. If an audience member needs to ask a question during the meeting, the operator sends the following command signal to the user terminal through the controlling terminal: set the audio collecting device to microphone No. 2. Microphone No. 2 can then acquire the audience member's audio data.
In another embodiment, the user terminal is equipped with another audio output unit. The user terminal obtains a second command signal sent by the controlling terminal and, according to the second command signal, captures audio data from the audio output unit it carries. Optionally, the second command signal may also be generated by the user terminal itself.
Specifically, the user terminal uses the audio output of another audio output unit as the source of the audio data. If the user terminal runs the Windows operating system, it can obtain the audio output data of other audio output units through the Windows Audio Session API (WASAPI); if the user terminal runs the Linux operating system, it can obtain the audio output data of other programs through PulseAudio. PulseAudio is a sound server: a background process that accepts sound input from one or more sources (processes or input devices) and redirects it to one or more sinks (sound cards, remote PulseAudio servers, or other processes).
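As a rough sketch of how the capture path could be selected per platform: the Linux branch uses PulseAudio's `parec` utility recording from a sink's `.monitor` source (every PulseAudio sink exposes one, carrying its output), while the Windows WASAPI loopback path is only indicated, since it is normally reached through a native audio library rather than a command line. The default sink name is an assumption for illustration.

```python
def build_capture_command(platform: str, sink: str = "alsa_output.default") -> list:
    """Return the command line used to capture another program's audio output."""
    if platform.startswith("linux"):
        # PulseAudio: record from the sink's monitor source.
        return ["parec", "-d", f"{sink}.monitor",
                "--format=s16le", "--rate=16000", "--channels=1"]
    if platform.startswith("win"):
        # Windows: WASAPI loopback capture is opened through a native audio
        # library; there is no single CLI equivalent to parec.
        raise NotImplementedError("use a WASAPI loopback client")
    raise ValueError(f"unsupported platform: {platform}")

print(build_capture_command("linux"))
```

In a real deployment the returned command would be run as a subprocess and its stdout streamed to the speech processing device.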
In a possible conference scenario, the speaker plays an English video that has no subtitles. The operator sends a command such as the following to the user terminal through the controlling terminal: set the audio input source to capture from the video player. While the video plays, Chinese and English subtitles are then shown on the user terminal screen, where the Chinese subtitle is in the target language specified by the user at the user terminal. Compared with acquiring audio through an audio collecting device, this scheme captures the audio data directly, with little distortion, which helps improve voice translation accuracy.
S202: sending the audio data and the language text to the controlling terminal, so that the controlling terminal verifies the language text against the audio data and generates a modification rule when the verification determines that the language text is wrong.
In this embodiment, the user terminal sends the audio data and the language text to the controlling terminal. Optionally, the user terminal may also send the controlling terminal the number of the user terminal, the number of the audio collecting device, the display data of the language text, and the dimensions of the display unit of the user terminal. The display data of the language text include the final effect of the language text as displayed on the user terminal.
Optionally, the modification rule is returned from the controlling terminal by an operator.
S203: receiving the modification rule sent by the controlling terminal, and calibrating the language text according to the modification rule.
In this embodiment, the modification rule covers additions, modifications, or deletions of the language text.
Optionally, the user terminal calibrates the language text according to the modification rule and displays the calibrated language text.
In one embodiment, after receiving the language text converted by the speech processing device, the user terminal calibrates it according to the user terminal's current first modification rule, obtains and displays the calibrated language text, and at the same time sends the calibrated language text to the controlling terminal, so that the controlling terminal verifies the calibrated language text and, when the verification determines that it is wrong, generates a second modification rule. The user terminal receives the second modification rule and uses it to calibrate the next converted language text.
In another embodiment, after receiving the language text converted by the speech processing device, the user terminal sends the language text to the controlling terminal, so that the controlling terminal verifies it and generates a modification rule when the verification determines that the language text is wrong. The user terminal receives the modification rule, calibrates the language text, and displays it.
In yet another embodiment, after receiving the language text converted by the speech processing device, the user terminal sends the language text to the controlling terminal, so that the controlling terminal verifies it and generates a modification rule when the verification determines that the language text is wrong. The user terminal receives the modification rule, calibrates the language text, and displays it. Meanwhile, the user terminal sends the modification rule to the speech processing device, so that the speech processing device calibrates the next converted speech text according to the modification rule.
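S203 states that a modification rule covers additions, modifications, and deletions of the language text. One hedged way to encode and apply such rules is shown below; the dictionary format is an assumption for illustration, not the patent's actual rule representation.

```python
def apply_rules(text: str, rules: list) -> str:
    """Apply modification rules to a language text.

    Each rule is a dict, one of:
      {"op": "modify", "old": ..., "new": ...}   - replace a fragment
      {"op": "delete", "old": ...}               - remove a fragment
      {"op": "add",    "after": ..., "new": ...} - insert text after a fragment
    """
    for rule in rules:
        if rule["op"] == "modify":
            text = text.replace(rule["old"], rule["new"])
        elif rule["op"] == "delete":
            text = text.replace(rule["old"], "")
        elif rule["op"] == "add":
            text = text.replace(rule["after"], rule["after"] + rule["new"])
    return text

rules = [
    {"op": "modify", "old": "Dichromatic", "new": "dual-camera"},
    {"op": "delete", "old": " um,"},
]
print(apply_rules("The um, Dichromatic phone", rules))  # -> The dual-camera phone
```

Because the same function can run at the user terminal or at the speech processing device, it fits all three embodiments above.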
In the voice translation method provided by this embodiment of the invention, the user terminal obtains audio data and the language text that the speech processing device converts from the audio data; sends the audio data and the language text to the controlling terminal, so that the controlling terminal verifies the language text against the audio data and generates a modification rule when the verification determines that the language text is wrong; and receives the modification rule sent by the controlling terminal and calibrates the language text accordingly, obtaining a calibrated language text. This improves the accuracy of the translated language text, and because the modification rule can be sent and adjusted in real time, the language text is calibrated in real time without affecting the normal progress of the meeting.
In a practical application scenario, the user specifies the language of the language text, and the user terminal sends that language to the speech processing device. Converting audio data into language text at the speech processing device involves two steps, recognition and translation: the speech processing device first recognizes the audio data to obtain a first language text in the same language as the audio data, and then translates the first language text to obtain a second language text in the specified language.
The user terminal obtains and displays the language text that the speech processing device converts from the audio data, where the language text includes the first language text and the second language text. It should be understood that the displayed language text is the first language text and/or the second language text.
The calibration of the language text can be based on either the first language text or the second language text. The calibration process is described in detail below through the embodiment shown in Fig. 3.
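The two-step conversion described above — recognition into a first language text in the audio's own language, then translation into a second language text in the user-specified language — can be sketched as follows. The stub lexicon stands in for real ASR and MT models and is an assumption for illustration only.

```python
def recognize(audio: bytes) -> str:
    # Stub ASR: the first language text is in the same language as the audio.
    return "bonjour le monde"

def translate(first_text: str, target_lang: str) -> str:
    # Stub MT: a word-by-word lexicon, purely illustrative.
    assert target_lang == "en"
    lexicon = {"bonjour": "hello", "le": "the", "monde": "world"}
    return " ".join(lexicon.get(w, w) for w in first_text.split())

def convert(audio: bytes, target_lang: str):
    first = recognize(audio)                # step 1: speech recognition
    second = translate(first, target_lang)  # step 2: translation
    return first, second                    # either or both may be displayed

print(convert(b"...", "en"))  # -> ('bonjour le monde', 'hello the world')
```

Keeping both texts around is what makes the two calibration variants below possible: a rule can target the recognition output or the translation output independently.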
Fig. 3 is a flow diagram of a voice translation method provided by a further embodiment of the invention. The language text includes a second language text translated from a first language text, where the first language text is the text obtained by recognizing the audio data, and the modification rule is a recognition calibration rule for calibrating the first language text.
As shown in Fig. 3, the method may further include:
S301: calibrating the first language text according to the modification rule to obtain a calibrated first language text. In this embodiment, the modification rule is a recognition calibration rule for calibrating the first language text, and the recognition calibration rule includes at least one of the following: additions, modifications, and deletions of the first language text.
S302: sending the calibrated first language text to the speech processing device, so that the speech processing device translates the calibrated first language text to obtain a calibrated language text.
S303: receiving the calibrated language text sent by the speech processing device.
Translating audio data involves two steps, speech recognition and translation, both performed by the speech processing device. The speech processing device performs speech recognition on the audio data to obtain the first language text, which is in the same language as the audio data, and translates the first language text to obtain the second language text, which is in the target language. In this embodiment, the user terminal calibrates the first language text to obtain a calibrated first language text, and the speech processing device translates the calibrated first language text to obtain the final second language text.
Optionally, if the modification rule is a translation calibration rule for calibrating the second language text, the user terminal can calibrate the second language text according to the modification rule to obtain the calibrated language text.
In the voice translation method provided by this embodiment of the invention, multiple user terminals each receive the modification rules that match them and calibrate independently, which greatly reduces the workload of the speech processing device, improves the efficiency of voice translation calibration, and adapts more promptly and accurately to different voice translation scenarios. Furthermore, the first language text and the second language text can be calibrated separately, which makes it convenient to collect statistics on the errors in the speech processing device's speech recognition and to optimize subsequent speech recognition.
Optionally, the calibration of the speech text can be performed by the speech processing device. In one embodiment, the user terminal sends the modification rule to the speech processing device, so that the speech processing device calibrates the next converted speech text according to the modification rule.
If the modification rule is a recognition calibration rule, then in subsequent speech recognition and translation the speech processing device calibrates the first language text obtained by directly recognizing the audio data according to the recognition calibration rule, translates the calibrated first language text to obtain a calibrated second language text, and sends both the calibrated first language text and the calibrated second language text to the user terminal. It should be understood that this calibration occurs after speech recognition and before text translation.
If the modification rule is a translation calibration rule, then in subsequent speech recognition and translation the speech processing device calibrates the second language text directly according to the translation calibration rule. In this case, the calibration occurs after text translation and before the result is sent to the user terminal.
To illustrate this embodiment clearly, consider a possible multi-person conference scenario in which the speaker speaks Chinese, the first language text is Chinese, and the second language text is English. A speaker mentions "双摄" ("dual camera"), meaning a device with two cameras, but the user terminal displays "Dichromatic" (the English for "双色", "two-color"): the speech processing device made a mistake at the speech recognition stage, recognizing "双摄" as "双色", which made the subsequent translation inaccurate. The user terminal sends the displayed language text and the audio data to the controlling terminal. Seeing this mistake, the operator judges from the audio data that the error occurred when obtaining the first language text, and adds the following recognition calibration rule at the controlling terminal. The user terminal receives this modification rule and sends it to the speech processing device, so that the speech processing device performs the calibration.
Recognition calibration rule: in the next round of speech processing, revise "双色" in the first language text to "双摄".
The speech processing device recognizes the audio data to obtain the first language text, calibrates it according to the recognition calibration rule by revising "双色" to "双摄", and then translates "双摄", obtaining the second language text "Double cell". The user terminal obtains and displays this second language text. Thereafter, when the speaker mentions "双摄", "Double cell" is shown in the subtitles.
Optionally, if the modification rule is a translation calibration rule, the revision is made after the second language text is obtained.
In the voice translation method provided by this embodiment of the invention, the speech processing device performs the calibration of the language text, and the calibration is split between the first language text and the second language text. This allows statistics to be collected on speech recognition errors, making it convenient to optimize subsequent speech recognition.
Fig. 4 is a flow diagram of a voice translation method provided by a further embodiment of the invention. On the basis of the above embodiments, this embodiment adds display calibration of the language text. For example, on the basis of the embodiment shown in Fig. 2, the method may also include:
S401: obtaining the display data of the language text on the local machine, where the display data include a display picture or a display video of the language text on the local machine.
In one embodiment, the display picture of the language text is obtained by taking screenshots at sampled times. In another embodiment, the display video of the language text is obtained as a video stream. In practice, either mode can be selected according to the network bandwidth.
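The bandwidth-based choice between the two capture modes could be as simple as a threshold test; the threshold value below is an assumption, not something the patent specifies.

```python
def select_capture_mode(bandwidth_kbps: float, threshold_kbps: float = 512.0) -> str:
    """Pick how the display data are sent to the controlling terminal.

    Below the (assumed) threshold, periodic screenshots are cheaper;
    above it, a continuous video stream gives the operator more context.
    """
    return "video_stream" if bandwidth_kbps >= threshold_kbps else "screenshots"

print(select_capture_mode(200))   # -> screenshots
print(select_capture_mode(2000))  # -> video_stream
```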
S402: sending the display data to the controlling terminal, so that the controlling terminal generates a display parameter adjustment instruction according to the display data.
In this embodiment, the number and size of the display devices of the user terminal can also be sent to the controlling terminal.
S403: receiving the display parameter adjustment instruction sent by the controlling terminal, and adjusting the display data of the language text on the local machine according to the display parameter adjustment instruction.
In one embodiment, the user terminal adjusts, according to the display parameter adjustment instruction, the window used on the local machine to display the language text, where the adjustment of the window includes at least one of the following: the size, position, background color, and background transparency of the window.
In another embodiment, the user terminal adjusts, according to the display parameter adjustment instruction, at least one of the following properties of the language text: font size, font, color, transparency, and the time the language text stays on screen.
Optionally, the language text is displayed in a translucent, always-on-top window, which does not interfere with other presentation programs running on the user terminal while still ensuring that the user can read the language text clearly.
In one possible conference scenario, audience members in the back rows indicate that they cannot read the language text. Responding to this demand, the operator sends the following display parameter adjustment instruction to the user terminal through the controlling terminal: adjust the font size of the language text to 32px. The user terminal receives the display parameter adjustment instruction and adjusts the language text to 32px accordingly, thereby solving the technical problem that a poorly displayed language text disrupts the meeting.
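Applying such an instruction at the user terminal can be sketched as a whitelist-filtered update of the local display settings. The key names below are illustrative assumptions; the embodiment only enumerates the adjustable properties, not a message format:

```python
def apply_display_adjustment(display_params, instruction):
    """Apply a display-parameter adjustment instruction from the
    controlling terminal to the local display settings.

    Only properties the embodiment names (window size/position/background,
    font size/face/color, transparency, dwell time) are accepted;
    unrecognized keys in the instruction are ignored.
    """
    allowed = {"window_size", "window_position", "background_color",
               "background_opacity", "font_size", "font", "font_color",
               "text_opacity", "dwell_time_s"}
    updated = dict(display_params)
    for key, value in instruction.items():
        if key in allowed:
            updated[key] = value
    return updated
```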
The voice translation method provided in this embodiment of the present invention solves the technical problem that the diversity of user terminal display devices leads to poor display of the language text, such as improper text size, improper display area, or improper text color caused by device color differences. Moreover, adjusting the display of the language text does not interrupt the progress of the meeting, which greatly improves the practical effect of voice translation.
Fig. 5 is a flow diagram of the voice translation method provided by yet another embodiment of the present invention. The execution subject of this embodiment is the controlling terminal in the embodiment shown in Fig. 1. As shown in Fig. 5, the method includes:
S501: receive the audio data sent by the user terminal and the language text converted from the audio data;
S502: verify the language text according to the audio data, and generate a modification rule when the verification determines that the language text is wrong;
S503: send the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
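The verify-and-correct step S502 is not pinned to a particular algorithm in this embodiment. A minimal Python sketch, assuming an operator at the controlling terminal supplies a verified transcript and that a modification rule is represented as a list of (wrong, corrected) replacement pairs, could diff the two texts word by word:

```python
import difflib

def generate_modification_rule(language_text, verified_text):
    """Compare the recognized language text with the operator-verified
    text and, when they differ, emit a modification rule as a list of
    (wrong, corrected) replacement pairs; return None when the language
    text passes verification unchanged."""
    wrong_words = language_text.split()
    right_words = verified_text.split()
    rules = []
    matcher = difflib.SequenceMatcher(a=wrong_words, b=right_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            rules.append((" ".join(wrong_words[i1:i2]),
                          " ".join(right_words[j1:j2])))
    return rules or None
```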
For the specific implementation principles and processes of the method in this embodiment, refer to any of the above embodiments; details are not repeated here.
In the voice translation method provided in this embodiment, the audio data sent by the user terminal and the language text converted from that audio data are received; the language text is verified according to the audio data, and a modification rule is generated when the verification determines that the language text is wrong; the modification rule is sent to the user terminal, so that the user terminal calibrates the language text according to it. This effectively improves the translation accuracy of the language text, enables real-time calibration of the language text without disrupting the normal course of the meeting, and supports multiple user terminals simultaneously, which suits scenarios with multiple speakers or multiple cooperating user terminals.
Fig. 6 is an interaction signaling diagram of the voice translation method provided by an embodiment of the present invention. As shown in Fig. 6, the method may include:
S601: the user terminal obtains audio data.
S602: the user terminal sends the audio data to the speech processing device.
S603: the speech processing device converts the audio data into language text.
S604: the speech processing device sends the language text to the user terminal.
S605: the user terminal sends the audio data and the language text to the controlling terminal.
S606: the controlling terminal verifies the language text according to the audio data, and generates a modification rule when the verification determines that the language text is wrong.
S607: the controlling terminal sends the modification rule to the user terminal.
S608: the user terminal calibrates the language text according to the modification rule.
For the specific implementation of this embodiment, refer to Fig. 1 and the embodiment shown in Fig. 5; details are not repeated here.
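The calibration performed at the user terminal in step S608 can be sketched as follows, again assuming the illustrative rule format of ordered (wrong, corrected) replacement pairs; the embodiment does not fix the rule's concrete representation:

```python
def calibrate(language_text, modification_rule):
    """Apply a modification rule to the language text (step S608).

    modification_rule is assumed to be an ordered list of
    (wrong, corrected) replacement pairs, applied in sequence.
    """
    for wrong, corrected in modification_rule:
        language_text = language_text.replace(wrong, corrected)
    return language_text
```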
Fig. 7 is an interaction signaling diagram of the voice translation method provided by another embodiment of the present invention. As shown in Fig. 7, the method may include:
S701: the user terminal obtains audio data.
S702: the user terminal sends the audio data to the speech processing device.
S703: the speech processing device converts the audio data into language text.
S704: the speech processing device sends the language text to the user terminal.
S705: the user terminal obtains display data of the language text on the local machine.
S706: the user terminal sends the audio data, the language text, and the display data to the controlling terminal.
S707: the controlling terminal verifies the language text according to the audio data, and generates a modification rule when the verification determines that the language text is wrong; meanwhile, the controlling terminal generates a display parameter adjustment instruction according to the display data.
It should be understood that the controlling terminal checks the display data and generates the display parameter adjustment instruction when the display effect of the display data is poor.
S708: the controlling terminal sends the modification rule and the display parameter adjustment instruction to the user terminal.
S709: the user terminal adjusts the display data of the language text on the local machine according to the display parameter adjustment instruction.
S710: the user terminal sends the modification rule to the speech processing device.
S711: the speech processing device calibrates the next converted language text according to the modification rule.
For the specific implementation of this embodiment, refer to Fig. 1, Fig. 4, and the embodiment shown in Fig. 5; details are not repeated here.
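Steps S710 and S711 have the speech processing device retain the forwarded modification rules and apply them to subsequently recognized text. A minimal stand-in, under the same assumption that a rule is a list of (wrong, corrected) pairs, might look like this:

```python
class SpeechProcessor:
    """Minimal stand-in for the speech processing device: it stores
    modification rules received from user terminals (S710) and applies
    them to each subsequently converted text (S711)."""

    def __init__(self):
        self.rules = []

    def add_modification_rule(self, rule):
        # Rule format of (wrong, corrected) pairs is an assumption.
        self.rules.extend(rule)

    def recognize(self, raw_text):
        # A real device would run speech recognition here; this sketch
        # takes the recognized text as given and only applies the
        # cached calibration rules to it.
        for wrong, corrected in self.rules:
            raw_text = raw_text.replace(wrong, corrected)
        return raw_text
```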
Based on the voice translation methods provided in the above embodiments, the embodiments of the present invention further provide apparatus embodiments that implement the above methods, with the user terminal and the controlling terminal as the respective execution subjects.
Fig. 8 is a structural schematic diagram of the speech translation apparatus provided by an embodiment of the present invention. The speech translation apparatus 80 is applied to a user terminal. As shown in Fig. 8, the speech translation apparatus includes an obtaining module 810, a first sending module 820, and a calibration module 830.
The obtaining module 810 is configured to obtain audio data and the language text that the speech processing device converts from the audio data.
The first sending module 820 is configured to send the audio data and the language text to the controlling terminal, so that the controlling terminal verifies the language text according to the audio data and generates a modification rule when the verification determines that the language text is wrong.
The calibration module 830 is configured to receive the modification rule sent by the controlling terminal and calibrate the language text according to the modification rule.
In the speech translation apparatus provided in this embodiment of the present invention, the obtaining module obtains audio data and the language text that the speech processing device converts from the audio data; the first sending module sends the audio data and the language text to the controlling terminal, so that the controlling terminal verifies the language text according to the audio data and generates a modification rule when the verification determines that the language text is wrong; the calibration module receives the modification rule sent by the controlling terminal and calibrates the language text according to it, obtaining a calibrated language text. This improves the accuracy of the language text translation result, and because modification rules can be sent and adjusted in real time, the language text can be calibrated in real time without disrupting the normal course of the meeting.
Optionally, the language text includes a first language text obtained by performing speech recognition on the audio data. The calibration module 830 is specifically configured to: calibrate the first language text according to the modification rule to obtain a calibrated first language text; send the calibrated first language text to the speech processing device, so that the speech processing device translates the calibrated first language text to obtain a calibrated language text; and receive the calibrated language text sent by the speech processing device.
Optionally, the language text includes a second language text translated from the first language text, wherein the first language text is a text obtained by recognizing the audio data. The calibration module 830 is further specifically configured to calibrate the second language text according to the modification rule to obtain a calibrated language text.
Optionally, the apparatus further includes a third sending module (not shown in Fig. 8), configured to send the modification rule to the speech processing device, so that the speech processing device calibrates the next converted language text according to the modification rule.
Optionally, the apparatus further includes a display adjustment module (not shown in Fig. 8), specifically configured to: obtain display data of the language text on the local machine, wherein the display data include a display picture or a display video of the language text on the local machine; send the display data to the controlling terminal, so that the controlling terminal generates a display parameter adjustment instruction according to the display data; and receive the display parameter adjustment instruction sent by the controlling terminal and adjust the display data of the language text on the local machine according to it.
The display adjustment module is further specifically configured to adjust, according to the display parameter adjustment instruction, the window used on the local machine to display the language text, wherein the adjustment of the window includes at least one of the following: window size, position, background color, and background transparency; and/or to adjust at least one of the following properties of the language text according to the display parameter adjustment instruction: font size, font, color, transparency, and the dwell time of the language text.
Optionally, the obtaining module 810 is specifically configured to send a first command signal to the audio capture device carried on the local machine, so that the audio capture device acquires audio data according to the first command signal; or to receive a second command signal and, according to the second command signal, acquire audio data from the audio output unit carried on the local machine, wherein the second command signal originates from the controlling terminal or the local machine.
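The two acquisition paths can be sketched as a small dispatcher. The returned descriptors and string constants are illustrative assumptions; the embodiment does not define a concrete message format:

```python
def acquire_audio(source, command_origin=None):
    """Dispatch audio acquisition along the two paths the embodiment
    names: send a first command signal to the local audio capture device
    (e.g. a microphone), or act on a second command signal — which
    originates from the controlling terminal or the local machine — to
    capture from the local audio output unit (playback loopback)."""
    if source == "capture_device":
        return {"signal": "first", "device": "microphone"}
    if source == "audio_output":
        if command_origin not in ("controlling_terminal", "local"):
            raise ValueError("second command signal must originate from "
                             "the controlling terminal or the local machine")
        return {"signal": "second", "device": "loopback",
                "origin": command_origin}
    raise ValueError("unknown audio source: " + source)
```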
The speech translation apparatus 80 of the embodiment shown in Fig. 8 can be used to execute the technical solutions of the above methods with the user terminal as the execution subject; details are not repeated in this embodiment.
Fig. 9 is a structural schematic diagram of the speech translation apparatus provided by another embodiment of the present invention. The speech translation apparatus 90 is applied to a controlling terminal. As shown in Fig. 9, the speech translation apparatus includes a first receiving module 910, a verification module 920, and a second sending module 930.
The first receiving module 910 is configured to receive the audio data sent by the user terminal and the language text converted from the audio data.
The verification module 920 is configured to verify the language text according to the audio data and to generate a modification rule when the verification determines that the language text is wrong.
The second sending module 930 is configured to send the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
The speech translation apparatus 90 of the embodiment shown in Fig. 9 can be used to execute the technical solutions of the above methods with the controlling terminal as the execution subject; details are not repeated in this embodiment.
It should be understood that the division of the modules of the speech translation apparatuses shown in Fig. 8 and Fig. 9 above is only a division of logical functions. In an actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented as software invoked through processing elements, all as hardware, or partly as software invoked through processing elements and partly as hardware. Furthermore, these modules may be fully or partially integrated together, or implemented independently. The processing element described here may be an integrated circuit with signal-processing capability. During implementation, each step of the above methods, or each of the above modules, can be completed by a hardware integrated logic circuit in a processor element, or by instructions in the form of software.
Fig. 10 is a hardware structural diagram of the speech translation device provided by an embodiment of the present invention. As shown in Fig. 10, the speech translation device 100 provided by this embodiment includes: at least one memory 101, a processor 102, and a computer program; wherein the computer program is stored in the memory 101 and is configured to be executed by the processor 102 to implement the above voice translation method with the user terminal as the execution subject. The speech translation device 100 further includes a communication component. The processor 102, the memory 101, and the communication component are connected through a bus.
Those skilled in the art will understand that Fig. 10 is only an example of the speech translation device and does not constitute a limitation on it; the speech translation device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the speech translation device may also include input/output devices, network access devices, buses, and so on. In this embodiment, the speech translation device includes at least one audio capture device and an image display unit.
Fig. 11 is a hardware structural diagram of the control device provided by an embodiment of the present invention. As shown in Fig. 11, the control device 110 includes: at least one memory 111, a processor 112, and a computer program; wherein the computer program is stored in the memory and is configured to be executed by the processor to implement the above voice translation method with the controlling terminal as the execution subject.
In addition, an embodiment of the present invention provides a readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the method described in any of the above implementations with the user terminal as the execution subject, or the method described in any of the above implementations with the controlling terminal as the execution subject.
The above readable storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination of them, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc. The readable storage medium may be any usable medium that a general-purpose or dedicated computer can access.
Fig. 12 is a hardware structural diagram of the speech translation system provided by an embodiment of the present invention. As shown in Fig. 12, the speech translation system 120 includes a speech translation device 100, a speech processing device 20, and a control device 110; wherein the speech processing device 20 is used for the recognition and translation of the audio data. The speech translation device 100 may be the speech translation device in the embodiment of Fig. 10 described above; the control device 110 may be the control device in the embodiment of Fig. 11 described above.
For parts not described in detail in this embodiment, refer to the relevant descriptions of the corresponding method embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that it is still possible to modify the technical solutions described in the foregoing embodiments, or to replace some or all of their technical features with equivalents; such modifications and replacements do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention.
Claims (14)
1. A voice translation method, characterized by comprising:
obtaining audio data and a language text that a speech processing device converts from the audio data;
sending the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text according to the audio data and generates a modification rule when the verification determines that the language text is wrong; and
receiving the modification rule sent by the controlling terminal, and calibrating the language text according to the modification rule.
2. The method according to claim 1, wherein the language text comprises a first language text obtained by performing speech recognition on the audio data; and
the calibrating the language text according to the modification rule comprises:
calibrating the first language text according to the modification rule to obtain a calibrated first language text;
sending the calibrated first language text to the speech processing device, so that the speech processing device translates the calibrated first language text to obtain a calibrated language text; and
receiving the calibrated language text sent by the speech processing device.
3. The method according to claim 1, wherein the language text comprises a second language text translated from a first language text, the first language text being a text obtained by recognizing the audio data; and
the calibrating the language text according to the modification rule comprises:
calibrating the second language text according to the modification rule to obtain a calibrated language text.
4. The method according to claim 1, wherein, after the calibrating the language text according to the modification rule, the method further comprises:
sending the modification rule to the speech processing device, so that the speech processing device calibrates a next converted language text according to the modification rule.
5. The method according to claim 1, wherein, after the obtaining audio data and a language text that a speech processing device converts from the audio data, the method further comprises:
obtaining display data of the language text on the local machine, wherein the display data comprise a display picture or a display video of the language text on the local machine;
sending the display data to the controlling terminal, so that the controlling terminal generates a display parameter adjustment instruction according to the display data; and
receiving the display parameter adjustment instruction sent by the controlling terminal, and adjusting the display data of the language text on the local machine according to the display parameter adjustment instruction.
6. The method according to claim 5, wherein the adjusting the display data of the language text on the local machine according to the display parameter adjustment instruction comprises:
adjusting, according to the display parameter adjustment instruction, a window used on the local machine to display the language text, wherein the adjustment of the window comprises at least one of the following: window size, position, background color, and background transparency;
and/or
adjusting at least one of the following properties of the language text according to the display parameter adjustment instruction: font size, font, color, transparency, and a dwell time of the language text.
7. The method according to claim 1, wherein the obtaining audio data comprises:
sending a first command signal to an audio capture device carried on the local machine, so that the audio capture device acquires audio data according to the first command signal;
or
obtaining a second command signal, and acquiring audio data from an audio output unit carried on the local machine according to the second command signal, wherein the second command signal originates from the controlling terminal or the local machine.
8. A voice translation method, characterized by comprising:
receiving audio data sent by a user terminal and a language text converted from the audio data;
verifying the language text according to the audio data, and generating a modification rule when the verification determines that the language text is wrong; and
sending the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
9. A speech translation apparatus, characterized by comprising:
an obtaining module, configured to obtain audio data and a language text that a speech processing device converts from the audio data;
a first sending module, configured to send the audio data and the language text to a controlling terminal, so that the controlling terminal verifies the language text according to the audio data and generates a modification rule when the verification determines that the language text is wrong; and
a calibration module, configured to receive the modification rule sent by the controlling terminal and calibrate the language text according to the modification rule.
10. A speech translation apparatus, characterized by comprising:
a second receiving module, configured to receive audio data sent by a user terminal and a language text converted from the audio data;
a verification module, configured to verify the language text according to the audio data and generate a modification rule when the verification determines that the language text is wrong; and
a second sending module, configured to send the modification rule to the user terminal, so that the user terminal calibrates the language text according to the modification rule.
11. A speech translation device, characterized by comprising a memory and a processor; wherein:
the memory is configured to store instructions executable by the processor; and
the processor is configured to execute the executable instructions to implement the method according to any one of claims 1 to 7.
12. A control device, characterized by comprising a memory and a processor; wherein:
the memory is configured to store instructions executable by the processor; and
the processor is configured to execute the executable instructions to implement the method according to claim 8.
13. A computer-readable storage medium, characterized in that computer-executable instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer-executable instructions implement the method according to any one of claims 1 to 7, or the method according to claim 8.
14. A speech translation system, characterized by comprising:
a speech processing device, the speech translation device according to claim 11, and the control device according to claim 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910154764.9A CN110047488B (en) | 2019-03-01 | 2019-03-01 | Voice translation method, device, equipment and control equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110047488A true CN110047488A (en) | 2019-07-23 |
CN110047488B CN110047488B (en) | 2022-04-12 |
Family
ID=67274331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910154764.9A Active CN110047488B (en) | 2019-03-01 | 2019-03-01 | Voice translation method, device, equipment and control equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110047488B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110457717A (en) * | 2019-08-07 | 2019-11-15 | 深圳市博音科技有限公司 | Remote translating system and method |
CN111681643A (en) * | 2020-05-29 | 2020-09-18 | 标贝(北京)科技有限公司 | Speech recognition post-processing method, device, system and storage medium |
CN113591491A (en) * | 2020-04-30 | 2021-11-02 | 阿里巴巴集团控股有限公司 | System, method, device and equipment for correcting voice translation text |
CN113867665A (en) * | 2021-09-17 | 2021-12-31 | 珠海格力电器股份有限公司 | Display language modification method and device, electrical equipment and terminal equipment |
CN113891168A (en) * | 2021-10-19 | 2022-01-04 | 北京有竹居网络技术有限公司 | Subtitle processing method, subtitle processing device, electronic equipment and storage medium |
CN115086753A (en) * | 2021-03-16 | 2022-09-20 | 北京有竹居网络技术有限公司 | Live video stream processing method and device, electronic equipment and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070016401A1 (en) * | 2004-08-12 | 2007-01-18 | Farzad Ehsani | Speech-to-speech translation system with user-modifiable paraphrasing grammars |
US20070276649A1 (en) * | 2006-05-25 | 2007-11-29 | Kjell Schubert | Replacing text representing a concept with an alternate written form of the concept |
WO2008137341A1 (en) * | 2007-05-07 | 2008-11-13 | Microsoft Corporation | Document translation system |
US20090055158A1 (en) * | 2007-08-21 | 2009-02-26 | Kabushiki Kaisha Toshiba | Speech translation apparatus and method |
KR20090061158A (en) * | 2007-12-11 | 2009-06-16 | 한국전자통신연구원 | Method and apparatus for correcting of translation error by using error-correction pattern in a translation system |
CN101458681A (en) * | 2007-12-10 | 2009-06-17 | 株式会社东芝 | Voice translation method and voice translation apparatus |
KR20100138194A (en) * | 2009-06-24 | 2010-12-31 | 엔에이치엔(주) | System and method for recommendding japanese language automatically using tranformatiom of romaji |
CN102959537A (en) * | 2010-06-25 | 2013-03-06 | 乐天株式会社 | Machine translation system and method of machine translation |
US20130132079A1 (en) * | 2011-11-17 | 2013-05-23 | Microsoft Corporation | Interactive speech recognition |
CN104516876A (en) * | 2013-09-30 | 2015-04-15 | 株式会社东芝 | Speech translation system and speech translation method |
CN106445926A (en) * | 2013-10-23 | 2017-02-22 | 日耀有限公司 | Translation support system identification control method and server |
CN107632982A (en) * | 2017-09-12 | 2018-01-26 | 郑州科技学院 | The method and apparatus of voice controlled foreign language translation device |
CN107844481A (en) * | 2017-11-21 | 2018-03-27 | 新疆科大讯飞信息科技有限责任公司 | Text recognition error detection method and device |
CN108399166A (en) * | 2018-02-07 | 2018-08-14 | 深圳壹账通智能科技有限公司 | Text interpretation method, device, computer equipment and storage medium |
CN108615527A (en) * | 2018-05-10 | 2018-10-02 | 腾讯科技(深圳)有限公司 | Data processing method, device based on simultaneous interpretation and storage medium |
CN108710616A (en) * | 2018-05-23 | 2018-10-26 | 科大讯飞股份有限公司 | A kind of voice translation method and device |
Non-Patent Citations (2)
Title |
---|
Enrique Vidal et al., "Computer-Assisted Translation Using Speech Recognition", IEEE Transactions on Audio, Speech, and Language Processing * |
Wu Shuangzhi, "Research on Spoken-Language Text Normalization in Speech Translation", China Master's Theses Full-Text Database (Information Science and Technology) * |
Also Published As
Publication number | Publication date |
---|---|
CN110047488B (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110047488A (en) | Voice translation method, device, equipment and control equipment | |
CN108877770B (en) | Method, device and system for testing intelligent voice equipment | |
CN110473525B (en) | Method and device for acquiring voice training sample | |
WO2019237657A1 (en) | Method and device for generating model | |
US11527233B2 (en) | Method, apparatus, device and computer storage medium for generating speech packet | |
KR20180127136A (en) | Double-sided display simultaneous translation device, method and apparatus and electronic device | |
CN109635305B (en) | Voice translation method and device, equipment and storage medium | |
CN112135160A (en) | Virtual object control method and device in live broadcast, storage medium and electronic equipment | |
US20210304757A1 (en) | Removal of identifying traits of a user in a virtual environment | |
CN104407834A (en) | Message input method and device | |
CN112653902B (en) | Speaker recognition method and device and electronic equipment | |
US10970909B2 (en) | Method and apparatus for eye movement synthesis | |
CN108028044A (en) | Speech recognition system with reduced latency using multiple recognizers |
CN111223015B (en) | Course recommendation method and device and terminal equipment | |
CN108629241B (en) | Data processing method and data processing equipment | |
KR20210146636A (en) | Method and system for providing translation for conference assistance | |
CN110611842A (en) | Video transmission management method based on virtual machine and related device | |
CN112447168A (en) | Voice recognition system and method, sound box, display device and interaction platform | |
KR101637975B1 (en) | Speaking test system, device and method thereof | |
CN110335237B (en) | Method and device for generating model and method and device for recognizing image | |
CN108241404A (en) | Method, apparatus and electronic equipment for obtaining offline operation time |
CN109903054B (en) | Operation confirmation method and device, electronic equipment and storage medium | |
US10990351B2 (en) | Voice-based grading assistant | |
CN110717315A (en) | System data batch modification method and device, storage medium and electronic equipment | |
CN114840576A (en) | Data standard matching method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||