CN106528545A - Voice message processing method and device - Google Patents
- Publication number
- CN106528545A (application number CN201610912091.5A)
- Authority
- CN
- China
- Prior art keywords
- voice messaging
- translation
- translation strategy
- position information
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Abstract
The invention discloses a voice message processing method and device. The method comprises the following steps: acquiring voice information of a sound source and target position information; determining a target translation strategy according to the target position information; translating the voice information according to the target translation strategy to obtain translation information; and outputting the translation information. With this method, a user can have speech translated without repeatedly selecting a translation mode; operation is simple and conversation efficiency is high.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for processing voice information.
Background technology
With the rapid development of the world economy, exchanges between people who speak different languages are increasingly common. In a two-party dialogue scenario, if neither party understands the other's language, an interpreter is generally required to translate the conversation so that the two can communicate. Although translation by a human interpreter conveys the conversation accurately, it is costly.
To communicate at low cost, translation software is now more commonly used: during a dialogue, a microphone collects the user's speech, translation software analyzes it, the analyzed content is translated into a language specified by the user, and the translated result is played back as speech. This approach, however, has a major drawback: after each voice capture, the user must stop and manually select the required translation language, which makes the dialogue cumbersome to operate and inefficient.
Summary of the invention
An object of the present invention is to provide a method and device for processing voice information, so as to solve the technical problem that existing speech translation methods are cumbersome to operate and yield low dialogue efficiency.
To solve the above technical problem, an embodiment of the present invention provides the following technical solution:
A method for processing voice information, comprising:
acquiring voice information of a sound source and target position information;
determining a target translation strategy according to the target position information;
translating the voice information using the target translation strategy to obtain translation information; and
outputting the translation information.
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution:
A device for processing voice information, comprising:
an acquisition module, configured to acquire voice information of a sound source and target position information;
a determining module, configured to determine a target translation strategy according to the target position information;
a translation module, configured to translate the voice information using the target translation strategy to obtain translation information; and
an output module, configured to output the translation information.
In the method and device for processing voice information of the present invention, the voice information of a sound source and target position information are acquired, a target translation strategy is determined according to the target position information, the voice information is then translated using the target translation strategy to obtain translation information, and the translation information is output. Translation can thus be performed without the user repeatedly entering a translation mode; operation is simple and dialogue efficiency is high.
Description of the drawings
The technical solution and other beneficial effects of the present invention will become apparent from the following detailed description of specific embodiments, taken in conjunction with the accompanying drawings.
Fig. 1a is a schematic diagram of a scenario of a voice information processing system provided by an embodiment of the present invention.
Fig. 1b is a schematic flowchart of a voice information processing method provided by an embodiment of the present invention.
Fig. 2a is a schematic flowchart of a voice information processing method provided by an embodiment of the present invention.
Fig. 2b is a schematic diagram of a dual-microphone collection process provided by an embodiment of the present invention.
Fig. 3a is a schematic structural diagram of a voice information processing device provided by an embodiment of the present invention.
Fig. 3b is a schematic structural diagram of another voice information processing device provided by an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the invention.
An embodiment of the present invention provides a method, device and system for processing voice information.
Referring to Fig. 1a, the voice information processing system may include any voice information processing device provided by the embodiments of the present invention. The processing device may be integrated in a terminal, and the terminal may be a mobile phone, a tablet computer or another device with a translation function.
The terminal may acquire the voice information of a sound source and target position information, determine a target translation strategy according to the target position information, translate the voice information using the target translation strategy to obtain translation information, and output the translation information.
The sound source may be a person or any sound-producing body, for example a voice playback device in a video call. The target position information may refer to the position of the sound source relative to the terminal, and is mainly used to distinguish different speakers. The target translation strategy may be set according to actual requirements; it generally includes the original language to be translated and the target language to translate into. For example, if the target translation strategy is "translate Chinese into English", the original language is Chinese and the target language is English. When the two dialogue parties P1 and P2 stand on either side of the terminal, the terminal can judge who the current speaker is from the speaker's position relative to the terminal, select a suitable target translation strategy for translation, and further play back the translated content through a loudspeaker so that both P1 and P2 can hear it.
Detailed descriptions are given below. It should be noted that the numbering of the following embodiments does not imply a preferred order.
First embodiment
This embodiment is described from the perspective of the voice information processing device, which may be integrated in a terminal.
Referring to Fig. 1b, Fig. 1b describes the voice information processing method provided by the first embodiment of the present invention, which may include:
S101: acquire the voice information of a sound source and target position information.
In this embodiment, the sound source may be a person or any sound-producing body, for example a voice playback device in a video call. The voice information may include voice content, volume, timbre and other information. The target position information may refer to the position of the sound source relative to the terminal (or to built-in components of the terminal), and is mainly used to distinguish speakers at different positions. The voice information may be collected by a sound collection device; the target position information may be derived from the collected voice information, or detected by some detection means, for example sensed by an infrared device built into the terminal.
For example, step S101 may specifically include:
1-1: collecting the sound emitted by the sound source with a plurality of audio collection units respectively, to obtain a plurality of pieces of voice information with the same voice content.
In this embodiment, each audio collection unit may be a microphone, and the plurality of audio collection units may form a microphone array in which each unit has a different installation position in the terminal. The number of audio collection units may be set according to actual requirements, for example 2 or 3.
1-2: determining the target position information according to the voice information and the audio collection units.
For example, step 1-2 may specifically include:
obtaining the volume value of each piece of voice information and the identifier of each audio collection unit; and
determining the target position information according to the magnitudes of the volume values and the identifiers.
In this embodiment, the voice information may be digitally processed (for example by Fourier transform) to obtain the volume value. The identifier is mainly used to distinguish different audio collection units and may be assigned according to each unit's installation position in the terminal; for example, the identifiers may be set from left to right as M1, M2, and so on up to Mn. The target position information mainly refers to the position of the sound source relative to the audio collection units and may take many forms: it may be an azimuth such as "left", "middle" or "right", or an identifier such as M1, M2 or Mn, each identifier representing a position. To improve precision, it may also be an ordered identifier set such as M1M2M3 or M1M3M2.
It should be noted that since each audio collection unit has a different installation position in the terminal, and the closer a unit is to the sound source the louder the volume it collects, the pieces of voice information collected from the same sound source by the different units have identical content and timbre but different volumes. Therefore, knowing only the volume value collected by each audio collection unit is enough to determine the target position information of the sound source, i.e. the position of the sound source relative to the audio collection units can be determined from the volume values.
For example, the step of "determining the target position information according to the magnitudes of the volume values and the identifiers" may specifically include:
2-1: obtaining the identifier of the audio collection unit corresponding to the maximum volume value, or sorting the identifiers of the audio collection units by volume value to obtain a sorted identifier set; and
2-2: taking the obtained identifier, or the sorted identifier set, as the target position information.
In this embodiment, the identifiers may be sorted by volume value in descending or ascending order to obtain the sorted identifier set. Since the target position information can take several forms, it can also be determined in several ways. When it takes the form of an identifier or an ordered identifier set, it may directly be the obtained identifier or sorted identifier set. When it takes the form of azimuth information, the corresponding azimuth needs to be looked up in a preset azimuth information bank according to the obtained identifier or sorted identifier set. The azimuth information bank stores the associations between identifiers (or identifier sets) and azimuths, and may be set by the manufacturer before the terminal leaves the factory, for example: M1 or M1M2 corresponds to azimuth "left", M2 or M2M1 corresponds to azimuth "right", and so on.
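The factory-preset azimuth information bank can be sketched as a simple lookup table; the dictionary layout and names below are illustrative assumptions:

```python
# Preset associations between identifiers / identifier sets and azimuths,
# as a manufacturer might configure them before the terminal ships.
AZIMUTH_BANK = {
    "M1": "left", "M1M2": "left",
    "M2": "right", "M2M1": "right",
}

def to_azimuth(identifier_or_set):
    """Look up the azimuth corresponding to an identifier or sorted
    identifier set in the preset azimuth information bank."""
    return AZIMUTH_BANK[identifier_or_set]

print(to_azimuth("M2M1"))  # right
```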
S102: determine the target translation strategy according to the target position information.
In this embodiment, the target translation strategy may be set according to actual requirements; it generally includes the original language to be translated and the target language to translate into. For example, if the target translation strategy is "translate Chinese into English", the original language is Chinese and the target language is English.
For example, step S102 may specifically include:
selecting a corresponding translation strategy from an established translation strategy set according to the target position information; and
taking the selected translation strategy as the target translation strategy.
In this embodiment, the translation strategies in the set may be chosen according to actual requirements and may include "translate Chinese into English", "translate Japanese into English", "translate English into Chinese", and so on. In practice, the associations between translation strategies and position information in the set need to be established in advance. The position information in these associations may be detected by a device built into the terminal, such as a camera, or determined from user voice information collected by the plurality of audio collection units.
When the position information in the associations is determined from user voice information collected by the plurality of audio collection units, before step S101 the voice information processing method may further include:
collecting first-time voice information of the sound source with the audio collection units;
obtaining the current translation strategy input by the user; and
establishing the translation strategy set according to the first-time voice information and the current translation strategy.
In this embodiment, the first-time voice information may be the first speech collected after the terminal enables the speech translation function. To ensure the accuracy of subsequent position detection, the first-time voice information may consist of multiple voice segments, or of a voice segment of a specified duration.
For example, the step of "establishing the translation strategy set according to the first-time voice information and the current translation strategy" may specifically include:
obtaining the volume values of the first-time voice information;
determining current position information according to the volume values of the first-time voice information and the identifiers of the audio collection units; and
establishing the translation strategy set according to the current position information and the current translation strategy.
In this embodiment, the current position information may be determined in several ways. If it takes the form of an identifier or an identifier set, the identifier of the audio collection unit corresponding to the maximum volume value may be obtained, or the identifiers may be sorted by volume value in descending or ascending order to obtain a sorted identifier set; the obtained identifier or sorted identifier set is then the current position information, which needs to be stored in the translation strategy set. If the current position information takes the form of azimuth information, it may further be judged from the obtained identifier or sorted identifier set, for example by matching them against the azimuth information bank of step 2-2 above; the matched azimuth, such as "left" or "right", is the current position information.
In addition, the current position information may also be input manually by the user. For example, the terminal may display a position information selection box offering several options such as "left", "middle" and "right" for the user to choose from. Alternatively, the current position information may be detected by the terminal itself through a built-in device.
For example, the step of "establishing the translation strategy set according to the current position information and the current translation strategy" may specifically include:
establishing an association between the current position information and the current translation strategy; and
storing the association in the translation strategy set.
In that case, the step of "selecting a corresponding translation strategy from the established translation strategy set according to the target position information" may specifically include:
selecting the translation strategy corresponding to the target position information from the established translation strategy set according to the associations.
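Assuming the translation strategy set is represented as a mapping from position information to strategies (the names and azimuth-keyed layout below are illustrative, not the patent's implementation), the selection step can be sketched as:

```python
def select_strategy(target_position, strategy_set):
    """Select the translation strategy associated with the target
    position information from the established translation strategy set."""
    return strategy_set[target_position]

# Associations as they might exist after setup: azimuth -> strategy.
strategy_set = {
    "left": "translate English into Chinese",
    "right": "translate Chinese into English",
}
print(select_strategy("right", strategy_set))  # translate Chinese into English
```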
In this embodiment, if all dialogue parties (two or more) set their voice, position and translation strategy when first speaking, then thereafter, as long as each stays in place, whenever any party speaks the terminal can determine that user's target position information from the collected voice information and look up the corresponding translation strategy for translation. No manual selection by the user is required: operation is simple, the probability of the dialogue being interrupted is minimized, and communication fluency is improved.
S103: translate the voice information using the target translation strategy to obtain translation information.
In this embodiment, semantic analysis may first be performed on the voice information using the original language to be translated, and the analyzed meaning may then be expressed in the final target language to obtain the translation information.
S104: output the translation information.
In this embodiment, the translated content may be played back as speech through a device such as a loudspeaker so that the user can hear it. It should be noted that during playback, the plurality of audio collection units may refrain from voice collection.
As can be seen from the above, in the voice information processing method provided by this embodiment, the voice information of a sound source and target position information are acquired, a target translation strategy is determined according to the target position information, the voice information is then translated using the target translation strategy to obtain translation information, and the translation information is output. Compared with the prior art, in which the user must repeatedly select the translation mode manually, translation is achieved without manual operation: operation is simple, dialogue efficiency is high, and communication is fluent.
Second embodiment
The method described in the first embodiment is described in further detail below by way of example.
In this embodiment, a detailed description is given taking as an example the case where the voice information processing device is integrated in a terminal and two people participate in the dialogue.
As shown in Figs. 2a and 2b, a voice information processing method may proceed as follows:
S201: the terminal collects first-time voice information of a sound source with each of a plurality of audio collection units, and obtains the current translation strategy input by the user.
For example, the plurality of audio collection units may be a dual microphone, and the sound source may be either dialogue party P1 or P2. The first-time voice information may be a one-minute voice segment. During collection of the first-time voice information, P1 or P2 also needs to manually input the translation strategy he or she requires; for example, the terminal may provide a translation strategy selection box containing options such as "translate Chinese into English", "translate Japanese into English" and "translate English into Chinese" for the user to choose from.
S202: the terminal obtains the volume values of the first-time voice information and the identifier of each audio collection unit.
For example, the identifiers of the audio collection units may be labeled M1 and M2 in order from left to right. Each volume value is a mean volume; the values may include L1 = 30 decibels and L2 = 34 decibels, where M1 corresponds to volume value L1 and M2 corresponds to volume value L2.
S203: the terminal determines current position information according to the volume values and identifiers of the first-time voice information.
For example, if the current position information takes the form of an identifier or identifier set, the identifier M2 of the audio collection unit corresponding to the maximum volume value may be obtained, or the identifiers may be sorted by volume value in descending or ascending order to obtain the sorted identifier set M2M1; M2 or M2M1 is then the current position information.
If the current position information takes the form of azimuth information, such as "left", "middle" or "right", the user's current position may further be judged from the obtained identifier M2 or sorted identifier set M2M1, for example by matching M2 or M2M1 against the azimuth information bank to obtain the current position information "right". The azimuth information bank here stores the associations between identifiers (or identifier sets) and azimuths, and may be set by the manufacturer before the terminal leaves the factory, for example: M1 or M1M2 corresponds to azimuth "left", M2 or M2M1 corresponds to azimuth "right", and so on.
S204: the terminal establishes an association between the current position information and the current translation strategy, and stores the association in the translation strategy set.
For example, the current position information M2, M2M1 or "right" is associated with the current translation strategy "translate Chinese into English" and stored, and the current position information M1, M1M2 or "left" is associated with the current translation strategy "translate English into Chinese" and stored.
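Putting steps S202–S204 together, the setup phase might be sketched as follows, reusing the mean-volume figures of the example above; the function and data representation are illustrative assumptions, not the patent's implementation:

```python
def register_speaker(volumes, chosen_strategy, strategy_set):
    """Derive the current position information (sorted identifier set)
    from the first-time voice volumes and associate it with the
    translation strategy the user entered."""
    position = "".join(sorted(volumes, key=volumes.get, reverse=True))
    strategy_set[position] = chosen_strategy
    return position

strategy_set = {}
# First speaker, nearer M2 (L1 = 30 dB at M1, L2 = 34 dB at M2).
register_speaker({"M1": 30, "M2": 34}, "translate Chinese into English", strategy_set)
# Second speaker, nearer M1.
register_speaker({"M1": 33, "M2": 29}, "translate English into Chinese", strategy_set)
print(strategy_set)
# {'M2M1': 'translate Chinese into English', 'M1M2': 'translate English into Chinese'}
```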
S205: the terminal collects the voice information of the sound source with the audio collection units and obtains the volume value of each piece of voice information.
For example, once the terminal has established the translation strategy set, as long as each party stays in place, whenever either party starts to speak the terminal collects the voice information of that moment with the microphones and can immediately determine from it whether the current speaker is P1 or P2, so as to select the appropriate translation strategy; there is no need to wait until an entire voice segment has been collected, which is convenient and fast.
S206: the terminal determines the target position information according to the magnitudes of the volume values of the pieces of voice information and the identifiers.
For example, when the target position information takes the form of an identifier or an ordered identifier set, it may directly be the obtained identifier or sorted identifier set. When it takes the form of azimuth information, the corresponding azimuth needs to be looked up in the azimuth information bank according to the obtained identifier or sorted identifier set; for example, the azimuth found for M1 or M1M2 is "left", from which it can be judged that the current speaker is P1.
S207: the terminal selects the corresponding translation strategy from the translation strategy set according to the target position information as the target translation strategy.
For example, according to the target position information M1, M1M2 or "left", the terminal may determine from the translation strategy set that the target translation strategy is "translate English into Chinese".
S208: the terminal translates the voice information using the target translation strategy to obtain translation information, and outputs the translation information.
For example, the terminal may translate the English speech spoken by P1 into Chinese speech and play it back through a loudspeaker so that P2 can hear it.
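The per-utterance flow of steps S205–S208 can be sketched end to end. The toy translator below merely tags the text and stands in for the real translation engine; all names are illustrative assumptions:

```python
def handle_utterance(volumes, text, strategy_set, translate):
    """S205-S208: locate the speaker from the per-microphone volumes,
    look up the strategy registered at setup, and translate the text."""
    position = "".join(sorted(volumes, key=volumes.get, reverse=True))
    return translate(text, strategy_set[position])

def toy_translate(text, strategy):
    # Stand-in for semantic analysis plus target-language generation.
    return f"[{strategy}] {text}"

strategy_set = {
    "M2M1": "translate Chinese into English",
    "M1M2": "translate English into Chinese",
}
# The right-hand speaker (nearer M2) talks; the right-hand strategy applies.
out = handle_utterance({"M1": 31, "M2": 36}, "ni hao", strategy_set, toy_translate)
print(out)  # [translate Chinese into English] ni hao
```

Because the position-to-strategy lookup is established once at setup, no per-utterance user input is needed, which is the efficiency gain the embodiment claims.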
As can be seen from the above, in the voice information processing method provided by this embodiment, the terminal may collect first-time voice information of a sound source with each of a plurality of audio collection units and obtain the current translation strategy input by the user; then obtain the volume values of the first-time voice information and the identifier of each audio collection unit, and determine current position information from them; then establish an association between the current position information and the current translation strategy and store it in the translation strategy set. Subsequently, while that sound source is speaking, the terminal may collect its voice information with the audio collection units, obtain the volume value of each piece of voice information, determine the target position information from the magnitudes of the volume values and the identifiers, select the corresponding translation strategy from the translation strategy set as the target translation strategy, translate the voice information with it to obtain translation information, and output the translation information. The user thus only needs to input a translation strategy once for all subsequent translation, without repeated input: operation is simple, interruption of the dialogue is avoided as far as possible, communication is fluent, and dialogue efficiency is high.
Third embodiment
On the basis of the methods of the first and second embodiments, this embodiment gives a further description from the perspective of the voice information processing device. Referring to Fig. 3a, Fig. 3a describes the voice information processing device provided by the third embodiment of the present invention, which may include: an acquisition module 10, a determining module 20, a translation module 30 and an output module 40, wherein:
(1) Acquisition module 10
The acquisition module 10 is configured to acquire the voice information of a sound source and target position information.
In this embodiment, the sound source may be a person or any sound-producing body, for example a voice playback device in a video call. The voice information may include voice content, volume, timbre and other information. The target position information may refer to the position of the sound source relative to the terminal (or to built-in components of the terminal), and is mainly used to distinguish speakers at different positions. The acquisition module 10 may obtain the voice information through a sound collection device, and may obtain the target position information from the collected voice information or from some detection means, for example by sensing it with an infrared device built into the terminal.
For example, referring to Fig. 3b, the acquisition module 10 may specifically include a first collection submodule 11 and a first determination submodule 12, wherein:
The first collection submodule 11 is configured to use multiple audio collection units to respectively collect the sound emitted by the sound source, obtaining multiple pieces of voice information with the same speech content.
In this embodiment, each audio collection unit may be a microphone, and the multiple audio collection units may form a microphone array in which each unit has a different installation position in the terminal. The number of audio collection units may be set according to actual requirements, for example 2 or 3.
The first determination submodule 12 is configured to determine the target position information according to the voice information and the audio collection units.
For example, the first determination submodule 12 may specifically be configured to:
obtain the volume value of each piece of voice information and the identifier of each audio collection unit; and
determine the target position information according to the magnitudes of the volume values and the identifiers.
In this embodiment, the first determination submodule 12 may digitize the voice information (for example by a Fourier transform) to obtain the volume values. The identifiers are mainly used to distinguish the different audio collection units, and may be assigned according to the installation positions of the units in the terminal; for example, the identifiers may be set from left to right as M1, M2 and so on up to Mn. The target position information mainly refers to the position of the sound source relative to the audio collection units, and may take many forms: it may be expressed as an azimuth such as "left", "middle" or "right", or as an identifier such as M1, M2 or Mn, where each identifier represents a position. To improve precision, it may also be expressed as an ordered identification set such as M1M2M3 or M1M3M2.
It should be noted that because each audio collection unit has a different installation position in the terminal, and the volume collected by a unit is larger the closer it is to the sound source, the pieces of voice information collected by the units from the same sound source have identical content and timbre but different volumes. Therefore, knowing only the volume value of the voice collected by each audio collection unit is enough to determine the target position information of the sound source; that is, the position of the sound source relative to the audio collection units can be determined from the volume values.
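To make the volume-based reasoning above concrete, the following minimal Python sketch computes a volume value for one digitized voice segment as its root-mean-square amplitude. (The embodiment only states that the signal is digitized, for example via a Fourier transform, to obtain a volume value; RMS energy is one plausible reading, used here purely for illustration.)

```python
import math

def volume_value(samples):
    """Root-mean-square amplitude of one digitized voice segment,
    used here as the 'volume value' of that audio collection unit."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# The unit nearer the sound source records larger amplitudes, so the
# same utterance yields a larger volume value on the nearer unit.
near = [0.8, -0.7, 0.9, -0.8]   # samples at the nearer microphone
far = [0.2, -0.1, 0.3, -0.2]    # same sound at the farther microphone
assert volume_value(near) > volume_value(far)
```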
For example, the first determination submodule 12 may specifically be configured to:
obtain the identifier of the audio collection unit corresponding to the maximum volume value, or sort the identifiers of the audio collection units by the magnitudes of the volume values to obtain a sorted identification set; and
determine the obtained identifier, or the sorted identification set, as the target position information.
In this embodiment, the first determination submodule 12 may sort the identifiers by volume value in descending or ascending order to obtain the sorted identification set. It is easy to understand that since the target position information can be expressed in various forms, it can also be determined in various ways. For example, when the target position information is expressed as an identifier or an ordered identification set, the first determination submodule 12 may directly use the obtained identifier or the sorted identification set as the target position information. When the target position information is expressed as an azimuth, the first determination submodule 12 further needs to look up the azimuth corresponding to the obtained identifier or sorted identification set in a preset azimuth information library and use it as the target position information. The azimuth information library stores the associations between identifiers (or identification sets) and azimuths, and may be preset by the manufacturer before the terminal leaves the factory, for example: M1 or M1M2 corresponds to the azimuth "left", and M2 or M2M1 corresponds to the azimuth "right".
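The identifier sorting and azimuth-library lookup described above can be sketched as follows; the identifiers M1 and M2 and the library entries mirror the factory-preset example in the text, and are otherwise illustrative assumptions.

```python
def identification_set(volume_by_unit):
    """Sort audio-collection-unit identifiers by descending volume
    value to form an ordered identification set such as 'M1M2'."""
    ranked = sorted(volume_by_unit, key=volume_by_unit.get, reverse=True)
    return "".join(ranked)

# Factory-preset azimuth information library: identification set -> azimuth.
AZIMUTH_LIBRARY = {"M1M2": "left", "M2M1": "right"}

# The left speaker is loudest on M1, so the set is 'M1M2' -> "left".
ids = identification_set({"M1": 0.9, "M2": 0.4})
assert ids == "M1M2"
assert AZIMUTH_LIBRARY[ids] == "left"
```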
(2) determining module 20
The determining module 20 is configured to determine the target translation strategy according to the target position information.
In this embodiment, the target translation strategy may be set according to actual requirements, and generally includes the source language to be translated and the target language to be translated into. For example, if the target translation strategy is "translate Chinese into English", the source language is Chinese and the target language is English.
For example, the determining module 20 may specifically include a selection submodule 21 and a second determination submodule 22, wherein:
The selection submodule 21 is configured to select the corresponding translation strategy from an established translation strategy set according to the target position information.
The second determination submodule 22 is configured to determine the selected translation strategy as the target translation strategy.
In this embodiment, the translation strategies in the translation strategy set may be set according to actual requirements, and may include "translate Chinese into English", "translate Japanese into English", "translate English into Chinese" and so on. In practical applications, the associations between the translation strategies in the set and position information need to be established in advance. The position information in these associations may be obtained through a device built into the terminal, such as a camera, or determined from the voice information of the user collected by multiple audio collection devices.
When the position information in the associations is determined from the voice information of the user collected by multiple audio collection devices, the voice-information processing apparatus may further include an establishing module 50, and the establishing module 50 may include a second collection submodule 51, an acquisition submodule 52 and an establishing submodule 53, wherein:
The second collection submodule 51 is configured to collect the first voice information of the sound source using the audio collection units before the acquisition module obtains the voice information and target position information of the sound source.
The acquisition submodule 52 is configured to obtain the current translation strategy input by the user.
The establishing submodule 53 is configured to establish the translation strategy set according to the first voice information and the current translation strategy.
In this embodiment, the first voice information may be the speech collected for the first time after the terminal turns on the voice translation function; to ensure the accuracy of subsequent position-information detection, the first voice information may consist of multiple voice segments, or of a voice segment of a specified duration.
For example, the establishing submodule 53 may specifically include:
an acquiring unit, configured to obtain the volume values of the first voice information;
a determining unit, configured to determine the current position information according to the volume values of the first voice information and the identifiers of the audio collection units; and
a first establishing unit, configured to establish the translation strategy set according to the current position information and the current translation strategy.
In this embodiment, the current position information may be determined in various ways. If the current position information is expressed as an identifier or an identification set, the determining unit may obtain the identifier of the audio collection unit corresponding to the maximum volume value, or sort the identifiers by volume value in descending or ascending order to obtain a sorted identification set; the obtained identifier or sorted identification set is then the current position information, which needs to be stored in the translation strategy set. If the current position information is expressed as an azimuth, the determining unit further determines it from the obtained identifier or sorted identification set, for example by matching the identifier or identification set against the azimuth information library; the matched azimuth, such as "left" or "right", is the current position information.
In addition, the current position information may also be input manually by the user: for example, the terminal may display a position-information selection box offering several positions such as "left", "middle" and "right" for the user to choose from. Alternatively, the current position information may be detected by the terminal itself through a built-in device.
For example, the first establishing unit may specifically be configured to:
establish the association between the current position information and the current translation strategy, and store the association in the translation strategy set.
In that case, the selection submodule 21 may specifically be configured to:
select the translation strategy corresponding to the target position information from the established translation strategy set according to the associations.
In this embodiment, if two (or more) parties are in a dialogue, the first establishing unit records each party's speech, position and translation strategy when that party speaks for the first time. Afterwards, provided the parties' positions do not change, whenever a party speaks the first determination submodule 12 can determine that user's target position information from the collected voice information, and the selection submodule 21 looks up the corresponding translation strategy according to the target position information to carry out the translation. No manual selection by the user is needed: the operation is simple, the probability of the dialogue being interrupted is minimized, and the fluency of communication is improved.
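The establish-then-look-up behaviour of the first establishing unit and the selection submodule 21 can be sketched as a small association table; the position keys and the (source, target) language pairs below are illustrative assumptions.

```python
class TranslationStrategySet:
    """Minimal sketch of the translation strategy set: associations
    between position information and a (source, target) language
    pair, established when each party first speaks and then looked
    up on later utterances from the same position."""
    def __init__(self):
        self._associations = {}

    def establish(self, position_info, strategy):
        # first establishing unit: store the association
        self._associations[position_info] = strategy

    def select(self, target_position_info):
        # selection submodule 21: return the associated strategy,
        # or None if this position has not been registered yet
        return self._associations.get(target_position_info)

strategies = TranslationStrategySet()
strategies.establish("left", ("zh", "en"))   # left party: Chinese -> English
strategies.establish("right", ("en", "zh"))  # right party: English -> Chinese
assert strategies.select("left") == ("zh", "en")
assert strategies.select("middle") is None
```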
(3) translation module 30
The translation module 30 is configured to translate the voice information using the target translation strategy, obtaining the translation information.
In this embodiment, the translation module 30 may first perform semantic analysis on the voice information in the source language to be translated, and then express the analyzed meaning in the target language, obtaining the translation information.
(4) output module 40
The output module 40 is configured to output the translation information.
In this embodiment, the output module 40 may play the translated content as speech through a device such as a loudspeaker, so that the user can hear it. It should be pointed out that during playback, the multiple audio collection units may suspend their voice-collection operations.
In specific implementations, the above units may be realized as independent entities, or combined in any manner and realized as one or several entities. For the specific implementation of the above units, reference may be made to the method embodiments above, which will not be repeated here.
As can be seen from the above, in the voice-information processing apparatus provided by this embodiment, the acquisition module 10 obtains the voice information and target position information of the sound source, the determining module 20 determines the target translation strategy according to the target position information, the translation module 30 then translates the voice information using the target translation strategy to obtain the translation information, and the output module 40 outputs the translation information. Compared with the prior art, in which the user needs to select the translation mode manually and repeatedly, translation can be achieved without manual operation; the operation is simple, conversation efficiency is high, and communication is fluent.
Fourth embodiment
Accordingly, an embodiment of the present invention further provides a voice-information processing system, including any of the voice-information processing apparatuses provided by the embodiments of the present invention; for the processing apparatus, reference may be made to the third embodiment.
The voice-information processing apparatus may be integrated in a terminal, for example as follows:
the terminal is configured to obtain the voice information and target position information of a sound source, determine the target translation strategy according to the target position information, then translate the voice information using the target translation strategy to obtain the translation information, and output the translation information.
For the specific implementation of each of the above devices, reference may be made to the embodiments above, which will not be repeated here.
Since the voice-information processing system may include any of the voice-information processing apparatuses provided by the embodiments of the present invention, it can achieve the beneficial effects achievable by any of those apparatuses; refer to the embodiments above, which will not be repeated here.
5th embodiment
Accordingly, an embodiment of the present invention further provides a terminal. As shown in Fig. 4, the terminal may include a radio frequency (RF) circuit 601, a memory 602 including one or more computer-readable storage media, an input unit 603, a display unit 604, a sensor 605, an audio circuit 606, a Wireless Fidelity (WiFi) module 607, a processor 608 including one or more processing cores, and a power supply 609, among other components. Those skilled in the art will appreciate that the terminal structure shown in Fig. 4 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Wherein:
The RF circuit 601 may be used to receive and send signals during the receiving and sending of information or during a call; in particular, after receiving downlink information from a base station, it delivers the information to one or more processors 608 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 601 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer and the like. In addition, the RF circuit 601 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS) and the like.
The memory 602 may be used to store software programs and modules; the processor 608 executes various functional applications and data processing by running the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the terminal (such as audio data, a phone book, etc.). In addition, the memory 602 may include a high-speed random access memory, and may further include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 602 may further include a memory controller to provide the processor 608 and the input unit 603 with access to the memory 602.
The input unit 603 may be used to receive input numeric or character information, and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 603 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or touch pad, can collect touch operations by the user on or near it (such as operations performed on or near the touch-sensitive surface with a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 608, and can receive and execute commands sent by the processor 608. Moreover, the touch-sensitive surface may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 603 may also include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, a joystick and the like.
The display unit 604 may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video and any combination thereof. The display unit 604 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 608 to determine the type of the touch event, and the processor 608 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 4 the touch-sensitive surface and the display panel realize the input and output functions as two independent components, in some embodiments the touch-sensitive surface may be integrated with the display panel to realize the input and output functions.
The terminal may also include at least one sensor 605, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally on three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer, tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor may also be configured in the terminal, which will not be described here.
The audio circuit 606, a loudspeaker and a microphone may provide an audio interface between the user and the terminal. The audio circuit 606 can convert received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 606 and converted into audio data. After the audio data is processed by the processor 608, it is sent through the RF circuit 601 to, for example, another terminal, or output to the memory 602 for further processing. The audio circuit 606 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 607, the terminal can help the user send and receive e-mail, browse web pages, access streaming media and the like, providing the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 607, it can be understood that it is not an essential component of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 608 is the control center of the terminal; it connects all parts of the whole mobile phone through various interfaces and lines, and performs the various functions and data processing of the terminal by running or executing the software programs and/or modules stored in the memory 602 and calling the data stored in the memory 602, thereby monitoring the mobile phone as a whole. Optionally, the processor 608 may include one or more processing cores; preferably, the processor 608 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 608.
The terminal also includes a power supply 609 (such as a battery) that supplies power to all the components. Preferably, the power supply may be logically connected with the processor 608 through a power management system, so that functions such as charging management, discharging management and power consumption management are realized through the power management system. The power supply 609 may also include any component such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may also include a camera, a Bluetooth module and the like, which will not be described here. Specifically, in this embodiment, the processor 608 in the terminal loads, according to the following instructions, the executable files corresponding to the processes of one or more application programs into the memory 602, and the processor 608 runs the application programs stored in the memory 602 to realize the following functions:
obtain the voice information of a sound source and target position information;
determine the target translation strategy according to the target position information;
translate the voice information using the target translation strategy to obtain the translation information;
output the translation information.
For the specific implementation of each of the above operations, reference may be made to the embodiments above, which will not be repeated here.
The terminal can achieve the beneficial effects achievable by any of the voice-information processing apparatuses provided by the embodiments of the present invention; refer to the embodiments above, which will not be repeated here.
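As a hedged end-to-end sketch, the four functions realized by the processor above can be chained as follows; the `recognize` (speech-to-text) and `translate` engines are assumed external and are mocked in the usage example, and the mic identifiers and language pairs are illustrative assumptions.

```python
def process_voice(samples_by_mic, strategy_set, recognize, translate):
    """Sketch of the four steps: acquire voice and position,
    determine the strategy, translate, output. Position is taken
    from the loudest unit, as in the volume-based scheme above."""
    # 1. obtain voice information and target position information
    volumes = {mic: max(abs(s) for s in seg)
               for mic, seg in samples_by_mic.items()}
    position = max(volumes, key=volumes.get)
    text = recognize(samples_by_mic[position])
    # 2. determine the target translation strategy
    src, dst = strategy_set[position]
    # 3. translate the voice information using the strategy
    translation = translate(text, src, dst)
    # 4. output the translation information
    return translation

# Usage with mocked engines (illustrative only):
result = process_voice(
    {"M1": [0.9, -0.8], "M2": [0.1, -0.2]},
    {"M1": ("zh", "en"), "M2": ("en", "zh")},
    recognize=lambda seg: "ni hao",
    translate=lambda text, src, dst: f"[{src}->{dst}] {text}",
)
assert result == "[zh->en] ni hao"
```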
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium, which may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or the like.
The voice-information processing method, apparatus and system provided by the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (14)
1. A method for processing voice information, characterized by comprising:
obtaining the voice information of a sound source and target position information;
determining a target translation strategy according to the target position information;
translating the voice information using the target translation strategy to obtain translation information; and
outputting the translation information.
2. The method for processing voice information according to claim 1, characterized in that the obtaining the voice information of a sound source and target position information comprises:
using multiple audio collection units to respectively collect the sound emitted by the sound source, obtaining multiple pieces of voice information with the same voice content; and
determining the target position information according to the voice information and the audio collection units.
3. The method for processing voice information according to claim 2, characterized in that the determining the target position information according to the voice information and the audio collection units comprises:
obtaining the volume value of each piece of voice information and the identifier of each audio collection unit; and
determining the target position information according to the magnitudes of the volume values and the identifiers.
4. The method for processing voice information according to claim 1, characterized in that the determining a target translation strategy according to the target position information comprises:
selecting the corresponding translation strategy from an established translation strategy set according to the target position information; and
determining the selected translation strategy as the target translation strategy.
5. The method for processing voice information according to claim 4, characterized by further comprising, before obtaining the target position information of the sound source:
collecting the first voice information of the sound source using the audio collection units;
obtaining the current translation strategy input by the user; and
establishing the translation strategy set according to the first voice information and the current translation strategy.
6. The method for processing voice information according to claim 5, characterized in that the establishing the translation strategy set according to the first voice information and the current translation strategy comprises:
obtaining the volume values of the first voice information;
determining current position information according to the volume values of the first voice information and the identifiers of the audio collection units; and
establishing the translation strategy set according to the current position information and the current translation strategy.
7. The method for processing voice information according to claim 6, characterized in that:
the establishing the translation strategy set according to the current position information and the current translation strategy comprises: establishing an association between the current position information and the current translation strategy, and storing the association in the translation strategy set; and
the selecting the corresponding translation strategy from an established translation strategy set according to the target position information comprises: selecting the translation strategy corresponding to the target position information from the established translation strategy set according to the association.
8. An apparatus for processing voice information, characterized by comprising:
an acquisition module, configured to obtain the voice information of a sound source and target position information;
a determining module, configured to determine a target translation strategy according to the target position information;
a translation module, configured to translate the voice information using the target translation strategy to obtain translation information; and
an output module, configured to output the translation information.
9. The apparatus for processing voice information according to claim 8, characterized in that the acquisition module specifically comprises:
a first collection submodule, configured to use multiple audio collection units to respectively collect the sound emitted by the sound source, obtaining multiple pieces of voice information with the same voice content; and
a first determination submodule, configured to determine the target position information according to the voice information and the audio collection units.
10. The apparatus for processing voice information according to claim 9, characterized in that the first determination submodule is specifically configured to:
obtain the volume value of each piece of voice information and the identifier of each audio collection unit; and
determine the target position information according to the magnitudes of the volume values and the identifiers.
11. The apparatus for processing voice information according to claim 8, characterized in that the determining module specifically comprises:
a selection submodule, configured to select the corresponding translation strategy from an established translation strategy set according to the target position information; and
a second determination submodule, configured to determine the selected translation strategy as the target translation strategy.
12. The apparatus for processing voice information according to claim 11, characterized by further comprising an establishing module, the establishing module comprising:
a second collection submodule, configured to collect the first voice information of the sound source using the audio collection units before the acquisition module obtains the voice information and target position information of the sound source;
an acquisition submodule, configured to obtain the current translation strategy input by the user; and
an establishing submodule, configured to establish the translation strategy set according to the first voice information and the current translation strategy.
13. The apparatus for processing voice information according to claim 12, characterized in that the establishing submodule specifically comprises:
an acquiring unit, configured to obtain the volume values of the first voice information;
a determining unit, configured to determine current position information according to the volume values of the first voice information and the identifiers of the audio collection units; and
a first establishing unit, configured to establish the translation strategy set according to the current position information and the current translation strategy.
14. The apparatus for processing voice information according to claim 13, characterized in that:
the first establishing unit is configured to establish an association between the current position information and the current translation strategy, and store the association in the translation strategy set; and
the selection submodule is configured to select the translation strategy corresponding to the target position information from the established translation strategy set according to the association.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610912091.5A CN106528545B (en) | 2016-10-19 | 2016-10-19 | Voice information processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610912091.5A CN106528545B (en) | 2016-10-19 | 2016-10-19 | Voice information processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106528545A true CN106528545A (en) | 2017-03-22 |
CN106528545B CN106528545B (en) | 2020-03-17 |
Family
ID=58332748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610912091.5A Active CN106528545B (en) | 2016-10-19 | 2016-10-19 | Voice information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106528545B (en) |
- 2016-10-19: CN application CN201610912091.5A granted as patent CN106528545B (status: Active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101957814A (en) * | 2009-07-16 | 2011-01-26 | 刘越 | Instant speech translation system and method |
CN105117391A (en) * | 2010-08-05 | 2015-12-02 | 谷歌公司 | Translating languages |
CN103119642A (en) * | 2010-09-28 | 2013-05-22 | 雅马哈株式会社 | Audio output device and audio output method |
CN102811284A (en) * | 2012-06-26 | 2012-12-05 | 深圳市金立通信设备有限公司 | Method for automatically translating voice input into target language |
CN103559180A (en) * | 2013-10-12 | 2014-02-05 | 安波 | Chat translator |
CN105794231A (en) * | 2013-11-22 | 2016-07-20 | 苹果公司 | Handsfree beam pattern configuration |
CN104394265A (en) * | 2014-10-31 | 2015-03-04 | 小米科技有限责任公司 | Automatic session method and device based on mobile intelligent terminal |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2019075787A1 (en) * | 2017-10-17 | 2019-04-25 | 深圳市沃特沃德股份有限公司 | Translation box and translation system
WO2019104667A1 (en) * | 2017-11-30 | 2019-06-06 | 深圳市沃特沃德股份有限公司 | Method for operating translation machine and finger-ring remote controller
CN108509430A (en) * | 2018-04-10 | 2018-09-07 | 京东方科技集团股份有限公司 | Smart glasses and translation method thereof
CN109286875B (en) * | 2018-09-29 | 2021-01-01 | 百度在线网络技术(北京)有限公司 | Method, apparatus, electronic device and storage medium for directional sound pickup
CN109286875A (en) * | 2018-09-29 | 2019-01-29 | 百度在线网络技术(北京)有限公司 | Method, apparatus, electronic device and storage medium for directional sound pickup
CN110188363A (en) * | 2019-04-26 | 2019-08-30 | 北京搜狗科技发展有限公司 | Information switching method, device and translation device
US11755849B2 (en) | 2019-04-26 | 2023-09-12 | Beijing Sogou Technology Development Co., Ltd. | Information switching method, apparatus and translation device
CN110457716A (en) * | 2019-07-22 | 2019-11-15 | 维沃移动通信有限公司 | Voice output method and mobile terminal
CN110457716B (en) * | 2019-07-22 | 2023-06-06 | 维沃移动通信有限公司 | Voice output method and mobile terminal
CN111312212A (en) * | 2020-02-25 | 2020-06-19 | 北京搜狗科技发展有限公司 | Voice processing method, device and medium
CN111428521A (en) * | 2020-03-23 | 2020-07-17 | 合肥联宝信息技术有限公司 | Data processing method and electronic equipment
CN111428521B (en) * | 2020-03-23 | 2022-03-15 | 合肥联宝信息技术有限公司 | Data processing method and electronic equipment
CN111795707A (en) * | 2020-07-21 | 2020-10-20 | 高超群 | New energy vehicle charging pile route planning method
Also Published As
Publication number | Publication date |
---|---|
CN106528545B (en) | 2020-03-17 |
Similar Documents
Publication | Publication Date | Title
---|---|---
CN106528545A (en) | | Voice message processing method and device
CN105788612B (en) | | Method and apparatus for detecting sound quality
CN103578474B (en) | | Voice control method, device and equipment
CN107247711B (en) | | Bidirectional translation method, mobile terminal and computer-readable storage medium
CN108268835A (en) | | Sign language translation method, mobile terminal and computer-readable storage medium
CN106331359B (en) | | Speech signal collection method, device and terminal
CN104035948A (en) | | Geographic position display method and device
CN104699973A (en) | | Method and device for controlling logic of questionnaires
CN104516893A (en) | | Information storage method, information storage device and communication terminal
CN105208056A (en) | | Information exchange method and terminal
CN111563151B (en) | | Information acquisition method, session configuration method, device and storage medium
CN108492836A (en) | | Voice-based search method, mobile terminal and storage medium
CN106921791A (en) | | Multimedia file storage and retrieval method, device and mobile terminal
CN107370670A (en) | | Unread message extraction and display method and device
CN108897846B (en) | | Information search method, apparatus and computer-readable storage medium
CN109243488A (en) | | Audio detection method, device and storage medium
CN106973168A (en) | | Speech playback method, device and computer equipment
CN106534528A (en) | | Text information processing method, device and mobile terminal
CN107885718A (en) | | Semantics determination method and device
CN104750722B (en) | | Information acquisition and display method and device
CN108549681A (en) | | Data processing method and device, electronic equipment, computer-readable storage medium
CN106375182B (en) | | Voice communication method and device based on instant messaging application
CN107622137A (en) | | Method and apparatus for searching voice messages
CN106486119A (en) | | Method and apparatus for recognizing voice information
CN104731806A (en) | | Method and terminal for quickly finding user information in social network
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||