CN110399461A - Data processing method, device, server and storage medium - Google Patents
- Publication number
- CN110399461A CN110399461A CN201910659254.7A CN201910659254A CN110399461A CN 110399461 A CN110399461 A CN 110399461A CN 201910659254 A CN201910659254 A CN 201910659254A CN 110399461 A CN110399461 A CN 110399461A
- Authority
- CN
- China
- Prior art keywords
- target
- character
- dialogue
- character string
- teller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Abstract
The embodiment of the invention discloses a data processing method, device, server and storage medium. The method includes: obtaining the character data contained in a target document; performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string that consists of multiple consecutive characters; determining the identity of the target speaker corresponding to the target dialogue string; determining, from the character data according to the identity of the target speaker, a feature description character set that describes the target speaker's characteristics; and performing feature analysis on the feature description character set to determine the acoustic information of the target speaker. With the embodiment of the invention, the acoustic information of the speaker corresponding to each dialogue portion of a document can be identified automatically.
Description
Technical field
This application relates to the field of Internet technologies, and in particular to a data processing method, device, server and storage medium.
Background
The continuous development of science and technology has brought many conveniences to people's lives. For example, reading novels, web pages and other documents aloud by voice reduces the time a user spends staring at a terminal screen while still conveying the information in the document, thereby protecting the user's eyes. In addition, a user can select different reading voice types according to personal preference, which makes reading documents more interesting.
During voice reading, the entire document is usually read aloud with the single reading voice type selected by the user. Because one voice reads out the whole document, the voice reading function is rather limited. If the document contains multiple characters, the user cannot perceive, through such reading, information such as the characters' genders or the relationships between them. It can be seen that, in the field of voice reading, using a reading voice type that matches each character's acoustic information would make voice reading richer. Therefore, how to determine the acoustic information of the different characters in a document has become a research hotspot.
Summary of the invention
The embodiment of the invention provides a data processing method, device, server and storage medium, which can automatically identify the acoustic information of each speaker in a document.
In one aspect, the embodiment of the invention provides a data processing method, comprising:
obtaining the character data contained in a target document;
performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string that consists of multiple consecutive dialogue characters;
obtaining the name of the target speaker corresponding to the target dialogue string;
determining, from the character data, a feature description character set that describes the target speaker's characteristics; and
performing feature analysis on the feature description character set to determine the acoustic information of the target speaker.
In another aspect, the embodiment of the invention provides a data processing device, comprising:
an obtaining unit, configured to obtain the character data contained in a target document;
a processing unit, configured to perform structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string that consists of multiple consecutive dialogue characters;
the obtaining unit being further configured to obtain the name of the target speaker corresponding to the target dialogue string;
the processing unit being further configured to determine, from the character data according to the name of the target speaker, a feature description character set that describes the target speaker's characteristics; and
the processing unit being further configured to perform feature analysis on the feature description character set to determine the acoustic information of the target speaker.
In yet another aspect, the embodiment of the invention provides a server, the server comprising:
a processor, adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions, the one or more instructions being adapted to be loaded by the processor to execute the following steps:
obtaining the character data contained in a target document;
performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string that consists of multiple consecutive dialogue characters;
obtaining the name of the target speaker corresponding to the target dialogue string;
determining, from the character data, a feature description character set that describes the target speaker's characteristics; and
performing feature analysis on the feature description character set to determine the acoustic information of the target speaker.
In the embodiment of the invention, structure recognition processing is performed on the character data contained in the obtained target document to determine the dialogue string set in the character data. Further, for a target dialogue string in the dialogue string set, the identity of the corresponding target speaker is recognized, and the character data is then searched for the feature description character set that describes the target speaker's characteristics. Feature analysis is performed on the feature description character set to determine the acoustic information of the target speaker, thereby automatically identifying the acoustic information of the speaker corresponding to each dialogue string in the target document.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1a is a structural schematic diagram of a data processing scheme provided in an embodiment of the present invention;
Fig. 1b is a module architecture diagram provided in an embodiment of the present invention;
Fig. 1c is a working sequence diagram of data processing provided in an embodiment of the present invention;
Fig. 2 is a flow diagram of a data processing method provided in an embodiment of the present invention;
Fig. 3 is a flow diagram of another data processing method provided in an embodiment of the present invention;
Fig. 4a is a schematic diagram of a user interface of a terminal provided in an embodiment of the present invention;
Fig. 4b is a schematic diagram of another user interface provided in an embodiment of the present invention;
Fig. 4c is a user interface diagram of a document to be read provided in an embodiment of the present invention;
Fig. 4d is a reading interface diagram of a document to be read provided in an embodiment of the present invention;
Fig. 5a is a schematic diagram of a target document provided in an embodiment of the present invention;
Fig. 5b is a schematic diagram of an annotation result corresponding to a target document provided in an embodiment of the present invention;
Fig. 5c is a schematic diagram of another annotation result corresponding to a target document provided in an embodiment of the present invention;
Fig. 5d is a schematic diagram of setting a reading voice type for a first speaker provided in an embodiment of the present invention;
Fig. 5e is a schematic diagram of an annotation modification area provided in an embodiment of the present invention;
Fig. 5f is a schematic diagram of an annotation modification interface provided in an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of a data processing device provided in an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of a server provided in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention.
The embodiment of the invention provides a data processing scheme that can be applied to application scenarios such as voice reading and audio novels. The data processing scheme may include: obtaining the character data contained in a target document; performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string that consists of multiple consecutive characters; obtaining the identity of the target speaker corresponding to the target dialogue string; determining, from the character data according to the name of the target speaker, a feature description character set that describes the target speaker's characteristics; and performing feature analysis on the feature description character set to determine the acoustic information of the target speaker. With the data processing scheme provided by the embodiment of the invention, the acoustic information of the speaker corresponding to each dialogue string contained in the target document can be identified automatically. In voice reading or audio novel scenarios, a reading voice type consistent with each identified speaker's acoustic information can then be selected for that speaker, which makes voice reading and audio novels richer.
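The four-step scheme above can be sketched as a minimal toy pipeline. This is an illustration only: all function names are hypothetical, the regular expressions are far simpler than the multiple patterns the embodiment describes, and the "feature analysis" is reduced to a keyword lookup rather than the model-based analysis of the invention.

```python
import re

def extract_dialogues(text):
    # Step 1: structure recognition -- find quoted dialogue strings.
    return re.findall(r'"([^"]+)"', text)

def identify_speaker(text, dialogue):
    # Step 2: look just before the quote for a pattern like 'Name said:'.
    m = re.search(r'(\w+) said:\s*"' + re.escape(dialogue) + '"', text)
    return m.group(1) if m else None

def collect_features(text, speaker):
    # Step 3: gather narration sentences that mention the speaker
    # (the feature description character set).
    return [s for s in re.split(r'(?<=[.!?])\s+', text)
            if speaker in s and '"' not in s]

def analyze_acoustics(features):
    # Step 4: toy feature analysis -- keyword cues for gender/age.
    cues = {"girl": ("female", "young"), "old man": ("male", "old")}
    for sentence in features:
        for kw, (gender, age) in cues.items():
            if kw in sentence.lower():
                return {"gender": gender, "age": age}
    return {"gender": "unknown", "age": "unknown"}

doc = ('Mary is a little girl. Mary said: "I love this garden." '
       'Mary walked away.')
for d in extract_dialogues(doc):
    sp = identify_speaker(doc, d)
    print(d, sp, analyze_acoustics(collect_features(doc, sp)))
```

In the actual embodiment, steps 2 to 4 would be performed by the natural language processing model and feature analysis described later, not by hand-written rules.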
Based on the above data processing scheme, the embodiment of the invention provides a structural schematic diagram of a data processing scheme, as shown in Fig. 1a. The architecture shown in Fig. 1a may include a terminal 101 and a server 102. The terminal 101 can be a mobile phone, tablet computer, laptop computer, wearable device, etc.; the server 102 can be a cloud server or a local server. The terminal 101 is connected to the server 102, and the two can exchange information.
In one embodiment, the terminal 101 initiates a data processing request about a target document to the server 102. In response to the data processing request sent by the terminal 101, the server 102 executes the above data processing scheme to perform data processing on the target document, obtains the acoustic information of the speaker corresponding to each dialogue string in the target document, and then sends this acoustic information to the terminal 101.
The terminal 101 receives the acoustic information of the speaker corresponding to each dialogue string in the target document, sent by the server 102 in response to the data processing request. Further, the terminal 101 annotates the target document according to the received acoustic information of the speaker corresponding to each dialogue string to obtain an annotation result, and displays the annotation result in the user interface.
In other embodiments, after obtaining the acoustic information of the speaker corresponding to each dialogue string, the server 102 can annotate the target document according to that acoustic information to obtain the annotation result and send the annotation result to the terminal 101. In this way, the terminal 101 can directly receive the annotation result and display it in the user interface, which eliminates the step of the terminal 101 annotating the target document and saves terminal power consumption.
In one embodiment, the server 102 may include a data processing cluster 1021 and a recognition computing cluster 1022. When the server 102 executes the above data processing scheme, the data processing cluster 1021 can be used to perform data preprocessing on the target document, obtain the dialogue string set contained in the target document, and send the dialogue string set to the recognition computing cluster 1022; the recognition computing cluster 1022 performs speaker acoustic-information recognition processing on the dialogue string set to determine the acoustic information of the speaker corresponding to each dialogue string.
In other embodiments, the data processing architecture shown in Fig. 1a may also include a third-party cloud service 103. The third-party cloud service 103 can provide richer recognition computing capabilities and data processing capabilities for the server 102, improving the speed of data processing on the target document.
Based on the data processing architecture of Fig. 1a, the embodiment of the invention provides a module architecture diagram, as shown in Fig. 1b. Fig. 1b is the data processing module architecture diagram corresponding to the data processing architecture of Fig. 1a. The module architecture shown in Fig. 1b may include a presentation layer 104, a logic layer 105 and a service layer 106, where the presentation layer 104 and the logic layer 105 are located in the terminal 101, and the service layer 106 is located in the server 102.
In one embodiment, the presentation layer 104 is mainly used to display the target document and its annotation result, and to handle the interaction between the terminal and the user; it is also responsible for passing the user's specified operations to the logic layer 105 and for displaying the response results of those operations. Optionally, the presentation layer 104 may include a target document display module 1041 and an annotation display module 1042, where the target document display module 1041 is used to display the target document, and the annotation display module 1042 is used to display the annotation result of the target document.
In one embodiment, the logic layer 105 handles certain function logics of the terminal 101, such as non-display logic, network request logic, data persistence, voice reading logic and voice playing logic. For example, if the user clicks voice reading, the network request logic, data persistence logic, voice reading logic and so on can be triggered. Specifically, if the user clicks voice reading, it may be necessary to send a data processing request to the server over the network to obtain the annotation result, to obtain the audio data produced by performing speech synthesis on the target document according to the annotation result, and then to play the audio data. The above process may also involve caching the audio data or the annotation result in a database.
In one embodiment, the logic layer 105 may include a local management module 1051 and a local service module 1052. The local management module 1051 is mainly responsible for the network request logic and database management; the local service module 1052 is mainly responsible for the voice reading logic and the voice playing logic.
In one embodiment, the service layer 106 may include a network service interface 1061, a recognition computing module 1062 and a data processing module 1063. The data processing module 1063 is used to preprocess the character data contained in the target document to obtain the dialogue string set, and then send the dialogue string set to the recognition computing module 1062; the recognition computing module 1062 performs recognition computation on the dialogue string set to determine the acoustic information of the speaker corresponding to each dialogue string in the dialogue string set. The network service interface 1061 provides the data interaction with the terminal: the recognition computing module 1062 sends the acoustic information of the speaker corresponding to each dialogue string to the terminal 101 through the network service interface 1061.
Based on the embodiments of Fig. 1a and Fig. 1b, the embodiment of the invention also provides a working sequence diagram of data processing, as shown in Fig. 1c. In Fig. 1c, assume that the terminal displays the target document to the user in a user interface through the presentation layer. When receiving the user's voice reading operation for the target document, the terminal sends a data processing request to the server through the network layer. The server performs data processing on the target document using the above data processing scheme, obtains the acoustic information of the speaker corresponding to each dialogue string in the target document, annotates the target document according to the acoustic information of the speakers to obtain the annotation result, and sends the annotation result to the terminal, so that the terminal displays the annotation result in the user interface.
When the terminal detects that the user inputs an operation to start reading aloud, it sends a voice reading request to the server to request the server to perform speech synthesis on the target document. The server performs speech synthesis on each dialogue string contained in the target document to obtain the audio data corresponding to the target document, and sends the audio data to the terminal, which plays it.
In other embodiments, after detecting the operation of starting voice reading input by the user, the terminal can also call a speech synthesis application to perform speech synthesis processing on the target document to obtain the audio data of the target document, and play that audio data.
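The per-speaker synthesis step above can be illustrated with a small sketch that maps each speaker's acoustic information to a reading voice type before handing segments to a TTS engine. All names here are hypothetical (the voice catalogue, the profile keys and the fallback narrator voice are not specified by the embodiment); the sketch only shows the matching idea, not a real synthesis API.

```python
# Hypothetical voice catalogue: maps (gender, age) profiles to TTS voice ids.
VOICES = {
    ("female", "young"): "voice_f_young",
    ("male", "old"): "voice_m_old",
}
DEFAULT_VOICE = "voice_narrator"

def pick_voice(acoustic_info):
    """Choose the reading voice type that matches a speaker's acoustic
    information, falling back to a narrator voice."""
    key = (acoustic_info.get("gender"), acoustic_info.get("age"))
    return VOICES.get(key, DEFAULT_VOICE)

def plan_synthesis(segments):
    """`segments` is a list of (text, acoustic_info_or_None); narration
    segments carry None. Returns (voice_id, text) pairs that a real TTS
    engine would then turn into audio."""
    return [((pick_voice(info) if info else DEFAULT_VOICE), text)
            for text, info in segments]

plan = plan_synthesis([
    ("Mary is a little girl.", None),
    ("I love this garden.", {"gender": "female", "age": "young"}),
])
print(plan)
```

The point of the design is that the annotation result decided earlier (speaker and acoustic information per dialogue string) fully determines the voice plan, so synthesis itself needs no further document analysis.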
Based on the above description, the embodiment of the invention provides a flow diagram of a data processing method, as shown in Fig. 2. The data processing method described in Fig. 2 can be executed by a server, and specifically by the processor of the server. The data processing method shown in Fig. 2 may include the following steps.
S201, obtaining the character data contained in the target document.
The target document can be any document, such as a novel, a web page, a news article, etc.; the target document can also be a part of a document, for example chapter 5 of a novel, or pages 2 to 5 of a news article.
Optionally, the server can execute step S201 when detecting a trigger event for performing data processing on the target document. In one embodiment, the trigger event can be a data processing request, sent by the terminal, for performing data processing on the target document. Specifically, the terminal can display a reading interface for the target document to the user; the reading interface may include a reading mode selection option, and the reading modes may include a text reading mode, a voice reading mode, an annotated reading mode, etc. If the user's selection of the voice reading mode or the annotated reading mode is detected, the terminal generates a data processing request for performing data processing on the target document.
In other embodiments, the trigger event can also be that the server detects that the time for performing data processing on the target document has arrived. Specifically, the server can preset a time period and perform data processing on the documents stored in the server when each time period arrives; alternatively, it can be preset that the server performs data processing on its stored documents when idle. In this way, when the terminal needs a data processing result, the server can send it directly, which reduces the number of interactions between the terminal and the server, saves data processing time, and improves data processing efficiency.
In one embodiment, the text corresponding to multiple documents can be stored in the server. These documents are usually stored in encoded form and cannot be read directly by the user, so the stored text needs to be converted into characters that the user can read directly, where a character can be a letter, a digit, a Chinese character, a symbol, etc., for example 1, 2, 3, A, B, C, #, "you", "I". The character data obtained by the server in step S201 is composed of multiple characters.
S202, performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string, and the target dialogue string including multiple consecutive dialogue characters.
In one embodiment, the dialogue string set is the set composed of the dialogue strings contained in the character data, where the target dialogue string is any one dialogue string in the dialogue string set, and each dialogue string includes multiple consecutive dialogue characters. A dialogue character is a character corresponding to a dialogue scene in the target document, and a dialogue scene is a scene in which some speaker speaks. For example, Jia said: "I am xxxx" is a dialogue scene, and the characters corresponding to this dialogue scene are dialogue characters; similarly, "I never thought you were such a person", Jia said slowly, is also a dialogue scene, and the characters it contains are dialogue characters. A target document may include one or more dialogue scenes; the dialogue characters of each dialogue scene form a dialogue string, and all the dialogue strings constitute the dialogue string set.
In one embodiment, a regular expression can be used in step S202 to perform structure recognition on the character data and obtain the dialogue string set. It should be understood that a regular expression is a kind of logic for operating on characters: certain specific characters, and combinations of these specific characters, are defined in advance to form a rule string, and this rule string expresses a filtering logic over characters. Simply put, in the embodiment of the invention, a regular expression is a rule string formed from preset characters; all character strings in the character data that match the rule string are found, and the matched character strings compose the dialogue string set.
Specifically, step S202 can be implemented as follows: obtaining a regular expression for recognizing dialogue strings, the regular expression being determined according to the characteristic information of dialogue string samples; and matching the character data against the regular expression to obtain at least one dialogue string contained in the character data, the at least one dialogue string constituting the dialogue string set.
It should be understood that the form of dialogue strings often differs between documents, and even within the same document. In one embodiment, a dialogue string may take the form — speaker: "content of speech", for example, Jia said: "xxxx". In other embodiments, a dialogue string may take the form — "content of speech", speaker; for example, "xxxx", Zhang San said softly. Dialogue strings can also take other forms, which are not repeated here. Based on this, in order to accurately recognize the dialogue string set of the target document, the embodiment of the invention needs to use multiple regular expressions for dialogue string matching. In other words, in the embodiment of the invention, the number of regular expressions is at least one.
Each regular expression is determined according to the features of a dialogue string sample. For example, assume the feature of a dialogue string sample is — speaker: "content of speech"; the idea of the regular expression determined from this sample should then be: match all character strings that start with " and end with ", preceded by the speaker. For another example, assume the feature of a dialogue string sample is — "content of speech", speaker; the idea of the regular expression determined from this sample should then be: match all character strings that start with " and end with ", followed by the speaker.
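The two sample shapes just described can be sketched with two illustrative Python regular expressions. These patterns are simplifications for illustration (real documents use varied punctuation, quote styles and verbs, so a production system would need a larger pattern set, as the text notes); the example sentences and names are hypothetical.

```python
import re

# Pattern A: speaker first --  Zhang San said: "content"
pattern_a = re.compile(r'(\w+(?:\s\w+)*)\s+said:\s*"([^"]+)"')

# Pattern B: content first --  "content", Jia said slowly.
pattern_b = re.compile(r'"([^"]+)"\s*,\s*(\w+(?:\s\w+)*)\s+said')

text_a = 'Zhang San said: "You are the most stubborn person I have met."'
text_b = '"I never thought you were such a person", Jia said slowly.'

print(pattern_a.search(text_a).groups())  # (speaker, content)
print(pattern_b.search(text_b).groups())  # (content, speaker)
```

Running each document against the whole pattern set and collecting every match yields the dialogue string set described in step S202.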
S203, obtaining the identity of the target speaker corresponding to the target dialogue string.
As described above, the target dialogue string is any one dialogue string in the dialogue string set. In the description of steps S203 to S205, the target dialogue string is taken as an example to introduce how to determine the acoustic information of the target speaker corresponding to the target dialogue string; for the other dialogue strings in the dialogue string set, the acoustic information of the corresponding speaker can be determined in the same way as for the target dialogue string.
In one embodiment, the identity of the target speaker corresponding to the target dialogue string is used to identify who utters the target dialogue string; the identity can be the name, pseudonym, code name or nickname of the target speaker, etc. For example, assume the target dialogue string is (Zhang San said: "You are the most stubborn person I have ever met."); the identity of the corresponding target speaker is Zhang San. For another example, if the target dialogue string is ("I never thought you were such a person", Jia said slowly), the identity of the corresponding target speaker is Jia.
The following takes the case where the identity of the target teller includes the name of the target teller as an example to introduce how the identity is obtained. In one embodiment, the server may use a natural language processing model, in combination with the characters before and after the target dialogue character string, to obtain the identity of the target teller corresponding to the target dialogue character string. Specifically, if the identity of the target teller includes the name of the target teller, obtaining the identity of the target teller corresponding to the target dialogue character string includes: selecting, from the character data based on a context analysis rule, the analysis reference characters corresponding to the target dialogue character string, and composing the analysis reference characters and the dialogue character string into a reference character set; and performing semantic analysis processing on the reference character set using the natural language processing model, to determine the name of the target teller corresponding to the target dialogue character string.
The context analysis rule indicates how many characters before the first character of the target dialogue character string, and how many characters after its end character, are selected as the analysis reference characters corresponding to the target dialogue character string. The context analysis rule may be determined from historical experience. For example, if, over 100 historical acquisitions of the teller names corresponding to dialogue character strings, the 100 characters before the first character of the dialogue character string and the 50 characters after its end character were used as analysis reference characters more than 70 times (or some other number of times), then the context analysis rule may be set to indicate: select the 100 characters before the first character of the target dialogue character string and the 50 characters after its end character as the analysis reference characters.
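The selection step described by the context analysis rule can be sketched as follows; the function name and the default window sizes (taken from the 100/50 example above) are illustrative assumptions:

```python
def build_reference_character_set(character_data: str, start: int, end: int,
                                  before: int = 100, after: int = 50):
    """Select the analysis reference characters around the dialogue string
    character_data[start:end]: `before` characters before its first character
    and `after` characters after its end character (the context analysis rule)."""
    left = character_data[max(0, start - before):start]
    right = character_data[end:end + after]
    # analysis reference characters + dialogue character string = reference character set
    return left, character_data[start:end], right

data = "Jia then expressed his own view: “xxxxxx”. Bing praised it."
start = data.find("xxxxxx")
left, dialogue, right = build_reference_character_set(data, start, start + 6)
print(dialogue)  # → xxxxxx
```

The three returned pieces together form the reference character set that is handed to the natural language processing model in the next step.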
The natural language processing model can perform contextual semantic understanding and analysis on a character string; the model may be obtained by training with a large amount of sample data. Suppose the reference character set determined based on the context analysis rule is: "Jia did not quite accept Yi's statement, so he expressed his own view: "xxxxxx". After hearing it, Bing let out a sound of praise.", where the target dialogue character string is "xxxxxx" and the analysis reference characters are "Jia did not quite accept Yi's statement, so he expressed his own view" and "After hearing it, Bing let out a sound of praise". Performing semantic analysis on the reference character set with the natural language processing model, it can be determined that three names appear in the reference character set; but through semantic understanding it is known that the target dialogue character string is spoken by Jia to express a view different from Yi's. Therefore, the name of the target teller corresponding to the target dialogue character string is Jia.
S204: determine, from the character data according to the identity of the target teller, a feature description character set that performs character feature description of the target teller.
In one embodiment, the character features may include appearance features, age features, gender features, character traits, and so on. It should be understood that there may be more than one feature description character string in the character data that performs character feature description of the target teller; the multiple feature description character strings describing the target teller's character features together constitute the feature description character set. A feature description character string is composed of multiple continuous characters in the character data that perform character feature description of the target teller. For example, a feature description character string may be: "Jia is a mature, steady, thorough-thinking middle-aged man". As another example, a feature description character string may be: "From Yi's dress it can be seen that she is a capable and experienced, fashionable career woman" — all the characters included in this passage are character feature description characters.
In one embodiment, the server determining, from the character data according to the identity of the target teller, the feature description character set that performs character feature description of the target teller may be implemented as: based on the identity of the target teller, calling a feature description recognition model to perform recognition processing on the character data, to obtain the feature description character set that performs character feature description of the target teller. The feature description recognition model is obtained by training with multiple feature description sample characters.
As one feasible embodiment, when training the feature description recognition model, a corresponding feature description recognition model may be trained for each kind of character feature; that is to say, each feature description recognition model can be used to recognize the description characters of one kind of character feature. For example, the gender feature description recognition model can be used to recognize the characters in the character data that describe gender features; the age feature recognition model can be used to recognize the characters in the character data that describe age features; and the character-trait recognition model can be used to recognize the characters in the character data that describe character traits. Optionally, before calling a feature description recognition model to perform recognition processing on the character data, the character feature corresponding to the feature description to be recognized may first be determined, and the corresponding feature description recognition model is then selected according to that character feature to recognize the character data and obtain the feature description character set. For example, if the server determines that a feature description character set describing gender features needs to be recognized, the server selects the gender feature description recognition model to perform recognition processing on the character data.
As another feasible embodiment, a single feature description recognition model may be trained; this feature description recognition model can be used to recognize the description characters of all character features. In this way, when obtaining the feature description character set of the target teller, the server does not need to determine which character feature the feature description characters to be recognized correspond to; it can directly call the feature description recognition model to perform recognition processing on the character data and thereby obtain the feature description character set of the target teller.
S205: perform feature analysis on the feature description character set to determine the acoustic information of the target teller.
The acoustic information includes a sound type, and the sound type may be a male voice or a female voice, a sweet-type voice, a mature and steady-type voice, or the like. The acoustic information of a teller can be determined by one or more kinds of information such as the teller's gender, age, or personality. Optionally, which of the features such as age, gender, and personality are specifically used to determine the acoustic information of a teller may be decided by the user. For example, if the user merely wants to distinguish the parts spoken by male characters from the parts spoken by female characters in the target document, the user can choose to simply mark the target document, and the server can then, according to the user's operation, determine the acoustic information of each teller by gender only. As another example, if the user wants to distinguish in detail features such as the name, age, and personality of each teller in the target document, the user may choose to mark the target document in detail, and the server, according to the user's selection operation, determines the acoustic information of the teller by any two or three of the features such as name, age, and gender.
In a specific implementation, step S205 may include: performing gender feature recognition on the feature description character set to determine the gender of the target teller; and determining the acoustic information of the target teller according to the gender of the target teller. If the gender is male, the sound type included in the acoustic information of the target teller is a male voice; if the gender is female, the sound type included in the acoustic information of the target teller is a female voice. Gender feature analysis of the feature description character set may be implemented by calling a gender feature analysis model. The gender feature analysis model refers to a model that can analyze the feature description character set to obtain a person's gender feature; the model is obtained by training with multiple feature-analysis sample characters. For example, if the feature description character set is "Jia is a mature, steady, thorough-thinking middle-aged man", the gender feature analysis model can determine that Jia's gender feature is male.
In other embodiments, in step S205 the acoustic information of the target teller may also be determined by the teller's gender, age, and personality. Specifically: gender feature analysis, age feature analysis, and character-trait analysis are respectively performed on the feature description character set to obtain the gender, age, and personality of the target teller; the acoustic information of the target teller is then determined according to the gender, age, and personality of the target teller. Optionally, when performing gender feature analysis, age feature analysis, and character-trait analysis on the feature description character set, the corresponding feature analysis models can be called respectively: for example, the gender feature analysis model is called to recognize the gender features included in the feature description character set, the age feature analysis model is called to recognize the age features included in the feature description character set, and the character-trait analysis model is called to recognize the character traits included in the feature description character set. The above gender feature analysis model, age analysis model, and character analysis model are obtained by training in advance; the training process is not described in detail in the embodiment of the present invention.
As an example, suppose that after gender, age, and personality analysis of the feature description character set it is determined that the target teller's gender is female, age is 10–15, and personality is lively and lovely; then the sound type included in the acoustic information determined from the target teller's gender, age, and personality is a female, sweet-type voice. As another example, suppose that after gender, age, and personality analysis of the feature description character set it is determined that the target teller's gender is male, age is 40–46, and personality is mature and steady; then the sound type included in the acoustic information determined from the target teller's gender, age, and personality is a male, mature and deep-type voice.
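As an illustrative sketch of the two examples above (the rule table, thresholds, and function name are assumptions, not fixed by the embodiment), the mapping from analyzed gender, age, and personality to a sound type could look like:

```python
def select_sound_type(gender: str, age: int, personality: str) -> tuple[str, str]:
    """Map (gender, age, personality) to (voice gender, voice style).
    The rules below mirror the two worked examples; any other combination
    falls through to a default voice for that gender."""
    if gender == "female" and age < 16 and personality == "lively and lovely":
        return ("female", "sweet-type voice")
    if gender == "male" and age >= 40 and personality == "mature and steady":
        return ("male", "mature, deep-type voice")
    return (gender, "default voice")

print(select_sound_type("female", 12, "lively and lovely"))
# → ('female', 'sweet-type voice')
```

In a full system this lookup would be driven by however many features the user chose for marking, with the gender-only case collapsing to the first element of the returned pair.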
In other feasible embodiments, determining the acoustic information of the target teller by the teller's gender, age, and personality in step S205 may be implemented as: calling a comprehensive feature description model to perform recognition processing on the feature description character set, to obtain the gender, age, and personality of the target teller; and determining the acoustic information of the target teller according to the gender, age, and personality of the target teller. The comprehensive feature description model refers to a model that can recognize features such as the gender, age, and personality of the target teller.
It should be understood that the dialogue string set included in the target document includes at least one dialogue character string, and the target dialogue character string is any one dialogue character string in the dialogue string set. Steps S202–S205 of the embodiment of the present invention take the target dialogue character string as an example to introduce how to determine the acoustic information of the teller corresponding to the target dialogue character string; for the other dialogue character strings in the dialogue string set, steps S202–S205 can likewise be used to determine the acoustic information of their respective corresponding tellers, which the embodiment of the present invention does not repeat.
In the embodiment of the present invention, structural recognition processing is performed on the character data included in the obtained target document to determine the dialogue string set in the character data. Further, for a target dialogue character string in the dialogue string set, the identity of the corresponding target teller is recognized, and the feature description character set that performs character feature description of the target teller is then searched for in the character data; after feature analysis is performed on the feature description character set, the acoustic information of the target teller can be determined. This realizes automatically identifying the acoustic information of the teller corresponding to each dialogue character string in the target document.
Refer to Fig. 3, which is a flow diagram of another data processing method provided by an embodiment of the present invention. The data processing method shown in Fig. 3 may include the following steps:
Step S301: if the terminal receives a triggering instruction of marking processing input by the user for the target document, the terminal sends to the server a data processing request for performing data processing on the target document.
It should be understood that the user can read any document content through the terminal, such as a novel, web page information, news, and so on; for convenience of description, the document content the user selects to read is called the document to be read. Specifically, refer to Fig. 4a, a schematic diagram of a user interface of a terminal provided by an embodiment of the present invention: after the user inputs a reading operation in the terminal, for example clicking the icon of a reader, the terminal displays to the user the user interface shown in Fig. 4b. The user interface shown in Fig. 4b may include multiple documents, such as document 1, document 2, document 3, and document 4; if a selection operation by the user on some document is detected, the terminal determines the document corresponding to the selection operation as the document to be read selected by the user. After determining the document to be read, the terminal may display the user interface of the document to be read as shown in Fig. 4c.
In one embodiment, the target document may be the document to be read, or may be a part of the document to be read. For example, if the user inputs the triggering instruction of marking processing when just starting to read the document to be read, the document to be read at this time can be regarded as the target document. In the user interface of the document to be read shown in Fig. 4c, the user inputs the triggering instruction of marking processing: for example, an option 401 for voice reading is displayed in the user interface, and the user clicking 401 determines that the user has input the triggering instruction of marking processing. When the user has not yet started reading the document to be read, selecting voice reading indicates that the user wishes to perform voice reading of the whole document, and at this time the target document may be the document to be read.
As another example, if the user inputs the triggering instruction of marking processing after having read for a period of time, the part of the document to be read that has not yet been read as of the current time may be called the target document. Optionally, the characters after the first character included in the terminal user interface as of the current time may be determined as the unread part of the document to be read. In Fig. 4c, if an operation of the user starting to read is detected, the terminal can display to the user, through a display module, the reading interface of the document to be read as shown in Fig. 4d. As reading time passes, the user may feel eye fatigue, or for other reasons the user inputs the triggering instruction of marking processing in the reading interface of Fig. 4d. For example, the reading interface of Fig. 4d may include a voice reading option or a view-document-marking option; if a selection operation by the user on the voice reading option or the view-document-marking option is detected, it can be determined that the user has input the triggering instruction of marking processing. At this time, the first character in the reading interface at the current time and the characters located after that first character in the document to be read compose the target document.
After the terminal detects the user's triggering instruction, it obtains the target document and its corresponding document identifier, then generates a data processing request carrying the document identifier, and sends the data processing request to the server. The document identifier may be the document title plus a chapter/section identifier; for example, the document identifier is Golden Pupil, Chapter 5, Section 3.
Step S302: based on the document identifier of the target document included in the data processing request, the server obtains the character data included in the target document.
In one embodiment, the server may store multiple documents and the character data each document includes. After the server receives the data processing request, it searches the database, according to the document identifier included in the data processing request, for the target document on which data processing needs to be performed, and obtains the character data included in the target document.
Step S303: the server performs structural recognition processing on the character data to obtain the dialogue string set; the dialogue string set includes the target dialogue character string, and the target dialogue character string includes multiple continuous characters.
Step S304: the server obtains the identity of the target teller corresponding to the target dialogue character string.
Step S305: the server determines, from the character data according to the identity of the target teller, the feature description character set that performs character feature description of the target teller, and performs feature analysis on the feature description character set to determine the acoustic information of the target teller.
In one embodiment, some optional implementations included in steps S302–S305 can be found in the description of the corresponding steps in the embodiment of Fig. 2, and are not described in detail here.
Step S306: add a target index identifier for the target dialogue character string, and use the acoustic information of the target teller together with the target index identifier as the mark reference of the target dialogue character string.
In one embodiment, after the acoustic information of the target teller corresponding to the target dialogue character string is determined, the server may store the target dialogue character string in association with the acoustic information of the target teller, so that when the target dialogue character string subsequently needs to be marked with acoustic information, the mark can be added to the target dialogue character string directly according to the association between the two. However, since the target dialogue character string includes multiple characters, directly storing the target dialogue character string in association with the acoustic information of the target teller may require a large storage space. To solve this problem, the embodiment of the present invention can add a target index identifier for the target dialogue character string and store the target index identifier corresponding to the target dialogue character string in association with the acoustic information of the target teller. When the target dialogue character string needs to be marked with acoustic information, the acoustic information of the corresponding target teller can be found according to the target index identifier of the target dialogue character string. In this way, the storage space of the server can be saved.
The target index identifier of the target dialogue character string can uniquely indicate the target dialogue character string. In one embodiment, the implementation of adding the target index identifier for the target dialogue character string may be: obtaining the first character included in the target dialogue character string and first location information of that first character in the character data; obtaining the end character included in the target dialogue character string and second location information of that end character in the character data; and generating the target index identifier corresponding to the target dialogue character string according to the first location information and the second location information. The first location information may refer to which character in the character data the first character is, and the second location information may refer to which character in the character data the end character is.
For example, suppose the target dialogue character string is "I know how to overcome my own negative emotions". The first character included in the target dialogue character string is "I", and the end character is the last character of the string. The server determines the location information of these two characters in the character data; for example, "I" is the 18th character in the character data and the end character is the 34th character, so the target index identifier of the target dialogue character string can be (18, 34).
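The (18, 34) example can be sketched as follows, storing only the index identifier alongside the acoustic information rather than the whole dialogue string; the function name and the sample 17-character string are illustrative assumptions:

```python
def make_index_mark(character_data: str, dialogue: str) -> tuple[int, int]:
    """Return the target index identifier (first location, second location):
    the 1-based positions of the dialogue string's first and end characters
    within the character data."""
    start = character_data.index(dialogue)      # 0-based offset of the first character
    return (start + 1, start + len(dialogue))   # 1-based, inclusive

# association storage: index identifier -> acoustic information of the teller
mark_references: dict[tuple[int, int], str] = {}

dialogue = "How to overcome?!"                  # 17 characters
character_data = "." * 17 + dialogue            # its first character is the 18th overall
mark = make_index_mark(character_data, dialogue)
mark_references[mark] = "Jia, male, 20 years old"
print(mark)  # → (18, 34)
```

Because the pair of positions is tiny compared with the string itself, the association store grows with the number of dialogue strings rather than their total length, which is the storage saving the embodiment describes.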
After the target index identifier corresponding to the target dialogue character string has been determined, the target index identifier of the target dialogue character string and the acoustic information of the target teller corresponding to the target dialogue character string are stored in association as the mark reference used when marking the target dialogue character string.
By iteratively executing steps S302–S306, the mark reference corresponding to each dialogue character string in the dialogue string set can be obtained.
Step S307: the server obtains the mark reference set corresponding to the target document, where the mark reference set includes the mark reference of each dialogue character string; marks the target document according to the mark reference set to obtain a marking result; and sends a marking notification carrying the marking result to the terminal.
Step S308: the terminal displays the marking result included in the marking notification; if a voice reading operation input by the user is received, the terminal sends a voice reading request to the server, where the voice reading request includes the reading sound type set for the target teller.
It should be understood that, in addition to dialogue character strings, a document may also include aside (narration) character strings. In the embodiment of the present invention, the remaining characters in the target document other than the dialogue character strings can be determined as aside character strings, and multiple aside character strings compose an aside string set. The mark reference of an aside character string can be preset in the server; for example, in the mark reference of an aside character string, the teller corresponding to the aside character string is marked as aside. Then, the mark references of the dialogue character strings and the mark references of the aside character strings together constitute the mark reference set of the target document.
In one embodiment, the mark reference set corresponding to the target document obtained in step S307 may include the mark reference of each dialogue character string and may also include the mark references of the aside character strings. Marking the target document according to the mark reference set to obtain the marking result may be implemented as: finding the corresponding dialogue character string according to the index identifier of each dialogue character string, and then using the acoustic information of the teller associated with each index identifier as the mark of the corresponding dialogue character string. For example, suppose the dialogue character string corresponding to a first index identifier is a first dialogue character string, and the acoustic information of the first teller associated with the first index identifier is: Jia, male, 20 years old; the dialogue character string corresponding to a second index identifier is a second dialogue character string, and the acoustic information of the second teller associated with the second index identifier is: Yi, male, 30 years old. If the server detects that the acoustic information of tellers is determined according to gender, the mark of the first dialogue character string may be: male + the first dialogue character string, and the mark of the second dialogue character string may be: male + the second dialogue character string. If the server detects that the acoustic information of tellers is determined according to gender, age, personality, and so on, the mark of the first dialogue character string may be: (Jia, male, 20 years old) + the first dialogue character string, and the mark of the second dialogue character string may be: (Yi, male, 30 years old) + the second dialogue character string.
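The marking step just described can be sketched as a lookup over the association store, recovering each dialogue string by its index identifier and prepending the teller's acoustic information; the function name and output format are illustrative assumptions:

```python
def mark_document(character_data: str,
                  mark_references: dict[tuple[int, int], str]) -> list[str]:
    """For each index identifier (1-based, inclusive), recover the dialogue
    string it points at and attach the associated acoustic information."""
    results = []
    for (first, last), acoustic_info in sorted(mark_references.items()):
        dialogue = character_data[first - 1:last]   # recover string by index identifier
        results.append(f"({acoustic_info})+{dialogue}")
    return results

data = "“hi”....“yo”"
refs = {(1, 4): "Jia, male, 20 years old", (9, 12): "Yi, male, 30 years old"}
print(mark_document(data, refs))
# → ['(Jia, male, 20 years old)+“hi”', '(Yi, male, 30 years old)+“yo”']
```

The gender-only marking mode would simply substitute "male"/"female" for the full tuple before formatting.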
In one embodiment, after the server sends the marking notification carrying the marking result to the terminal, the terminal displays the marking result to the user in an interface. For example, refer to Fig. 5a, a schematic diagram of a target document provided by an embodiment of the present invention. In Fig. 5a, 501 denotes the first dialogue character string included in the target document, 502 denotes the second dialogue character string, and 503 denotes an aside character string. Suppose the mark reference of the first dialogue character string stored in the server is: index identifier (the x-th character, the xx-th character) - acoustic information of the first teller Jia (Jia, 18 years old, male, lively); the mark reference of the second dialogue character string is: index identifier (the xx-th character, the xx-th character) - acoustic information of the second teller Yi (Yi, 45 years old, female, sedate); and the mark reference of the aside character string is: aside.
Based on the first index identifier of the first dialogue character string, the server finds the first dialogue character string and the acoustic information of the first teller corresponding to the first dialogue character string, and uses the acoustic information of the first teller as the mark of the first dialogue character string. Similarly, the same processing as above is performed on the second dialogue character string and the aside character string, so that the marking result of the target document can be obtained. The marking notification carrying the marking result is then sent to the terminal, and the terminal displays the marking result in the user interface as shown in Fig. 5b, where 503 denotes the mark of the first dialogue character string, and 504 and 505 respectively denote the marks of the second dialogue character string and the aside character string.
In other embodiments, if the user merely wants to distinguish the male parts and the female parts in the target document, the marking result obtained by the server performing marking processing on the target document may be as shown in Fig. 5c.
In one embodiment, the terminal or the server can set one or more reading sound types for each teller according to the acoustic information of each teller included in the target document. If it is detected that the user inputs an operation of starting voice reading in the interface displaying the marking result, the terminal can display, in the user interface displaying the marking result, the reading sound types set for each teller. The user then selects one reading sound type for each teller according to his or her preference, or the user can choose to let the terminal or server select a default reading sound type for each teller. After the terminal detects that the user has selected a reading sound type for each teller, it submits a voice reading request to the server according to the reading sound type corresponding to each teller.
For example, the marking result shown in Fig. 5b above includes the first teller, the second teller, and the aside. According to the acoustic information of the first teller, the reading sound types set by the terminal or server for the first teller may be, as shown in Fig. 5d, Zhang San, Zhao Wu, Li Si, or lively male; according to the acoustic information of the second teller, the reading sound types set for the second teller may be Xiao Fang, emotional female, and so on; and the sound types set for the aside may be sweet female, emotional male, and so on.
In one embodiment, the user can also modify the mark reference according to the marking result displayed by the terminal. If the user wishes to modify the mark of a certain dialogue character string, the mark modification area corresponding to that dialogue character string can be called up by a long press, a click, or another preset operation. For example, in the interface shown in Fig. 5b or Fig. 5c, suppose the first dialogue character string is the target dialogue character string: a long press on the target dialogue character string brings up the mark modification area 507 as shown in Fig. 5e. If the user performs a selection operation in 507 choosing to modify the mark, the terminal can display the mark modification interface for the target dialogue character string as shown in Fig. 5f. Fig. 5f may include a modify-teller region, a modify-character-string-type region, and a modification-submit region: the modify-teller region is for the user to modify the target teller corresponding to the target dialogue character string; the modify-character-string-type region is for receiving the user's modification of the type of the target dialogue character string; and the modification-submit region is for receiving the confirm-modification operation input by the user.
When the terminal detects that the user has entered, in the speaker modification region, a new speaker different from the target speaker, and has input a confirmation operation in the modification submission region, the terminal generates an annotation reference modification request carrying the new speaker and the target index identifier corresponding to the target dialogue string, and sends the request to the server. The server receives the annotation reference modification request sent by the terminal, obtains the target index identifier in the request, and modifies the annotation reference of the dialogue string corresponding to the target index identifier according to the modification information indicated by the request. In this way, the accuracy of the annotation of the target document can be improved.
It should be understood that the above describes modifying the speaker in an annotation; the user may also modify the index identifier of a dialogue string, the type of a dialogue string, and other information, which the embodiments of the present invention do not enumerate one by one.
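As a rough illustration of the modification flow described above, the sketch below models the server-side annotation table and the handling of a modification request. The field names (`target_index_id`, `modifications`) and the dictionary layout are assumptions for illustration only, not taken from the patent.

```python
# Hypothetical annotation-reference table keyed by target index identifier.
annotation_refs = {
    "12-47": {"speaker": "first speaker", "voice": "lively male"},
    "60-95": {"speaker": "second speaker", "voice": "emotional female"},
}

def apply_modification_request(request: dict) -> None:
    """Server side: look up the annotation reference by the target index
    identifier carried in the request and overwrite the modified fields."""
    ref = annotation_refs[request["target_index_id"]]
    ref.update(request["modifications"])

# Terminal side: the user replaced the speaker in the speaker modification
# region and confirmed; the terminal sends a request like this one.
apply_modification_request({
    "target_index_id": "12-47",
    "modifications": {"speaker": "new speaker"},
})
print(annotation_refs["12-47"]["speaker"])  # → new speaker
```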
Step S309: the server obtains the target index identifier associated with the sound information of the target speaker, and obtains the target dialogue string corresponding to the target index identifier.
Step S3010: the server performs speech synthesis processing on the target dialogue string with the read-aloud voice type, obtains the audio data corresponding to the target dialogue string, and sends a read-aloud notification carrying the audio data to the terminal.
Step S3011: the terminal plays the audio data in the read-aloud notification.
After receiving the voice read-aloud request, the server obtains the sound information of the target speaker included in the request, obtains the target index identifier associated with that sound information, and locates the target dialogue string according to the target index identifier; then, in step S3010, it performs speech synthesis processing on the target dialogue string with the read-aloud voice type carried in the voice read-aloud request, obtaining the audio data corresponding to the target dialogue string.
In one embodiment, it should be understood that the number of dialogue strings corresponding to one speaker in the target document may be one or more. For example, a passage in the target document may read: Speaker A said: "I did not take part because I felt you were capable enough to handle the matter yourself." "I know you did it for my own good," Speaker B said softly. "So if you ever need my help, you can reach me at any time," Speaker A said to Speaker B. In this target document, the number of dialogue strings corresponding to Speaker A is two.
Since each dialogue string corresponds to its own index identifier, and the index identifiers of different dialogue strings are distinct, the number of target index identifiers corresponding to the target speaker obtained by the server in the embodiment of the present invention may be one or more. Step S309 is described taking one index identifier as an example; if the number of target index identifiers corresponding to the target speaker is at least two, the dialogue string corresponding to each of the at least two target index identifiers is obtained, and speech synthesis processing is performed on the dialogue string corresponding to each target index identifier.
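The case of several index identifiers per speaker can be sketched as follows. The association tables and the `synthesize` stub are illustrative assumptions standing in for the server's stored annotation references and its speech-synthesis engine.

```python
# Hypothetical tables: dialogue strings keyed by index identifier, and the
# index identifiers associated with each speaker's sound information.
index_to_dialogue = {
    "5-40": "I did not take part because ...",
    "50-75": "I know you did it for my own good",
    "80-120": "So if you ever need my help ...",
}
speaker_to_indexes = {"Speaker A": ["5-40", "80-120"], "Speaker B": ["50-75"]}

def synthesize(text: str, voice_type: str) -> bytes:
    """Stand-in for the real speech-synthesis engine."""
    return f"<{voice_type}>{text}".encode()

def synthesize_for_speaker(speaker: str, voice_type: str) -> list:
    # Every target index identifier associated with the speaker is resolved
    # to its dialogue string, and each string is synthesized in turn.
    return [synthesize(index_to_dialogue[i], voice_type)
            for i in speaker_to_indexes[speaker]]

clips = synthesize_for_speaker("Speaker A", "lively male")
print(len(clips))  # → 2
```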
In one embodiment, the above description takes only the target speaker as an example to introduce how the server performs speech synthesis on the target dialogue string corresponding to the target speaker according to the voice read-aloud request; for other speakers, speech synthesis processing may be performed on the dialogue strings corresponding to each speaker through steps S308-S3010. When the dialogue strings corresponding to all speakers in the target document have been synthesized into corresponding audio data, the server sends the target audio data of the target document, composed of the audio data corresponding to each dialogue string, to the terminal so that the terminal plays the target audio data.
Combining steps S306-S3011, it can be seen that after the server receives the data processing request sent by the terminal, it processes the target document, obtains the annotation reference set that the target document includes, and annotates the target document according to the annotation reference set to obtain an annotation result, which it sends to the terminal so that the terminal displays the annotation result in the user interface; when the terminal receives a start-read-aloud operation input by the user for the annotation result, it sends a voice read-aloud request to the server; the server then performs speech synthesis processing on the target document according to the terminal's voice read-aloud request, and finally sends the target audio data corresponding to the target document to the terminal for the terminal to play.
In other embodiments, after obtaining the target dialogue string through steps S301-S305, the server may, instead of executing steps S306-S3011 to perform speech synthesis on all the dialogue strings included in the target document and obtain the target audio data corresponding to the target document, execute the following steps: perform speech synthesis on the target dialogue string, and send the audio data corresponding to the target dialogue string obtained by speech synthesis to the terminal, so that the terminal plays the audio data corresponding to the target dialogue string. In this way, the server can synthesize the audio data of each dialogue string while the terminal plays the audio data of the dialogue strings whose synthesis has already been completed; compared with the above scheme, in which the server performs speech synthesis on all dialogue strings and then sends the resulting audio data of all the dialogue strings to the terminal to instruct the terminal to play it, this can speed up the response to the terminal's voice read-aloud request.
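The streaming alternative above can be sketched with a generator that yields each clip as soon as it is synthesized, instead of collecting all audio before responding. The function names and the `synthesize` stub are assumptions for illustration.

```python
from typing import Iterator, List, Tuple

def synthesize(text: str, voice_type: str) -> bytes:
    """Stand-in for a real speech-synthesis engine."""
    return f"<{voice_type}>{text}".encode()

def stream_audio(dialogues: List[Tuple[str, str]]) -> Iterator[bytes]:
    """Yield audio per (text, voice_type) pair so the terminal can begin
    playback before the rest of the document has been synthesized."""
    for text, voice in dialogues:
        yield synthesize(text, voice)

# The terminal receives and can play the first clip immediately.
first_clip = next(stream_audio([("Hello", "sweet female"), ("Hi", "lively male")]))
print(first_clip.decode())  # → <sweet female>Hello
```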
Specifically, after steps S301-S305 are executed, the scheme described above is realized by executing the following steps: obtaining the read-aloud voice type corresponding to the sound information of the target speaker; performing speech synthesis processing on the target dialogue string based on the read-aloud voice type to obtain the audio data corresponding to the target dialogue string; and sending a notification carrying the audio data to the terminal, the notification instructing the terminal to play the audio data. The read-aloud voice type corresponding to the obtained sound information of the target speaker may be preset by the terminal or the server.
In the embodiment of the present invention, when the terminal receives an annotation operation input by the user for the target document, it sends to the server a data processing request for performing data processing on the target document; in response to the received data processing request, the server obtains the character data included in the target document and then performs structure recognition processing on the character data to obtain the dialogue string set; further, for the target dialogue string in the dialogue string set, the server obtains the identity of the target speaker corresponding to the target dialogue string, finds in the character data, according to the identity of the target speaker, the feature description characters that describe the character features of the target speaker, and analyzes the feature description characters to determine the sound information of the target speaker; further, the server adds a target index identifier to the target dialogue string, uses the target index identifier and the sound information of the target speaker as the annotation reference of the target dialogue string, and iterates the above steps to obtain the annotation reference of each dialogue string in the dialogue string set; the server annotates the target document according to the annotation reference of each dialogue string and the annotation reference of the narration strings to obtain an annotation result, and sends an annotation notification carrying the annotation result to the terminal, which displays the annotation result in the user interface; if the user inputs a start-read-aloud operation, the terminal sends a voice read-aloud request to the server, and the server synthesizes the audio data of the target document according to the voice read-aloud request and sends it to the terminal for the terminal to play. In the above embodiments, the server can automatically recognize the sound information of the speaker corresponding to each dialogue string in the target document, and can annotate the target document according to the sound information of the speakers; compared with existing manual annotation, this improves annotation efficiency.
Based on the description of the above data processing method, an embodiment of the present invention further discloses a data processing apparatus, which can perform the methods shown in Fig. 2 and Fig. 3. Referring to Fig. 6, the data processing apparatus may run the following units:
an acquiring unit 601, configured to obtain the character data included in the target document;
a processing unit 602, configured to perform structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string, and the target dialogue string including a plurality of dialogue characters;
the acquiring unit 601 is further configured to obtain the identity of the target speaker corresponding to the target dialogue string;
the processing unit 602 is further configured to determine, from the character data according to the identity of the target speaker, the feature description character set for describing the character features of the target speaker; and
the processing unit 602 is further configured to perform feature analysis on the feature description character set to determine the sound information of the target speaker.
In one embodiment, the identity of the target speaker includes the name of the target speaker. When obtaining the identity of the target speaker corresponding to the target dialogue string, the acquiring unit 601 performs the following operations: selecting, from the character data according to a context analysis rule, the analysis reference characters corresponding to the target dialogue string, and forming a reference character set from the analysis reference characters and the target dialogue string; and performing semantic analysis processing on the reference character set using a natural language processing model to determine the name of the target speaker corresponding to the target dialogue string.
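As a toy stand-in for this step, the sketch below takes a window of characters around the dialogue string as the reference set (a simple context analysis rule) and guesses the speaker name with a "<Name> said" pattern. This heuristic is an assumption for illustration; the patent describes using a natural language processing model for the semantic analysis.

```python
import re
from typing import Optional

def speaker_name(character_data: str, start: int, end: int,
                 window: int = 30) -> Optional[str]:
    # Context analysis rule: the characters just before and after the
    # dialogue string form the reference character set.
    context = character_data[max(0, start - window):end + window]
    # Crude semantic analysis: a capitalized word followed by "said".
    m = re.search(r"([A-Z][a-z]+) said", context)
    return m.group(1) if m else None

doc = 'Alice said: "We leave at dawn."'
print(speaker_name(doc, doc.index('"'), len(doc)))  # → Alice
```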
In one embodiment, the data processing apparatus further includes a storage unit 603:
the processing unit 602 is further configured to add a target index identifier to the target dialogue string; and
the storage unit 603 is configured to store, in association, the sound information of the target speaker and the target index identifier as the annotation reference of the target dialogue string.
In one embodiment, when adding the target index identifier to the target dialogue string, the processing unit 602 performs the following operations: obtaining the first character included in the target dialogue string and the first location information of the first character in the character data; obtaining the end character included in the target dialogue string and the second location information of the end character in the character data; and generating the index identifier corresponding to the target dialogue string according to the first location information and the second location information.
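A minimal sketch of this index-identifier scheme: the identifier is derived from the positions of the first and last characters of the dialogue string within the character data. The concrete "start-end" string format is an assumption; the patent only requires that the identifier be generated from the two location values.

```python
def make_index_id(character_data: str, dialogue: str) -> str:
    start = character_data.find(dialogue)   # first location information
    end = start + len(dialogue) - 1         # second location information
    # Combine the two locations into a unique identifier for this string.
    return f"{start}-{end}"

doc = 'He smiled. "Good morning," he said.'
print(make_index_id(doc, '"Good morning,"'))  # → 11-25
```

Because positions within the document are unique, two different dialogue strings can never share an identifier, which is what lets the annotation reference and the later read-aloud request resolve a specific dialogue string unambiguously.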
In one embodiment, the acquiring unit 601 is further configured to obtain the annotation reference set corresponding to the target document, the annotation reference set including the annotation reference of each dialogue string included in the dialogue string set of the target document; the processing unit 602 is further configured to annotate the target document according to the annotation reference set to obtain an annotation result, and to send an annotation notification carrying the annotation result to the terminal, the annotation notification instructing the terminal to display the annotation result in the user interface, the annotation result including each character in each dialogue string and the sound information of the speaker corresponding to each dialogue string.
In one embodiment, the data processing apparatus further includes a receiving unit 604:
the receiving unit 604 is configured to receive a voice read-aloud request sent by the terminal, the voice read-aloud request including a read-aloud voice type set for the sound information of the target speaker;
the acquiring unit 601 is further configured to obtain the target index identifier associated with the sound information of the target speaker, and to obtain the target dialogue string corresponding to the target index identifier; and
the processing unit 602 is further configured to perform speech synthesis processing on the target dialogue string with the read-aloud voice type, obtain the audio data corresponding to the target dialogue string, and send a read-aloud notification carrying the audio data to the terminal, the read-aloud notification instructing the terminal to play the audio data.
In one embodiment, the user interface of the terminal includes an annotation modification area. The receiving unit 604 is further configured to receive an annotation reference modification request sent by the terminal, the request being generated by the terminal upon receiving a modification operation input by the user in the annotation modification area, the annotation modification area being displayed in the user interface after the terminal receives a preset operation input by the user on the target dialogue string. The acquiring unit 601 is further configured to obtain the target index identifier in the annotation reference modification request, and to modify the annotation reference of the target dialogue string corresponding to the target index identifier according to the modification information indicated by the request.
In one embodiment, when performing structure recognition processing on the character data to obtain the dialogue string set, the processing unit 602 performs the following operations: obtaining a regular expression for recognizing dialogue strings, the regular expression being determined according to the features of dialogue string samples; and performing matching processing between the character data and the regular expression to obtain at least one dialogue string included in the character data, the at least one dialogue string constituting the dialogue string set.
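A minimal sketch of this regex-based recognition step, assuming dialogue appears in double quotes. The pattern below is an illustrative assumption; per the text, a real pattern would be derived from the features of dialogue string samples (which may include other quote styles, dashes, etc.).

```python
import re

# Assumed pattern: a quoted run of characters counts as one dialogue string.
DIALOGUE_RE = re.compile(r'"[^"]+"')

def dialogue_string_set(character_data: str) -> list:
    # Matching the character data against the regular expression yields
    # every dialogue string it contains.
    return DIALOGUE_RE.findall(character_data)

doc = 'Jia said: "I did not take part." "I know," Yi said softly.'
print(dialogue_string_set(doc))  # → ['"I did not take part."', '"I know,"']
```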
In one embodiment, when performing feature analysis on the feature description character set to determine the sound information of the target speaker, the processing unit 602 performs the following operations: performing gender feature analysis on the feature description character set to determine the gender of the target speaker; and determining the sound information of the target speaker according to the gender of the target speaker.
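The gender feature analysis might be sketched as below. The cue-word lists and the gender-to-voice mapping are illustrative assumptions; the patent leaves the concrete analysis method open.

```python
# Hypothetical cue words signalling the speaker's gender in the feature
# description character set.
MALE_CUES = {"he", "his", "man", "boy"}
FEMALE_CUES = {"she", "her", "woman", "girl"}

def sound_info(feature_chars: str) -> dict:
    words = set(feature_chars.lower().split())
    male = len(words & MALE_CUES)
    female = len(words & FEMALE_CUES)
    gender = "male" if male >= female else "female"
    # Sound information is then chosen according to the inferred gender.
    voice = "lively male" if gender == "male" else "sweet female"
    return {"gender": gender, "voice": voice}

print(sound_info("she spoke softly, her voice trembling")["gender"])  # → female
```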
According to an embodiment of the present invention, each step involved in the methods shown in Fig. 2 or Fig. 3 may be performed by the units in the data processing apparatus shown in Fig. 6. For example, steps S201 and S203 shown in Fig. 2 may be performed by the acquiring unit 601 in the data processing apparatus shown in Fig. 6, and steps S202, S204, and S205 may be performed by the processing unit 602 in the data processing apparatus shown in Fig. 6; as another example, steps S302, S304, S403, S307, and S309 shown in Fig. 3 may be performed by the acquiring unit 601 in the data processing apparatus shown in Fig. 6, and steps S303, S305-S306, and S3010 may be performed by the processing unit 602 in the data processing apparatus shown in Fig. 6.
According to another embodiment of the present invention, the units in the data processing apparatus shown in Fig. 6 may be separately or wholly combined into one or several other units, or one (or some) of the units may be further split into multiple functionally smaller units; this can achieve the same operations without affecting the realization of the technical effects of the embodiments of the present invention. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the data processing apparatus may also include other units, and in practical applications these functions may also be realized with the assistance of other units, or realized by multiple units in cooperation.
According to another embodiment of the present invention, the data processing apparatus shown in Fig. 6 may be constructed, and the data processing method of the embodiments of the present invention realized, by running a computer program (including program code) capable of performing each step of the relevant methods shown in Fig. 2 or Fig. 3 on a general-purpose computing device, such as a computer, that includes processing elements and storage elements such as a central processing unit (CPU), a random access storage medium (RAM), and a read-only storage medium (ROM). The computer program may be recorded on, for example, a computer-readable storage medium, loaded into the above computing device via the computer-readable storage medium, and run therein.
In the embodiments of the present invention, structure recognition processing is performed on the character data included in the obtained target document to determine the dialogue string set in the character data; further, for the target dialogue string in the dialogue string set, the identity of the target speaker corresponding to the target dialogue string is recognized, and the feature description character set that describes the character features of the target speaker is then searched for in the character data; after feature analysis is performed on the feature description character set, the sound information of the target speaker can be determined, realizing automatic recognition of the sound information of the speaker corresponding to each dialogue string in the target document.
Based on the description of the above method embodiments and apparatus embodiments, an embodiment of the present invention further provides a server. Referring to Fig. 7, the server may include a processor 701 and a computer storage medium 702.
The computer storage medium 702 may be stored in the memory of the server; the computer storage medium 702 is used to store a computer program, the computer program including program instructions, and the processor 701 is used to execute the program instructions stored in the computer storage medium 702. The processor 701, or CPU (Central Processing Unit), is the computing core and control core of the server, adapted to implement one or more instructions, and in particular adapted to load and execute one or more instructions to realize the corresponding method processes or functions. In one embodiment, the processor 701 described in the embodiment of the present invention can be used to perform: obtaining the character data included in a target document; performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string, and the target dialogue string including a plurality of dialogue characters; obtaining the identity of the target speaker corresponding to the target dialogue string; determining, from the character data according to the identity of the target speaker, the feature description character set for describing the character features of the target speaker; and performing feature analysis on the feature description character set to determine the sound information of the target speaker.
An embodiment of the present invention further provides a computer storage medium (memory), which is a memory device in the server for storing programs and data. It can be understood that the computer storage medium here may include a built-in storage medium of the server, and may naturally also include an extended storage medium supported by the server. The computer storage medium provides storage space, which stores one or more instructions suitable to be loaded and executed by the processor 701; these instructions may be one or more computer programs (including program code). It should be noted that the computer storage medium here may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one magnetic disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in the computer storage medium may be loaded and executed by the processor 701 to realize the corresponding steps of the methods in the above data processing method embodiments; in a specific implementation, the one or more instructions in the computer storage medium are loaded by the processor 701 to execute the following steps:
obtaining the character data included in a target document; performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string, and the target dialogue string including a plurality of dialogue characters; obtaining the identity of the target speaker corresponding to the target dialogue string; determining, from the character data according to the identity of the target speaker, the feature description character set for describing the character features of the target speaker; and performing feature analysis on the feature description character set to determine the sound information of the target speaker.
In one embodiment, the identity of the target speaker includes the name of the target speaker. When obtaining the identity of the target speaker corresponding to the target dialogue string, the processor 701 performs the following operations: selecting, from the character data according to a context analysis rule, the analysis reference characters corresponding to the target dialogue string, and forming a reference character set from the analysis reference characters and the target dialogue string; and performing semantic analysis processing on the reference character set using a natural language processing model to determine the name of the target speaker corresponding to the target dialogue string.
In one embodiment, the one or more instructions in the computer storage medium are loaded by the processor 701 to further execute the following steps: adding a target index identifier to the target dialogue string; and storing, in association, the sound information of the target speaker and the target index identifier as the annotation reference of the target dialogue string.
In one embodiment, when adding the target index identifier to the target dialogue string, the processor 701 performs the following operations: obtaining the first character included in the target dialogue string and the first location information of the first character in the character data; obtaining the end character included in the target dialogue string and the second location information of the end character in the character data; and generating the target index identifier corresponding to the target dialogue string according to the first location information and the second location information.
In one embodiment, the one or more instructions in the computer storage medium are loaded by the processor 701 to further execute the following steps: obtaining the annotation reference set corresponding to the target document, the annotation reference set including the annotation reference of each dialogue string included in the dialogue string set of the target document; and annotating the target document according to the annotation reference set to obtain an annotation result, and sending an annotation notification carrying the annotation result to the terminal, the annotation notification instructing the terminal to display the annotation result in the user interface, the annotation result including each character in each dialogue string and the sound information of the speaker corresponding to each dialogue string.
In one embodiment, the one or more instructions in the computer storage medium are loaded by the processor 701 to further execute the following steps: receiving a voice read-aloud request sent by the terminal, the voice read-aloud request including a read-aloud voice type set for the sound information of the target speaker; obtaining the target index identifier associated with the sound information of the target speaker, and obtaining the target dialogue string corresponding to the target index identifier; performing speech synthesis processing on the target dialogue string with the read-aloud voice type to obtain the audio data corresponding to the target dialogue string; and sending a read-aloud notification carrying the audio data to the terminal, the read-aloud notification instructing the terminal to play the audio data.
In one embodiment, the user interface of the terminal includes an annotation modification area, and the one or more instructions in the computer storage medium are loaded by the processor 701 to further execute the following steps: receiving an annotation reference modification request sent by the terminal, the request being generated by the terminal upon receiving a modification operation input by the user in the annotation modification area, the annotation modification area being displayed in the user interface after the terminal receives a preset operation input by the user on the target dialogue string; and obtaining the target index identifier in the annotation reference modification request, and modifying the annotation reference of the target dialogue string corresponding to the target index identifier according to the modification information indicated by the request.
In one embodiment, when performing structure recognition processing on the character data to obtain the dialogue string set, the processor 701 performs the following operations: obtaining a regular expression for recognizing dialogue strings, the regular expression being determined according to the features of dialogue string samples; and performing matching processing between the character data and the regular expression to obtain at least one dialogue string included in the character data, the at least one dialogue string constituting the dialogue string set.
In one embodiment, the one or more instructions in the computer storage medium are loaded by the processor 701 to further execute the following steps: obtaining the read-aloud voice type corresponding to the sound information of the target speaker; performing speech synthesis processing on the target dialogue string based on the read-aloud voice type to obtain the audio data corresponding to the target dialogue string; and sending a notification carrying the audio data to the terminal, the notification instructing the terminal to play the audio data.
In one embodiment, when performing feature analysis on the feature description character set to determine the sound information of the target speaker, the processor 701 performs the following operations: performing gender feature analysis on the feature description character set to determine the gender of the target speaker; and determining the sound information of the target speaker according to the gender of the target speaker.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above embodiment methods can be completed by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and when executed may include the processes of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure is only some embodiments of the present invention, which certainly cannot limit the scope of the rights of the present invention; therefore, equivalent changes made in accordance with the claims of the present invention still fall within the scope of the present invention.
Claims (13)
1. A data processing method, characterized by comprising:
obtaining character data included in a target document;
performing structure recognition processing on the character data to obtain a dialogue string set, the dialogue string set including a target dialogue string, and the target dialogue string including a plurality of dialogue characters;
obtaining an identity of a target speaker corresponding to the target dialogue string;
determining, from the character data according to the identity of the target speaker, a feature description character set for describing character features of the target speaker; and
performing feature analysis on the feature description character set to determine sound information of the target speaker.
2. The method according to claim 1, characterized in that the identity of the target speaker includes a name of the target speaker, and the obtaining an identity of a target speaker corresponding to the target dialogue string comprises:
selecting, from the character data based on a context analysis rule, analysis reference characters corresponding to the target dialogue string, and combining the analysis reference characters and the target dialogue string into a reference character set; and
performing semantic analysis processing on the reference character set by using a natural language processing model, to determine the name of the target speaker corresponding to the target dialogue string.
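As an illustration only, the context-analysis step of claim 2 can be sketched with a fixed character window standing in for the analysis reference characters, and a toy `<Name> said` pattern standing in for the natural-language-processing model; the window size and the pattern are assumptions, not the patented model.

```python
import re

def extract_speaker_name(character_data: str, dialogue: str, window: int = 30):
    """Toy heuristic for claim 2: gather `window` characters on each side of
    the dialogue string as analysis reference characters, then look for a
    capitalized word next to a speech verb."""
    start = character_data.find(dialogue)
    if start == -1:
        return None
    # Analysis reference characters: context on both sides of the dialogue.
    left = character_data[max(0, start - window):start]
    right = character_data[start + len(dialogue):start + len(dialogue) + window]
    reference_set = left + " " + right
    # Hypothetical rule standing in for the NLP model: "<Name> said" / "said <Name>".
    m = re.search(r"([A-Z][a-z]+)\s+said|said\s+([A-Z][a-z]+)", reference_set)
    if m:
        return m.group(1) or m.group(2)
    return None

text = 'Alice said, "It is late." Bob nodded.'
print(extract_speaker_name(text, '"It is late."'))  # Alice
```

A production system would replace the regular expression with the semantic analysis model the claim actually recites; the window merely bounds how much context that model sees.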
3. The method according to claim 1, characterized in that the method further comprises:
adding a target index identifier for the target dialogue string; and
storing the sound information of the target speaker in association with the target index identifier, as an annotation reference of the target dialogue string.
4. The method according to claim 3, characterized in that the adding a target index identifier for the target dialogue string comprises:
obtaining a first character included in the target dialogue string and first location information of the first character in the character data;
obtaining an end character included in the target dialogue string and second location information of the end character in the character data; and
generating the target index identifier corresponding to the target dialogue string according to the first location information and the second location information.
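A minimal sketch of claim 4's identifier generation, assuming the identifier is the two character positions fused into a `start-end` string; the claim only requires that it be generated from the first and second location information, so the concrete format is an assumption.

```python
def make_index_id(character_data: str, dialogue: str) -> str:
    """Claim 4 sketch: locate the first and end characters of the dialogue
    string in the character data and fuse their positions into an index
    identifier (the "start-end" format is illustrative)."""
    first_pos = character_data.find(dialogue)      # first location information
    if first_pos == -1:
        raise ValueError("dialogue string not found in character data")
    second_pos = first_pos + len(dialogue) - 1     # second location information
    return f"{first_pos}-{second_pos}"

doc = 'He whispered, "Run."'
print(make_index_id(doc, '"Run."'))  # 14-19
```

Because the identifier is derived from character offsets, it stays unique per occurrence even when the same dialogue line appears twice in a document.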
5. The method according to claim 3, characterized in that the method further comprises:
obtaining an annotation reference set corresponding to the target document, wherein the annotation reference set includes the annotation reference of each dialogue string included in the dialogue string set of the target document; and
annotating the target document according to the annotation reference set to obtain an annotation result, and sending an annotation notification carrying the annotation result to a terminal, wherein the annotation notification instructs the terminal to display the annotation result in a user interface, and the annotation result includes each dialogue string and the sound information of the speaker corresponding to each character in each dialogue string.
6. The method according to claim 5, characterized in that the method further comprises:
receiving a voice reading request sent by the terminal, wherein the voice reading request includes a reading sound type set for the sound information of the target speaker;
obtaining the target index identifier associated with the sound information of the target speaker, and obtaining the target dialogue string corresponding to the target index identifier; and
performing speech synthesis processing on the target dialogue string with the reading sound type to obtain audio data corresponding to the target dialogue string, and sending a reading notification carrying the audio data to the terminal, wherein the reading notification instructs the terminal to play the audio data.
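The request-handling flow of claim 6 can be sketched as below. `synth()` is a placeholder for a real speech-synthesis engine (the patent does not fix one), and the dict mapping index identifiers to dialogue strings is an assumption standing in for the stored annotation references of claim 3.

```python
# Hypothetical association of target index identifiers with dialogue strings,
# standing in for the storage step of claim 3.
store = {"14-19": '"Run."'}

def synth(text: str, voice_type: str) -> bytes:
    # Placeholder: a real system would call a TTS engine here.
    return f"[{voice_type}] {text}".encode("utf-8")

def handle_reading_request(index_id: str, reading_voice: str) -> dict:
    """Claim 6 sketch: resolve the dialogue string via its index identifier,
    synthesize audio with the requested reading sound type, and return the
    payload that would be carried in the reading notification."""
    dialogue = store[index_id]
    audio = synth(dialogue, reading_voice)
    return {"audio": audio}  # sent to the terminal, which plays it

print(handle_reading_request("14-19", "baritone"))
```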
7. The method according to claim 5, characterized in that the user interface of the terminal includes an annotation modification area, and the method further comprises:
receiving an annotation reference modification request sent by the terminal, wherein the annotation reference modification request is generated when the annotation modification area receives a modification operation input by a user, and the annotation modification area is displayed in the user interface after the terminal receives a preset operation input by the user on the target dialogue string; and
obtaining the target index identifier in the annotation reference modification request, and modifying, according to modification information indicated by the annotation reference modification request, the annotation reference of the target dialogue string corresponding to the target index identifier.
8. The method according to claim 1, characterized in that the performing structured recognition processing on the character data to obtain a dialogue string set comprises:
obtaining a regular expression for recognizing dialogue strings, wherein the regular expression is determined according to features of dialogue string samples; and
performing matching processing on the character data with the regular expression to obtain at least one dialogue string included in the character data, the at least one dialogue string constituting the dialogue string set.
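A minimal sketch of claim 8, assuming the dialogue string samples are double-quoted spans; the patent leaves the concrete expression to be determined from the samples, so the pattern below is illustrative.

```python
import re

# Regular expression for recognizing dialogue strings, assuming the samples
# are double-quoted spans as in Western typography.
DIALOGUE_RE = re.compile(r'"[^"]+"')

def extract_dialogue_set(character_data: str) -> list:
    """Claim 8 sketch: run the regular expression over the character data and
    collect every dialogue string it matches into the dialogue string set."""
    return DIALOGUE_RE.findall(character_data)

doc = '"Morning," said Ann. She smiled. "Coffee?"'
print(extract_dialogue_set(doc))  # ['"Morning,"', '"Coffee?"']
```

For Chinese-language novels, the pattern would instead target corner brackets or curly quotation marks, which is why the claim derives the expression from dialogue string samples rather than fixing one.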
9. The method according to claim 1, characterized in that the method further comprises:
obtaining a reading sound type corresponding to the sound information of the target speaker;
performing speech synthesis processing on the target dialogue string based on the reading sound type, to obtain audio data corresponding to the target dialogue string; and
sending notification information carrying the audio data to a terminal, wherein the notification information instructs the terminal to play the audio data.
10. The method according to claim 1, characterized in that the performing feature analysis on the feature description character set to determine the sound information of the target speaker comprises:
performing gender feature analysis on the feature description character set to determine a gender of the target speaker; and
determining the sound information of the target speaker according to the gender of the target speaker.
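The gender feature analysis of claim 10 could, under a deliberately simple assumption, be a keyword lookup over the feature description character set; the cue lists and the gender-to-voice mapping below are illustrative stand-ins, not the patented analysis (which could equally be a trained classifier).

```python
# Hypothetical cue words for the gender feature analysis of claim 10.
FEMALE_CUES = {"she", "her", "woman", "girl", "actress"}
MALE_CUES = {"he", "his", "man", "boy", "actor"}

def infer_sound_info(feature_description: str) -> dict:
    """Claim 10 sketch: count gendered cue words in the feature description
    character set, then map the inferred gender to sound information
    (here, an assumed voice type)."""
    words = set(feature_description.lower().split())
    female = len(words & FEMALE_CUES)
    male = len(words & MALE_CUES)
    gender = "female" if female > male else "male"
    voice = "soprano" if gender == "female" else "baritone"
    return {"gender": gender, "voice": voice}

print(infer_sound_info("she was a tall woman with a calm voice"))
```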
11. A data processing apparatus, characterized by comprising:
an obtaining unit, configured to obtain character data contained in a target document; and
a processing unit, configured to perform structured recognition processing on the character data to obtain a dialogue string set, wherein the dialogue string set includes a target dialogue string, and the target dialogue string includes a plurality of dialogue characters;
wherein the obtaining unit is further configured to obtain a name of a target speaker corresponding to the target dialogue string;
the processing unit is further configured to determine, from the character data according to the name of the target speaker, a feature description character set used to describe character features of the target speaker; and
the processing unit is further configured to perform feature analysis on the feature description character set to determine sound information of the target speaker.
12. A server, characterized by comprising:
a processor, adapted to implement one or more instructions; and
a computer storage medium, wherein the computer storage medium stores one or more instructions, and the one or more instructions are adapted to be loaded and executed by the processor to perform the data processing method according to any one of claims 1-10.
13. A computer storage medium, characterized in that the computer storage medium stores computer program instructions, and when the computer program instructions are executed by a processor, the data processing method according to any one of claims 1-10 is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910659254.7A CN110399461A (en) | 2019-07-19 | 2019-07-19 | Data processing method, device, server and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110399461A true CN110399461A (en) | 2019-11-01 |
Family
ID=68324826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910659254.7A Pending CN110399461A (en) | 2019-07-19 | 2019-07-19 | Data processing method, device, server and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110399461A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160027431A1 (en) * | 2009-01-15 | 2016-01-28 | K-Nfb Reading Technology, Inc. | Systems and methods for multiple voice document narration |
CN105869446A (en) * | 2016-03-29 | 2016-08-17 | 广州阿里巴巴文学信息技术有限公司 | Electronic reading apparatus and voice reading loading method |
CN108091321A (en) * | 2017-11-06 | 2018-05-29 | 芋头科技(杭州)有限公司 | A kind of phoneme synthesizing method |
CN108231059A (en) * | 2017-11-27 | 2018-06-29 | 北京搜狗科技发展有限公司 | Treating method and apparatus, the device for processing |
CN109523988A (en) * | 2018-11-26 | 2019-03-26 | 安徽淘云科技有限公司 | A kind of text deductive method and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111158630A (en) * | 2019-12-25 | 2020-05-15 | 网易(杭州)网络有限公司 | Play control method and device |
CN111158630B (en) * | 2019-12-25 | 2023-06-23 | 网易(杭州)网络有限公司 | Playing control method and device |
JP2021170394A (en) * | 2020-10-14 | 2021-10-28 | ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド | Labeling method for role, labeling device for role, electronic apparatus and storage medium |
US11907671B2 (en) | 2020-10-14 | 2024-02-20 | Beijing Baidu Netcom Science Technology Co., Ltd. | Role labeling method, electronic device and storage medium |
CN112989822A (en) * | 2021-04-16 | 2021-06-18 | 北京世纪好未来教育科技有限公司 | Method, device, electronic equipment and storage medium for recognizing sentence categories in conversation |
CN114067340A (en) * | 2022-01-17 | 2022-02-18 | 山东北软华兴软件有限公司 | Intelligent judgment method and system for information importance |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399461A (en) | Data processing method, device, server and storage medium | |
CN107040452B (en) | Information processing method and device and computer readable storage medium | |
US10171659B2 (en) | Customer portal of an intelligent automated agent for a contact center | |
AU2017251686A1 (en) | Intelligent automated agent for a contact center | |
CN107832382A (en) | Method, apparatus, equipment and storage medium based on word generation video | |
CN110189754A (en) | Voice interactive method, device, electronic equipment and storage medium | |
CN107623614A (en) | Method and apparatus for pushed information | |
EP3616081A1 (en) | Transitioning between prior dialog contexts with automated assistants | |
CN110517689A (en) | A kind of voice data processing method, device and storage medium | |
CN109256133A (en) | A kind of voice interactive method, device, equipment and storage medium | |
CN108920450A (en) | A kind of knowledge point methods of review and electronic equipment based on electronic equipment | |
DE102012022733A1 (en) | Advertising system combined with a search engine service and method for carrying it out | |
US11253778B2 (en) | Providing content | |
CN109389427A (en) | Questionnaire method for pushing, device, computer equipment and storage medium | |
JP2022020659A (en) | Method and system for recognizing feeling during conversation, and utilizing recognized feeling | |
CN107195301A (en) | The method and device of intelligent robot semantic processes | |
CN107463684A (en) | Voice replying method and device, computer installation and computer-readable recording medium | |
CN109325178A (en) | Method and apparatus for handling information | |
CN111639218A (en) | Interactive method for spoken language training and terminal equipment | |
CN110399459A (en) | Searching method, device, terminal, server and the storage medium of online document | |
CN106406882A (en) | Method and device for displaying post background in forum | |
JP2006235671A (en) | Conversation device and computer readable record medium | |
CN109683727A (en) | A kind of data processing method and device | |
CN114970733A (en) | Corpus generation method, apparatus, system, storage medium and electronic device | |
CN112908362B (en) | System based on acquisition robot terminal, method and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||