CN115374793A - Voice data processing method based on service scene recognition and related device - Google Patents
- Publication number
- CN115374793A (application CN202211306175.6A)
- Authority
- CN
- China
- Prior art keywords
- text
- sentence pattern
- text sentence
- score
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/194—Calculation of difference between files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
Abstract
The invention discloses a voice data processing method based on service scene recognition and a related device, applied to a voice interaction system. The method comprises: receiving voice information input by a user of an electronic device in a current dialog event, and executing the following operations through a human-computer interaction engine: converting the voice information into an original text; determining a target service scene to which the current dialog event belongs according to the dialog content and/or event-associated information of the current dialog event, wherein the event-associated information comprises at least one of the following: the type of the service or application provided by the electronic device in the current dialog event, and the device type of the electronic device; acquiring a reference word set of the target service scene; performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text adapted to the target service scene; and performing semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result. The accuracy of semantic recognition is thereby improved.
Description
Technical Field
The present invention relates to the field of voice data processing, and in particular, to a voice data processing method and related apparatus based on service scene recognition.
Background
When a voice interaction system interacts with a user, the user's voice information needs to be converted into text, and word segmentation analysis is then performed on the text to infer the user's semantics. When semantic analysis is performed on an externally input sentence, all word segmentation modes need to be exhausted to obtain a plurality of text sentence patterns, and the obtained text sentence patterns are then analyzed to determine the target text finally used for semantic analysis. Thus, when a sentence input by a user is too long and contains a plurality of ambiguous words, the amount of calculation increases sharply.
Disclosure of Invention
In view of the above problems, the present application provides a voice data processing method and related apparatus based on service scene recognition, which determine the reference words of the user's voice information according to the user's target service scene and perform word segmentation on the voice information according to those reference words, so as to reduce the amount of computation of the voice interaction system and improve the accuracy of analysis.
To achieve the above object, in a first aspect, an embodiment of the present application provides a voice data processing method based on service scene recognition, applied to a server of a voice interaction system, where the server is provided with a human-computer interaction engine and the voice interaction system further includes an electronic device in communication connection with the server. The method includes: receiving voice information input by a user of the electronic device in a current dialog event, and executing the following operations through the human-computer interaction engine: converting the voice information into an original text; determining a target service scene to which the current dialog event belongs according to the dialog content and/or event-associated information of the current dialog event, where the event-associated information includes at least one of the following: the type of the service or application provided by the electronic device in the current dialog event, and the device type of the electronic device; acquiring a reference word set of the target service scene; performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text adapted to the target service scene; and performing semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result.
It can be seen that in the embodiment of the present application, by determining the target service scene in which the user is using the human-computer interaction system, the reference word set of that scene can be obtained, and the reference words contained in the original text obtained from the user's voice information can be confirmed. The word-segmentation text sentence pattern most consistent with logic can then be selected according to the reference words, so that the semantic analysis accuracy of the voice interaction model is improved, the amount of calculation of the voice interaction system is reduced, and the analysis efficiency of the system is improved.
With reference to the first aspect, in a possible embodiment, the at least one text sentence pattern includes a plurality of text sentence patterns, and before performing semantic analysis on the original text according to the at least one text sentence pattern, the method further includes: determining each text sentence pattern whose words include single characters as a target text sentence pattern; determining the realizability of the single characters included in each target text sentence pattern; and deleting, from the plurality of text sentence patterns, the text sentence patterns whose realizability is lower than a preset value.
It can be seen that in the embodiment of the present application, by calculating the realizability of the single characters in text sentence patterns containing single-character segmentation, some text sentence patterns whose single characters have too low realizability are preliminarily eliminated before logic detection of the text sentence patterns, so that the amount of calculation of the voice interaction system is reduced and the analysis efficiency of the system is improved.
With reference to the first aspect, in one possible embodiment, the at least one text sentence pattern includes a plurality of text sentence patterns, and performing semantic analysis on the original text according to the at least one text sentence pattern includes: performing logic detection on each text sentence pattern based on the reference words included in that text sentence pattern to obtain a logic score of each text sentence pattern; and performing semantic analysis on the text sentence pattern with the highest logic score to obtain a semantic analysis result.
With reference to the first aspect, in one possible embodiment, performing logic detection on each text sentence pattern based on the reference words included in that text sentence pattern to obtain a logic score of each text sentence pattern includes: determining the words adjacent to the reference words in each text sentence pattern as check words; determining a likelihood score that a reference word and its check words combine into a phrase; and determining the logic score of each text sentence pattern according to the likelihood score.
It can be seen that in the embodiment of the present application, the logic score of a text sentence pattern is determined according to the likelihood score of the combination of a reference word and the words adjacent to it in that text sentence pattern, and the text sentence pattern with the highest score is used as the text sentence pattern corresponding to the original text, so that the finally confirmed text sentence pattern is ensured to be the most logical, and the semantic analysis accuracy of the voice interaction model is improved.
With reference to the first aspect, in one possible embodiment, determining the logic score of each text sentence pattern according to the likelihood score includes: determining the occurrence probability, in the target service scene, of each word among all the words included in each text sentence pattern; determining a coefficient value for each word in each text sentence pattern, where the farther a word in a text sentence pattern is from a reference word, the lower its coefficient value; determining a probability score according to the occurrence probability and coefficient value of each word; and determining the logic score of each text sentence pattern according to the likelihood score and the probability score.
It can be seen that in the embodiment of the present application, the coefficient value of each word is determined according to its distance from the reference word in the text sentence pattern, and together with the occurrence probability of each word in the target service scene this yields the probability score, so that the probability of the words of the text sentence pattern occurring in the target service scene and the overall logic score of the text sentence pattern are evaluated scientifically, the finally confirmed text sentence pattern is ensured to be the most logical, and the semantic analysis accuracy of the voice interaction model is further improved.
With reference to the first aspect, in one possible embodiment, segmenting the original text according to the reference word set includes: determining, according to the reference words included in the original text, whether a preset text library includes a target text whose similarity to the original text is higher than a preset value; and if so, segmenting the original text according to the word segmentation result of the target text.
With reference to the first aspect, in one possible embodiment, the method further includes: acquiring a historical text of the user, where the historical text is converted from a historical voice record of the user, and the text sentence pattern corresponding to the historical text is the text sentence pattern that was used for semantic analysis; and adding the historical text and its corresponding text sentence pattern to the preset text library.
It can be seen that in the embodiment of the present application, the original text is compared with the historical texts, and if the similarity between the original text and a historical text is greater than the preset value, the word-segmentation text sentence pattern of that historical text is taken as the word-segmentation text sentence pattern of the original text, so that the semantic analysis accuracy of the voice interaction system is improved, the amount of calculation is reduced, and the analysis efficiency is improved.
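This reuse of historical word segmentation can be sketched minimally as follows, using a character-level similarity ratio as a stand-in for whatever similarity measure the system actually employs; the threshold, function names, and data layout are all illustrative assumptions, not part of the claimed method:

```python
from difflib import SequenceMatcher

def reuse_historical_segmentation(original_text, text_library, threshold=0.8):
    """Return the stored text sentence pattern of the most similar
    historical text when its similarity to the original text reaches
    the threshold; otherwise return None so normal segmentation runs.
    (Hypothetical names; threshold 0.8 is an assumption.)"""
    best_pattern, best_score = None, 0.0
    for historical_text, pattern in text_library.items():
        score = SequenceMatcher(None, original_text, historical_text).ratio()
        if score > best_score:
            best_score, best_pattern = score, pattern
    return best_pattern if best_score >= threshold else None
```

For a near-duplicate of a stored utterance the stored segmentation is returned directly, skipping enumeration of candidate patterns; for an unrelated utterance the function returns None and the full segmentation pipeline runs.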
In a second aspect, an embodiment of the present application provides a voice data processing apparatus based on service scene recognition, applied to a server of a voice interaction system, where the server is provided with a human-computer interaction engine and the voice interaction system further includes an electronic device in communication connection with the server. The apparatus includes:
a receiving unit, configured to receive voice information input by a user of the electronic device in the current dialog event, the following operations being executed through the human-computer interaction engine: converting the voice information into an original text;
a determination unit, configured to determine a target service scene to which the current dialog event belongs according to the dialog content and/or event-associated information of the current dialog event, where the event-associated information includes at least one of the following: the type of the service or application provided by the electronic device in the current dialog event, and the device type of the electronic device; and
an analysis unit, configured to acquire a reference word set of the target service scene, perform word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text adapted to the target service scene, and perform semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result.
In a third aspect, embodiments of the present application provide an electronic device, including a processor, a memory, a communication interface, and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, where the one or more programs include instructions for performing part or all of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the method according to the first aspect.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a voice data processing system based on service scene recognition according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a voice data processing method based on service scene recognition according to an embodiment of the present application;
Fig. 3 is a schematic view of a voice interaction interface of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic view of a voice interaction interface of a vehicle-mounted navigation electronic device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a voice data processing apparatus based on service scene recognition according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
Embodiments of the present application are described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a voice data processing system based on service scene recognition according to an embodiment of the present application. As shown in the figure, the voice data processing system 100 based on service scene recognition includes an intelligent electronic device 101 and a server 102. The intelligent electronic device 101 may specifically be a mobile phone, a tablet computer, a computer with a wireless transceiving function, a wearable device, a vehicle-mounted device, a robot, a smart home device, or the like; it is configured to send the user's voice information and text information to the server 102 and to receive feedback information sent by the server 102. The server 102 is configured to receive the voice information, text information, and the like sent by the intelligent electronic device 101. The server 102 further includes a human-computer interaction engine 1021, which is configured to analyze and understand the user's requirement according to the voice information and text information sent by the intelligent electronic device 101 and to generate feedback information.
Referring to fig. 2, fig. 2 is a flowchart illustrating a voice data processing method based on service scene recognition according to an embodiment of the present application, and as shown in fig. 2, the method includes steps S201 to S205.
S201: receiving voice information input by a user of the electronic device in the current dialog event, and executing the following operations through the human-computer interaction engine: converting the voice information into an original text;
specifically, the user enters voice information through the electronic device, and the voice information here may be, for example, voice information in the form of a question, such as "how is the weather today and how is the exhibition center. And after receiving the voice information recorded by the electronic equipment, the server converts the voice information into text information to obtain an original text corresponding to the voice information. The user can also directly input text information to the electronic equipment, the electronic equipment sends the text information input by the user to the server, and the server directly confirms the text information as an original text.
For example, please refer to fig. 3, which is a schematic view of a voice interaction interface of an electronic device according to an embodiment of the present application. The user sends voice information to the server through the electronic device; after receiving the user's voice information, the server generates feedback information according to it, and the feedback information is displayed through the electronic device. For instance, when the user sends voice information such as "get a car to the airport" to the server through the electronic device, the server generates feedback information according to the voice information, and the electronic device starts the corresponding car-hailing service.
S202: determining a target service scene to which the current dialog event belongs according to the dialog content and/or event-associated information of the current dialog event, where the event-associated information includes at least one of the following: the type of the service or application provided by the electronic device in the current dialog event, and the device type of the electronic device.
Specifically, the target service scene to which the current dialog event belongs may be acquired according to the dialog content or the event-associated information; the target service scene may be a shopping scene, a navigation scene, a learning scene, or the like. The original text is analyzed and the corresponding target service scene is obtained according to the analysis result, or the corresponding target service scene is obtained according to the device type of the electronic device.
For example, please refer to fig. 4, which is a schematic view of a voice interaction interface of a vehicle-mounted navigation electronic device according to an embodiment of the present application. If the electronic device is a vehicle-mounted navigation electronic device, the server obtains the device type together with the voice information sent by the device. When the type of the electronic device sending the voice information is determined to be a vehicle-mounted navigation electronic device, the target service scene of the voice information can be directly determined to be a navigation scene. When the original text received by the server is "navigate to XX community", "navigate" can be confirmed as a reference word of the original text.
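The scene determination described above can be sketched as follows; the device-type and keyword mappings are illustrative assumptions of this sketch, not part of the claimed method:

```python
def determine_target_scene(device_type=None, dialog_text=""):
    """Map event-associated information (device type) and/or dialog
    content to a target service scene. All mappings are hypothetical."""
    # Device type alone can decide the scene (e.g. an in-car navigation unit).
    device_scene = {"car_navigation": "navigation", "smart_speaker": "home_control"}
    if device_type in device_scene:
        return device_scene[device_type]
    # Otherwise fall back to keywords found in the dialog content.
    keyword_scene = {"navigate": "navigation", "buy": "shopping", "lesson": "learning"}
    for keyword, scene in keyword_scene.items():
        if keyword in dialog_text:
            return scene
    return "general"
```

A vehicle-mounted navigation device thus resolves to the navigation scene without inspecting the dialog content, while other devices fall back to content analysis.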
S203: acquiring a reference word set of a target service scene;
specifically, the reference word here refers to a word having a high correlation with the corresponding target service scenario, and may be determined according to the frequency of occurrence of the word in the target service scenario, for example, place names such as "Chongqing", "exhibition center", and the like may be identified as the reference word in the navigation scenario, and the reference word set is all the reference words in the target service scenario.
S204: and performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text, which is adapted to the target service scene.
Specifically, after the target service scenario of the original text is confirmed, all reference words in the original text are confirmed according to the target service scenario, and then word segmentation is performed on other non-reference words to obtain at least one word segmentation result, that is, at least one text sentence pattern.
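Reference-word-anchored segmentation can be sketched as follows for unsegmented text (such as Chinese transcribed from speech); the greedy pinning of reference words and the toy vocabulary are assumptions of this sketch, not the claimed algorithm:

```python
def enumerate_patterns(text, vocabulary, reference_words):
    """Enumerate candidate text sentence patterns of `text`.
    A reference word found at the current position is fixed (not
    branched on), so only non-reference spans yield alternatives."""
    if not text:
        return [[]]
    # Pin a reference word when one starts here, pruning the search.
    for word in reference_words:
        if text.startswith(word):
            return [[word] + rest
                    for rest in enumerate_patterns(text[len(word):], vocabulary, reference_words)]
    patterns = []
    for word in vocabulary:
        if text.startswith(word):
            for rest in enumerate_patterns(text[len(word):], vocabulary, reference_words):
                patterns.append([word] + rest)
    return patterns
```

For the toy input "abc" with vocabulary {"ab", "a", "b", "c"} and reference word "c", only the patterns ["ab", "c"] and ["a", "b", "c"] are produced; fixing the reference word avoids exhausting every split of the raw string.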
S205: and performing semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result.
Specifically, semantic analysis is performed on the original text according to at least one text sentence pattern, that is, the semantic analysis is performed on the original text according to the word segmentation result to obtain the semantics expressed by the original text, so as to send corresponding feedback information to the user.
In one possible embodiment, the at least one text sentence pattern includes a plurality of text sentence patterns, and before semantically analyzing the original text according to the at least one text sentence pattern, the method further includes: determining each text sentence pattern whose words include single characters as a target text sentence pattern; determining the realizability of the single characters included in each target text sentence pattern; and deleting, from the plurality of text sentence patterns, the text sentence patterns whose realizability is lower than a preset value.
Specifically, if the original text corresponds to a plurality of text sentence patterns, it is determined whether each text sentence pattern contains words consisting of a single character; the occurrence probability of each such single character is analyzed, and if it is lower than a preset threshold, the text sentence pattern containing that single character is deleted.
Illustratively, suppose the original text yields two candidate text sentence patterns: "research/life/and/service", in which the only single-character word is "and", and a second pattern in which the word "service" is further split, so that the single character "business" (the second character of "service" in the original Chinese) also appears. The occurrence probability of a single character can be determined according to its occurrence frequency in the target scene: the single character "and" occurs far more frequently than the single character "business", so the probability of "and" is greater than the preset value while the probability of "business" is lower than the preset value. Therefore the text sentence pattern containing the single character "business" is deleted, and "research/life/and/service" is retained.
It can be seen that in the embodiment of the present application, by calculating the realizability of the single characters in text sentence patterns containing single-character segmentation, some text sentence patterns whose single characters have too low realizability are preliminarily eliminated before logic detection of the text sentence patterns, so that the amount of calculation of the voice interaction system is reduced and the analysis efficiency of the system is improved.
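The preliminary elimination step can be sketched as follows, approximating realizability by each single character's relative frequency in the target scene (an assumption of this sketch; names and threshold are hypothetical):

```python
def filter_by_realizability(patterns, char_frequency, threshold=0.001):
    """Keep only candidate patterns whose single-character words all
    have a realizability (scene frequency) at or above the threshold."""
    kept = []
    for pattern in patterns:
        singles = [w for w in pattern if len(w) == 1]
        if all(char_frequency.get(ch, 0.0) >= threshold for ch in singles):
            kept.append(pattern)
    return kept
```

Patterns containing a rare split-off character are dropped before the more expensive logic detection runs on the survivors.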
In one possible embodiment, the at least one text sentence pattern includes a plurality of text sentence patterns, and the semantic analysis of the original text according to the at least one text sentence pattern includes: performing logic detection on each text sentence pattern based on the reference words included in that text sentence pattern to obtain a logic score of each text sentence pattern; and performing semantic analysis on the text sentence pattern with the highest logic score to obtain a semantic analysis result.
Specifically, when the original text corresponds to a plurality of text sentence patterns, logic detection is performed on each of them, the text sentence patterns are ranked according to their logic scores, and the text sentence pattern with the highest logic score is taken as the unique text sentence pattern of the original text for semantic analysis, to obtain the semantic analysis result.
In one possible embodiment, performing logic detection on each text sentence pattern based on the reference words included in that text sentence pattern to obtain a logic score of each text sentence pattern includes: determining the words adjacent to the reference words in each text sentence pattern as check words; determining a likelihood score that a reference word and its check words combine into a phrase; and determining the logic score of each text sentence pattern according to the likelihood score.
Specifically, the logic score reflects the probability that the words preceding and following a reference word appear together with it in the same phrase. If the combination of the reference word with its preceding and following words frequently appears in the same phrase, the logic score of the text sentence pattern is high; if the combination is rare in the same phrase, the logic score of the text sentence pattern is low. The text sentence pattern with the highest logic score is taken as the unique text sentence pattern of the original text for semantic analysis, to obtain the semantic analysis result.
Illustratively, given two candidate text sentence patterns of the same original text in which "life" is a reference word, the logic score of each pattern is determined from the combinations of "life" with the words adjacent to it in that pattern, for example "research" before it and "and" after it. In the pattern whose segmentation is wrong, these adjacent combinations rarely appear together in the same phrase, so its logic score is lower; the correctly segmented pattern "research/life/and/service" therefore receives the higher logic score and is used as the unique text sentence pattern of the original text for semantic analysis, to obtain the semantic analysis result.
It can be seen that in the embodiment of the present application, the logical score of a text sentence pattern is determined according to the likelihood score of the combination of the reference word and the words adjacent to it, and the text sentence pattern with the highest score is used as the text sentence pattern corresponding to the original text. This ensures that the finally confirmed text sentence pattern is the most logical, improving the semantic analysis accuracy of the speech interaction model.
In one possible embodiment, determining a logical score for each text sentence pattern based on the likelihood score comprises: determining the occurrence probability of each word in all words included in each text sentence pattern in a target service scene; determining a coefficient value of each word in each text sentence pattern, wherein the coefficient value is lower for words in the text sentence pattern which are farther away from the reference word; determining probability scores according to the occurrence probability and the coefficient value of each word; a logical score for each text sentence pattern is determined based on the likelihood score and the probability score.
Specifically, when the original text includes multiple text sentence patterns, the logical scores of all the text sentence patterns may be determined according to the probability that all the words of each text sentence pattern appear together in the target service scenario. A coefficient value is determined for each non-reference word according to its distance from the reference word in the text sentence pattern, wherein the farther a word is from the reference word, the lower its coefficient value, and a per-word score is determined from each word's occurrence probability and coefficient value. The sum of the per-word scores in a text sentence pattern is the probability score of that text sentence pattern. The logic score of each text sentence pattern can then be determined according to the likelihood score and the probability score.
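The distance-weighted probability score can be sketched as follows. The patent does not fix the decay function, so the reciprocal decay and the per-word probability table here are illustrative assumptions.

```python
def probability_score(words, ref_index, word_prob):
    """Sum each word's scene-occurrence probability, weighted by a
    coefficient that decreases with distance from the reference word."""
    total = 0.0
    for i, word in enumerate(words):
        distance = abs(i - ref_index)
        coefficient = 1.0 / (1 + distance)  # farther away -> lower coefficient
        total += word_prob.get(word, 0.0) * coefficient
    return total
```

With the reference word first in the pattern, each subsequent word contributes progressively less to the total, matching the rule that more distant words carry lower coefficient values.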
Illustratively, when the likelihood scores of the top-ranked text sentence patterns differ only slightly, the probability score is introduced to calculate the logic score of each pattern; if the likelihood score gap between the first-ranked and second-ranked patterns is already large, the likelihood score is directly taken as the logic score. Alternatively, when the logic score is calculated from the likelihood score and the probability score, the two may be weighted and summed, for example with the likelihood score accounting for 70% of the total score and the probability score for 30%, to obtain the final logic score. The resulting logic scores of the text sentence patterns are then ranked, and the first-ranked text sentence pattern is used as the unique text sentence pattern corresponding to the original text.
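The 70%/30% weighting and the tie-break rule can be sketched together. The gap threshold below is an assumed value, since the text only says the likelihood score is used alone when the gap between the top two patterns is large.

```python
def logic_score(likelihood, probability, w_like=0.7, w_prob=0.3):
    # Weighted sum mirroring the 70%/30% example split.
    return w_like * likelihood + w_prob * probability

def pick_pattern(candidates, gap=0.2):
    """candidates: list of (pattern, likelihood, probability) tuples.
    Use likelihood alone when it is decisive; otherwise combine scores."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] > gap:
        return ranked[0][0]
    return max(candidates, key=lambda c: logic_score(c[1], c[2]))[0]
```

When the likelihood gap is small, a pattern with a modest likelihood but a high probability score can still win the weighted ranking.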
It can be seen that in the embodiment of the present application, the coefficient value of each word is calculated according to its distance from the reference word in the text sentence pattern and combined with each word's occurrence probability in the target service scenario, so that the overall logic score of the text sentence pattern is evaluated systematically. This ensures that the finally confirmed text sentence pattern is the most logical, further improving the semantic analysis accuracy of the speech interaction model.
In one possible embodiment, performing logic detection on each text sentence pattern based on the reference word included in each text sentence pattern to obtain the logic score of each text sentence pattern comprises: obtaining the sentence type of each text sentence pattern, and determining a sentence component list for each text sentence pattern according to the sentence type, wherein the sentence component list comprises the types and quantities of all sentence components required by the corresponding sentence type; and obtaining the logic score of each text sentence pattern, wherein the more closely a text sentence pattern satisfies the required types and quantities of sentence components, the higher its sentence integrity and the higher its logic score.
Specifically, the sentence type of the original text, such as a statement sentence or a question sentence, can be judged according to the tone of the user's speech or particular words in the text sentence pattern, such as "what". The sentence components of a statement sentence may consist of a subject, a predicate and an object; if the word segmentation result of a text sentence pattern whose sentence type is a statement sentence fully satisfies subject, predicate and object, the logical score of that text sentence pattern is highest. If two subjects appear in a text sentence pattern whose sentence type is a statement sentence, the logical score of that text sentence pattern is lower due to the double-subject structure.
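The sentence-integrity check can be illustrated with a toy scorer, assuming the components have already been labeled by some upstream parser; the component labels and required-component lists are invented for this sketch.

```python
# Assumed required components per sentence type.
REQUIRED = {
    "statement": ["subject", "predicate", "object"],
    "question": ["subject", "predicate"],
}

def integrity_score(sentence_type, components):
    """Reward each required component that appears exactly once and
    penalize duplicated components such as a double subject."""
    required = REQUIRED[sentence_type]
    score = sum(1 for c in required if components.count(c) == 1)
    score -= sum(1 for c in set(components) if components.count(c) > 1)
    return score
```

A complete subject-predicate-object statement scores highest, while a pattern with two subjects is penalized, mirroring the double-subject example above.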
In one possible embodiment, segmenting the original text according to the set of reference words comprises: determining whether a preset text library includes a target text whose similarity with the original text is higher than a preset value, according to the reference word set included in the original text; and if so, segmenting the original text according to the segmentation result of the target text.
Specifically, if the similarity between the original text of the user and the target text in the preset text database is higher than a preset threshold, the original text and the target text are considered to be consistent, and the original text is segmented according to the segmentation result of the target text.
For example, in a scenario where the dialog is fixed, the original text may be directly compared with a preset text to determine the text sentence pattern. For instance, the sentence pattern of "where to navigate" in a navigation scenario is relatively fixed, so the text may be segmented according to the preset text comparison. The similarity may be determined according to the word composition structure of the text and the keyword category; when a plurality of keywords exist, the similarity can be jointly determined according to the number of matching keywords included in the preset text.
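The fixed-dialog shortcut can be sketched as a string-similarity lookup against a preset library. The `difflib` ratio and the sample library entry are stand-ins for whatever similarity measure and library a real system would use.

```python
import difflib

# Hypothetical preset library: known text -> its stored segmentation.
PRESET_LIBRARY = {
    "navigate to the airport": ["navigate", "to", "the airport"],
}

def segment_by_library(original, library=PRESET_LIBRARY, threshold=0.8):
    """Reuse the segmentation of the most similar preset text when the
    similarity clears the preset threshold; otherwise report a miss."""
    best_seg, best_sim = None, 0.0
    for text, seg in library.items():
        sim = difflib.SequenceMatcher(None, original, text).ratio()
        if sim > best_sim:
            best_seg, best_sim = seg, sim
    return best_seg if best_sim >= threshold else None
```

A miss (returning `None`) would fall back to the reference-word segmentation described earlier; a hit skips the scoring pipeline entirely, which is where the computation saving comes from.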
In one possible embodiment, the method further comprises: acquiring a historical text of a user, wherein the historical text is converted according to a historical voice record of the user, and a text sentence pattern corresponding to the historical text is a text sentence pattern for semantic analysis; and adding the historical texts and the text sentence patterns corresponding to the historical texts into a preset text library.
Specifically, the history text comprises an original text converted according to the historical voice record of the target user and only one determined text sentence pattern corresponding to the original text. The original text and the corresponding text sentence pattern are stored in a preset text base and are used for being compared with the new original text, and when the similarity between the new original text and the historical text is higher than a preset threshold value, the new original text can be segmented according to the text sentence pattern corresponding to the historical text in the preset text base.
It can be seen that in the embodiment of the present application, the original text is compared with the historical text, and if the similarity between them is greater than the preset value, the word-segmented text sentence pattern of the historical text is taken as the text sentence pattern of the original text, so that the semantic analysis accuracy of the voice interaction system is improved, the amount of calculation is reduced, and the analysis efficiency is improved.
In a possible embodiment, before segmenting the original text according to the reference word set to obtain at least one text sentence pattern of the original text adapted to the target service scenario, determining the reference words in the original text with the highest matching degree with the target service scenario includes: acquiring a word list of the original text, wherein the word list comprises all words that may appear in the original text under different combination modes; matching all the words in the word list with the keywords of each semantic scene in a plurality of semantic scenes, and determining the number of words matching the reference word set of each candidate service scene; and determining the service scene whose reference word set matches the largest number of words as the target service scene corresponding to the original text, and determining the words matching the keywords of the target service scene as the reference words.
Specifically, if the reference word of the original text is not limited to one, the original text may first be word-segmented to obtain a word list, which includes all words of the original text that can be obtained by segmentation. The reference word set of each candidate service scene is matched against the word list, and the service scene that matches the most reference words is determined to be the target service scene of the original text. The text sentence pattern of the original text is then confirmed according to the obtained plurality of reference words.
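The scene-matching step can be sketched as a set-overlap comparison; the scene names and keyword sets below are invented for illustration.

```python
# Assumed reference word sets per service scene.
SCENE_REFERENCE_WORDS = {
    "navigation": {"navigate", "route", "destination"},
    "music": {"play", "song", "artist"},
}

def match_scene(word_list, scenes=SCENE_REFERENCE_WORDS):
    """Choose the scene whose reference word set overlaps the word list
    the most; the overlapping words become the reference words."""
    best_scene, best_refs = None, set()
    for scene, refs in scenes.items():
        hits = refs & set(word_list)
        if len(hits) > len(best_refs):
            best_scene, best_refs = scene, hits
    return best_scene, best_refs
```

The returned overlap doubles as the reference word set used in the subsequent segmentation and logic-scoring steps.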
In one possible embodiment, after matching all the words in the word list with the keywords of each semantic scenario in the plurality of semantic scenarios and determining the number of words matching the reference word set of each candidate service scene, the method further comprises: if the number of words in the word list of the original text whose occurrence frequency exceeds a second preset threshold exceeds a preset number limit, acquiring a preset text database of the target service scene corresponding to the original text; and among the words whose occurrence frequency exceeds the second preset threshold, determining the word with the highest occurrence frequency in the preset text database as the reference word of the original text.
Specifically, the reference words of an original text may be limited, and if the original text confirms too many reference words at the stage of determining the reference words, a final text sentence pattern may be caused to be not in accordance with logic, so when the matched reference words of the original text exceed a preset number limit, the occurrence number of the multiple reference words in the target service scenario is obtained, and the reference word with the largest occurrence number is determined as the reference word of the original text.
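The cap on reference words can be sketched as a frequency-based fallback; the limit value and the source of the per-scene counts are assumptions for the sketch.

```python
def limit_reference_words(candidates, scene_counts, limit=3):
    """When too many candidate reference words matched, keep only the
    one that occurs most often in the target service scene."""
    if len(candidates) <= limit:
        return candidates
    return [max(candidates, key=lambda w: scene_counts.get(w, 0))]
```

Keeping the candidate list small avoids confirming so many reference words that the final text sentence pattern becomes illogical, as the paragraph above warns.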
By implementing the method of the embodiment of the present application, the reference word set under the target service scene can be obtained by determining the target service scene when the user uses the man-machine interaction system, and the reference words of the original text obtained from the user's voice information are confirmed under that scene. The word-segmented text sentence pattern most consistent with logic can be obtained according to the reference words, thereby improving the semantic analysis accuracy of the voice interaction model. Furthermore, by eliminating text sentence patterns containing low-possibility words, and by determining the text sentence pattern of the original text as the text sentence pattern corresponding to a historical text whose similarity with the original text is greater than the preset value, the amount of calculation of the voice interaction system is reduced and the analysis efficiency of the system is improved.
Based on the above description of the method embodiment, the present application further provides a voice data processing apparatus 500 based on service scene recognition, where the apparatus 500 may be a computer program (including program codes) running in a terminal. The voice data processing apparatus 500 based on service scene recognition may perform the methods shown in fig. 1 and fig. 2. Referring to fig. 5, the apparatus includes:
the reception unit 501: the system is used for receiving voice information input by a user of the electronic equipment in a current conversation event, and the following operations are executed through a man-machine interaction engine: converting the voice information into an original text;
the determination unit 502: the method is used for determining a target service scene to which a current dialog event belongs according to the dialog content and/or event associated information of the current dialog event, and the event associated information comprises at least one of the following: the type of the service or application provided by the electronic equipment in the current conversation event and the equipment type of the electronic equipment;
the analysis unit 503: the method comprises the steps of obtaining a reference word set of a target service scene; performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text, which is adapted to the target service scene; and performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: when the at least one text sentence pattern includes a plurality of text sentence patterns, before performing semantic analysis on the original text according to the at least one text sentence pattern: determine a text sentence pattern containing single characters among its words as a target text sentence pattern; determine the realizability of the single characters included in the target text sentence pattern; and delete text sentence patterns whose realizability is lower than a preset value from the plurality of text sentence patterns.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: the at least one text sentence pattern comprises a plurality of text sentences, and the semantic analysis is performed on the original text according to the at least one text sentence pattern, which comprises the following steps: carrying out logic detection on each text sentence pattern on the basis of the reference words included in each text sentence pattern to obtain a logic score of each text sentence pattern; and carrying out semantic analysis on the text sentence pattern with the highest logic score to obtain a semantic analysis result.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: logic detection is carried out on each text sentence pattern on the basis of the reference words included in each text sentence pattern, and a logic score of each text sentence pattern is obtained, wherein the logic score comprises the following steps: determining words adjacent to the reference words in each text sentence pattern as check words; determining a likelihood score for a combination of the reference term and the check term into a phrase; a logical score for each text sentence pattern is determined based on the likelihood score.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: determining a logical score for each text sentence pattern based on the likelihood score, comprising: determining the occurrence probability of each word in all words included in each text sentence pattern in a target service scene; determining a coefficient value for each term in each text sentence pattern, the more distant a term in a text sentence pattern from a reference term, the lower the coefficient value; determining probability scores according to the occurrence probability and the coefficient value of each word; a logical score for each text sentence pattern is determined based on the likelihood score and the probability score.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: segmenting the original text according to the reference word set, comprising: determining whether a target text with similarity higher than a preset value with the original text is included in a preset text library or not according to a reference word set included in the original text; and if so, segmenting the original text according to the segmentation result of the target text.
In a possible embodiment, in terms of performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result, the analyzing unit 503 is further specifically configured to: the method further comprises the following steps: acquiring a historical text of a user, wherein the historical text is converted according to a historical voice record of the user, and a text sentence pattern corresponding to the historical text is a text sentence pattern used for semantic analysis; and adding the historical text and the text sentence pattern corresponding to the historical text into a preset text library.
It should be noted that the above modules (the receiving unit 501, the determining unit 502 and the analyzing unit 503) are used for executing the relevant steps of the above method. For example, the receiving unit 501 is used for executing the related content of step S201, and the determining unit 502 is used for executing the related content of step S202.
Based on the description of the method embodiment and the apparatus embodiment, please refer to fig. 6. Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device 600 includes a processor 601, a memory 602, a communication interface 603, and one or more programs. The processor 601 may be a general Central Processing Unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the above schemes. The memory 602 may be, but is not limited to, a Read-Only Memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or another type of dynamic storage device that can store information and instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 602 may be separate and coupled to the processor 601 via a bus, or may be integrated with the processor 601. The communication interface 603 is used for communicating with other devices or communication networks, such as Ethernet, a Radio Access Network (RAN), or a Wireless Local Area Network (WLAN).
The one or more programs are stored in the memory in the form of program code and configured to be executed by the processor. In an embodiment of the present application, the programs include instructions for performing the following steps:
receiving voice information recorded in the current dialog event by a user from the electronic equipment, and executing the following operations through a man-machine interaction engine: converting the voice information into an original text; determining a target service scene to which the current dialog event belongs according to the dialog content and/or the event associated information of the current dialog event, wherein the event associated information comprises at least one of the following: the type of the service or application provided by the electronic equipment in the current conversation event and the equipment type of the electronic equipment; acquiring a reference word set of a target service scene; performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text, which is adapted to the target service scene; and performing semantic analysis on the original text according to at least one text sentence pattern to obtain a semantic analysis result.
In one possible embodiment, the at least one text sentence pattern comprises a plurality of text sentence patterns, and before semantically analyzing the original text according to the at least one text sentence pattern, the method further comprises: determining a text sentence pattern with single characters among its words as a target text sentence pattern; determining the realizability of the single characters included in the target text sentence pattern; and deleting the text sentence patterns whose realizability is lower than a preset value from the plurality of text sentence patterns.
In one possible embodiment, the at least one text sentence pattern includes a plurality of text sentence patterns, and the semantic analysis of the original text based on the at least one text sentence pattern includes: performing logic detection on each text sentence pattern based on the reference words included in each text sentence pattern to obtain a logic score of each text sentence pattern; and performing semantic analysis on the text sentence pattern with the highest logic score to obtain a semantic analysis result.
In one possible embodiment, logically testing each text sentence pattern based on the reference words included in each text sentence pattern to obtain a logical score for each text sentence pattern comprises: determining words adjacent to the reference words in each text sentence pattern as check words; determining a likelihood score of a phrase formed by combining the reference word and the check word; a logical score for each text sentence pattern is determined based on the likelihood score.
In one possible embodiment, determining a logical score for each text sentence pattern based on the likelihood score comprises: determining the occurrence probability of each word in all words included in each text sentence pattern in a target service scene; determining a coefficient value for each term in each text sentence pattern, the more distant a term in a text sentence pattern from a reference term, the lower the coefficient value; determining probability scores according to the occurrence probability and the coefficient value of each word; a logical score for each text sentence pattern is determined based on the likelihood score and the probability score.
In one possible embodiment, the tokenizing of the original text according to the set of reference words comprises: determining whether a target text with similarity higher than a preset value with the original text is included in a preset text library or not according to a reference word set included in the original text; and if so, segmenting the original text according to the segmentation result of the target text.
In one possible embodiment, the method further comprises: acquiring a historical text of a user, wherein the historical text is converted according to a historical voice record of the user, and a text sentence pattern corresponding to the historical text is a text sentence pattern for semantic analysis; and adding the historical texts and the text sentence patterns corresponding to the historical texts into a preset text library.
It should be noted that for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solutions of the present application, in essence or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, read-Only memories (ROMs), random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A voice data processing method based on service scene recognition is characterized in that the method is applied to a server of a voice interaction system, the server is provided with a man-machine interaction engine, the voice interaction system further comprises an electronic device in communication connection with the server, and the method comprises the following steps:
receiving voice information recorded by a user of the electronic equipment in the current conversation event, and executing the following operations through the man-machine interaction engine:
converting the voice information into an original text;
determining a target service scene to which the current dialog event belongs according to the dialog content and/or the event associated information of the current dialog event, wherein the event associated information comprises at least one of the following: the type of the service or application provided by the electronic equipment in the current dialog event, and the equipment type of the electronic equipment;
acquiring a reference word set of the target service scene;
performing word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text, which is adapted to the target service scene;
and performing semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result.
2. The method of claim 1, wherein the at least one text schema comprises a plurality of text schemas, and wherein prior to semantically analyzing the original text according to the at least one text schema, the method further comprises:
determining a text sentence pattern with single characters in the words included in each text sentence pattern as a target text sentence pattern;
determining the realizability of the single characters included in the target text sentence pattern;
and deleting the text sentence patterns with the realizability lower than the preset value from the plurality of text sentence patterns.
3. The method of claim 2, wherein said at least one text sentence pattern comprises a plurality of text sentences, and wherein said semantically analyzing said original text according to said at least one text sentence pattern comprises:
carrying out logic detection on each text sentence pattern on the basis of the reference words included in each text sentence pattern to obtain a logic score of each text sentence pattern;
and performing semantic analysis on the text sentence pattern with the highest logic score to obtain a semantic analysis result.
4. The method of claim 3, wherein performing logic detection on each text sentence pattern based on the reference words included in that text sentence pattern to obtain the logic score for each text sentence pattern comprises:
determining, as check words, the words adjacent to the reference word in each text sentence pattern;
determining a likelihood score that the reference word and a check word combine into a phrase;
and determining the logic score of each text sentence pattern according to the likelihood score.
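Claim 4 can be sketched as follows: for a reference word in a sentence pattern, the adjacent words are its check words, and a likelihood score measures how plausibly the reference word and a check word combine into a phrase. The phrase table below is a hypothetical stand-in for whatever co-occurrence statistics an implementation would use.

```python
# Hypothetical phrase-likelihood table (reference word, check word) -> score.
PHRASE_LIKELIHOOD = {("play", "music"): 0.9, ("play", "table"): 0.1}

def likelihood_score(pattern, reference_word):
    """Average phrase likelihood of the reference word with its
    adjacent check words (either word order)."""
    i = pattern.index(reference_word)
    neighbours = [pattern[j] for j in (i - 1, i + 1) if 0 <= j < len(pattern)]
    scores = [
        PHRASE_LIKELIHOOD.get((reference_word, n))
        or PHRASE_LIKELIHOOD.get((n, reference_word), 0.0)
        for n in neighbours
    ]
    return sum(scores) / len(scores) if scores else 0.0
```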
5. The method of claim 4, wherein determining the logic score of each text sentence pattern according to the likelihood score comprises:
determining, for each word included in each text sentence pattern, its occurrence probability in the target service scene;
determining a coefficient value for each word in each text sentence pattern, wherein words farther from the reference word have lower coefficient values;
determining a probability score according to the occurrence probability and the coefficient value of each word;
and determining the logic score of each text sentence pattern according to the likelihood score and the probability score.
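One way to read claims 4-5 together: each word's occurrence probability in the target scene is weighted by a coefficient that decays with distance from the reference word, and the resulting probability score is combined with the phrase likelihood score into the final logic score. The decay function and the mixing weight `alpha` are assumptions; the claims leave them unspecified.

```python
def probability_score(pattern, reference_word, occurrence):
    """Occurrence probabilities weighted by distance from the reference word."""
    i = pattern.index(reference_word)
    total = 0.0
    for j, word in enumerate(pattern):
        coefficient = 1.0 / (1 + abs(j - i))  # farther away -> lower coefficient
        total += coefficient * occurrence.get(word, 0.0)
    return total / len(pattern)

def logic_score(likelihood, probability, alpha=0.5):
    """Combine the likelihood score (claim 4) and probability score (claim 5)."""
    return alpha * likelihood + (1 - alpha) * probability
```

Per claim 3, the sentence pattern that is finally analyzed would then be `max(patterns, key=logic_score_of_pattern)` over the candidate patterns.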
6. The method of any of claims 1-5, wherein performing word segmentation on the original text according to the reference word set comprises:
determining, according to the reference words of the reference word set that are included in the original text, whether a preset text library contains a target text whose similarity to the original text is higher than a preset value;
and if so, performing word segmentation on the original text according to the word segmentation result of the target text.
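Claim 6 can be sketched as a lookup that reuses a stored segmentation instead of segmenting from scratch. Jaccard token overlap stands in here for whatever similarity measure an implementation would actually use; the library contents and threshold are hypothetical.

```python
# Hypothetical preset text library: text -> its word segmentation result.
TEXT_LIBRARY = {
    "play some jazz music": ["play", "some", "jazz music"],
}

def jaccard(a, b):
    """Token-set Jaccard similarity between two texts."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def segment_via_library(original_text, preset_value=0.6):
    """Reuse the segmentation of the most similar stored text,
    or return None when no text clears the preset similarity value."""
    best_seg, best_sim = None, 0.0
    for text, seg in TEXT_LIBRARY.items():
        sim = jaccard(original_text, text)
        if sim > best_sim:
            best_seg, best_sim = seg, sim
    return best_seg if best_sim >= preset_value else None
```

Claim 7's update step would then amount to inserting each analyzed historical text and its chosen sentence pattern back into `TEXT_LIBRARY`.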
7. The method of claim 6, further comprising:
acquiring a historical text of the user, wherein the historical text is obtained by conversion from the user's historical voice records, and the text sentence pattern corresponding to the historical text is the text sentence pattern that was used for semantic analysis;
and adding the historical text and its corresponding text sentence pattern to the preset text library.
8. A voice data processing apparatus based on service scene recognition, applied to a server of a voice interaction system, wherein the server is provided with a human-machine interaction engine and the voice interaction system further comprises an electronic device in communication connection with the server, the apparatus comprising:
a receiving unit, configured to receive voice information input by a user of the electronic device in a current dialog event, wherein the following operations are performed through the human-machine interaction engine:
a determination unit, configured to convert the voice information into an original text;
and to determine a target service scene to which the current dialog event belongs according to the dialog content and/or event-associated information of the current dialog event, wherein the event-associated information comprises at least one of the following: a type of service or application provided by the electronic device in the current dialog event, and a device type of the electronic device;
an analysis unit, configured to acquire a reference word set of the target service scene;
to perform word segmentation on the original text according to the reference word set to obtain at least one text sentence pattern of the original text adapted to the target service scene;
and to perform semantic analysis on the original text according to the at least one text sentence pattern to obtain a semantic analysis result.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211306175.6A CN115374793B (en) | 2022-10-25 | 2022-10-25 | Voice data processing method based on service scene recognition and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115374793A true CN115374793A (en) | 2022-11-22 |
CN115374793B CN115374793B (en) | 2023-01-20 |
Family
ID=84073623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211306175.6A Active CN115374793B (en) | 2022-10-25 | 2022-10-25 | Voice data processing method based on service scene recognition and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115374793B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116050939A (en) * | 2023-03-07 | 2023-05-02 | 深圳市人马互动科技有限公司 | User evaluation method based on interaction novel and related device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070208556A1 (en) * | 2006-03-03 | 2007-09-06 | Samsung Electronics Co., Ltd. | Apparatus for providing voice dialogue service and method of operating the same |
CN111178081A (en) * | 2018-11-09 | 2020-05-19 | 中移(杭州)信息技术有限公司 | Semantic recognition method, server, electronic device and computer storage medium |
WO2021129439A1 (en) * | 2019-12-28 | 2021-07-01 | 科大讯飞股份有限公司 | Voice recognition method and related product |
US20210248484A1 (en) * | 2020-06-22 | 2021-08-12 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for generating semantic representation model, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110334241B (en) | Quality inspection method, device and equipment for customer service record and computer readable storage medium | |
CN110275965B (en) | False news detection method, electronic device and computer readable storage medium | |
CN108027814B (en) | Stop word recognition method and device | |
CN107885717B (en) | Keyword extraction method and device | |
US8489626B2 (en) | Method and apparatus for recommending a short message recipient | |
CN111586695B (en) | Short message identification method and related equipment | |
CN116882372A (en) | Text generation method, device, electronic equipment and storage medium | |
CN115374793B (en) | Voice data processing method based on service scene recognition and related device | |
CN115174250B (en) | Network asset security assessment method and device, electronic equipment and storage medium | |
CN114706945A (en) | Intention recognition method and device, electronic equipment and storage medium | |
CN111858865B (en) | Semantic recognition method, semantic recognition device, electronic equipment and computer readable storage medium | |
CN114722199A (en) | Risk identification method and device based on call recording, computer equipment and medium | |
CN118051653A (en) | Multi-mode data retrieval method, system and medium based on semantic association | |
CN113705164A (en) | Text processing method and device, computer equipment and readable storage medium | |
CN115858776B (en) | Variant text classification recognition method, system, storage medium and electronic equipment | |
CN109829043B (en) | Part-of-speech confirmation method, part-of-speech confirmation device, electronic device, and storage medium | |
CN111083705A (en) | Group-sending fraud short message detection method, device, server and storage medium | |
CN114417883B (en) | Data processing method, device and equipment | |
CN114491232B (en) | Information query method and device, electronic equipment and storage medium | |
CN115906797A (en) | Text entity alignment method, device, equipment and medium | |
WO2021159668A1 (en) | Robot dialogue method and apparatus, computer device and storage medium | |
CN112786041A (en) | Voice processing method and related equipment | |
CN115374372B (en) | Method, device, equipment and storage medium for quickly identifying false information of network community | |
CN116992111B (en) | Data processing method, device, electronic equipment and computer storage medium | |
CN110737750B (en) | Data processing method and device for analyzing text audience and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||