CN111694926A - Interactive processing method and device based on scene dynamic configuration and computer equipment


Info

Publication number
CN111694926A
CN111694926A
Authority
CN
China
Prior art keywords: information, interactive, processed, scene, target
Prior art date
Legal status
Pending
Application number
CN202010346507.8A
Other languages
Chinese (zh)
Inventor
罗金雄
胡宏伟
马骏
王少军
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202010346507.8A
Publication of CN111694926A
Priority to PCT/CN2020/122750 (WO2021218069A1)


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F16/30 Information retrieval of unstructured textual data
                        • G06F16/33 Querying
                            • G06F16/332 Query formulation
                                • G06F16/3329 Natural language query formulation or dialogue systems
                            • G06F16/3331 Query processing
                                • G06F16/334 Query execution
                                    • G06F16/3343 Query execution using phonetics
            • G06F40/00 Handling natural language data
                • G06F40/30 Semantic analysis
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
            • G06Q30/00 Commerce
                • G06Q30/02 Marketing; Price estimation or determination; Fundraising
                    • G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
            • G10L15/00 Speech recognition
                • G10L15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Acoustics & Sound (AREA)
  • Databases & Information Systems (AREA)
  • Strategic Management (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Finance (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an interactive processing method and device based on scene dynamic configuration, and computer equipment. The method comprises the following steps: if information to be processed is received from a client, acquiring target text information corresponding to the information to be processed; acquiring interactive scene information corresponding to the information to be processed; acquiring target configuration information matched with the interactive scene information from a pre-stored configuration database; performing class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance; and executing the processing instance to process the target text information to obtain corresponding interactive information, and feeding the interactive information back to the client to complete the interactive processing. The method is based on development assistance technology: the corresponding processing instance is obtained by dynamic configuration based on the interactive scene information to complete the interactive processing, and this scene dynamic configuration process can be applied to any interactive scene, so the method ensures interactive processing efficiency while having a wide application range and high flexibility of use.

Description

Interactive processing method and device based on scene dynamic configuration and computer equipment
Technical Field
The invention relates to the field of development assistance technology, and in particular to an interactive processing method and device based on scene dynamic configuration, and to computer equipment.
Background
With the development of artificial intelligence, enterprises can build intelligent interaction processing systems based on artificial intelligence. For example, an intelligent voice system can replace traditional human voice customer service with intelligent voice customer service: the system, available around the clock, answers customer calls and responds to the questions a customer raises after speech recognition, completing the interaction with the customer. This improves the service efficiency and service quality of voice calls and reduces enterprise operating costs. However, a large enterprise offers a great number of services, so the number of interactive scenes is very large. Incorporating all interactive scenes into one interaction processing system expands the range of applicable scenes, but the resulting system is excessively large, its construction and maintenance costs are extremely high, and the efficiency of interactive processing drops sharply. To avoid an oversized system, a conventional interaction processing system arranges its code for one specific interactive scene, so the resulting system is usually applicable only to that scene, giving it a narrow application range and low flexibility of use. Interaction processing systems built with prior art methods therefore suffer from a narrow application range and low flexibility of use.
Disclosure of Invention
The embodiments of the invention provide an interactive processing method and device, computer equipment and a storage medium based on scene dynamic configuration, aiming to solve the problems of the narrow application range and low flexibility of use of interaction processing systems in the prior art.
In a first aspect, an embodiment of the present invention provides an interactive processing method based on scene dynamic configuration, including:
if information to be processed is received from a client, acquiring target text information corresponding to the information to be processed;
acquiring interactive scene information corresponding to the information to be processed;
acquiring, from a pre-stored configuration database, configuration information matched with the interactive scene information as target configuration information;
performing class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance; and
executing the processing instance to process the target text information, to acquire interactive information corresponding to the information to be processed, and feeding the interactive information back to the client to complete the interactive processing.
In a second aspect, an embodiment of the present invention provides an interactive processing apparatus based on scene dynamic configuration, including:
a target text information obtaining unit, configured to acquire target text information corresponding to information to be processed if the information to be processed is received from a client;
an interactive scene information obtaining unit, configured to acquire interactive scene information corresponding to the information to be processed;
a target configuration information obtaining unit, configured to acquire, from a pre-stored configuration database, configuration information matched with the interactive scene information as target configuration information;
a processing instance constructing unit, configured to perform class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance; and
a processing instance executing unit, configured to execute the processing instance to process the target text information, to acquire interactive information corresponding to the information to be processed, and to feed the interactive information back to the client to complete the interactive processing.
In a third aspect, an embodiment of the present invention further provides computer equipment, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the interactive processing method based on scene dynamic configuration according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to execute the interactive processing method based on scene dynamic configuration according to the first aspect.
The embodiments of the invention provide an interactive processing method and device based on scene dynamic configuration, and computer equipment. Target text information corresponding to the information to be processed is acquired, the corresponding interactive scene information is then acquired, class reflection is performed after target configuration information matched with the interactive scene information is acquired so as to construct a processing instance, and the processing instance is executed to process the target text information, obtain interactive information and feed it back to the client, completing the interactive processing; the corresponding processing instance is thus dynamically configured based on the interactive scene information to complete the interactive processing.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of an interaction processing method based on scene dynamic configuration according to an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario of an interaction processing method based on dynamic scenario configuration according to an embodiment of the present invention;
fig. 3 is a schematic sub-flow diagram of an interaction processing method based on scene dynamic configuration according to an embodiment of the present invention;
fig. 4 is another schematic sub-flow chart of an interaction processing method based on scene dynamic configuration according to an embodiment of the present invention;
fig. 5 is another schematic sub-flow chart of an interaction processing method based on scene dynamic configuration according to an embodiment of the present invention;
fig. 6 is another schematic sub-flow chart of an interaction processing method based on scene dynamic configuration according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of an interaction processing apparatus based on scene dynamic configuration according to an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an interactive processing method based on scene dynamic configuration according to an embodiment of the present invention, and fig. 2 is a schematic diagram of an application scene of that method. The interactive processing method based on scene dynamic configuration is applied to a management server 10 and is executed through application software installed in the management server 10. The management server 10 communicates with a client 20 by establishing a network connection with it: a user of the client 20 can send an interactive processing request to the management server 10 through the client, and the management server 10 executes the interactive processing method based on scene dynamic configuration to process the request and feeds the corresponding interactive information back to the client 20 to complete the interactive processing. The management server 10 is an enterprise terminal for executing the method, and the client 20 is a terminal device for sending the interactive processing request; the client 20 may be a desktop computer, a notebook computer, a tablet computer, a mobile phone, or the like. Fig. 2 shows only one client 20 transmitting data with the management server 10; in practical applications, the management server 10 may transmit data with a plurality of clients 20 at the same time.
As shown in fig. 1, the method includes steps S110 to S150.
S110: if information to be processed is received from the client, target text information corresponding to the information to be processed is acquired.
The user sends information to be processed to the management server through the client; the information to be processed may be request information that requires interactive processing by the management server. After receiving the information to be processed, the management server interactively processes it to obtain the corresponding interactive information and feeds this back to the client, which receives the interactive information, completing the interactive processing of the information to be processed.
The information to be processed may be text, voice, or a short video containing the question information sent from the client, and the processing logic for interactively processing the question information differs between application scenes. For example, if the user enters text in the question box of the terminal page and clicks the confirmation button, the client sends that text as the information to be processed to the management server. If the user clicks the voice input button of the terminal page, speaks the question, and clicks the confirmation button, the client sends the recorded voice as the information to be processed to the management server. If the user clicks the video input button of the terminal page, speaks the question to the video capture device of the client, and clicks the confirmation button, the client sends the recorded short video as the information to be processed to the management server.
In an embodiment, as shown in fig. 3, step S110 includes sub-steps S111, S112 and S113.
S111: judge whether the information to be processed is text information.
Specifically, the information to be processed includes corresponding format identification information, which identifies the format of the information to be processed; whether the information to be processed is text information can be determined from this format identification information.
For example, if the format identification information is txt or string, the corresponding information to be processed is text information; if it is wav, mp3, or wma, the information to be processed is audio information; and if it is avi, flv, or rmvb, the information to be processed is video information.
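As an illustration, a minimal Java sketch of such a format check follows (Java is the language the description itself invokes for class reflection). The class name, method names, and the mapping of identifiers to media types are assumptions drawn from the examples above, not part of the patent.

    import java.util.Locale;
    import java.util.Set;

    // Hypothetical sketch: classify incoming to-be-processed information by its
    // format identification information. All names here are illustrative.
    public final class FormatClassifier {

        public enum MediaType { TEXT, AUDIO, VIDEO, UNKNOWN }

        private static final Set<String> TEXT_IDS  = Set.of("txt", "string");
        private static final Set<String> AUDIO_IDS = Set.of("wav", "mp3", "wma");
        private static final Set<String> VIDEO_IDS = Set.of("avi", "flv", "rmvb");

        public static MediaType classify(String formatId) {
            String id = formatId.toLowerCase(Locale.ROOT);
            if (TEXT_IDS.contains(id))  return MediaType.TEXT;   // already text: use directly
            if (AUDIO_IDS.contains(id)) return MediaType.AUDIO;  // needs speech recognition
            if (VIDEO_IDS.contains(id)) return MediaType.VIDEO;  // needs speech recognition
            return MediaType.UNKNOWN;
        }
    }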
S112: if the information to be processed is not text information, recognize the voice information in the information to be processed according to a preset speech recognition model to obtain target text information corresponding to the information to be processed.
If the information to be processed is not text information, it may be audio information or video information, both of which contain voice information. The speech recognition model is a model for recognizing and converting the voice information contained in audio or video information; it includes an acoustic model, a speech feature dictionary, and a semantic parsing model.
Specifically, the steps of recognizing the voice information in the information to be processed are as follows. 1. Segment the information to be processed according to the acoustic model in the speech recognition model to obtain the phonemes it contains. The voice information included in audio or video information is composed of the phonemes of many character pronunciations, and the phoneme of one character comprises the frequency and tone of that character's pronunciation. The acoustic model contains the phonemes of all character pronunciations, so by matching the voice information against all phonemes in the acoustic model, the phonemes of single characters can be segmented out, finally yielding the phonemes contained in the information to be processed. 2. Match the phonemes against the speech feature dictionary in the speech recognition model to convert them into pinyin information. The speech feature dictionary contains the phoneme information corresponding to the pinyin of every character; by matching the obtained phonemes against this phoneme information, the phonemes of a single character can be converted into the matching character pinyin, so that all phonemes contained in the voice information are converted into pinyin information. 3. Perform semantic parsing on the pinyin information according to the semantic parsing model in the speech recognition model to obtain the target text information corresponding to the information to be processed. The semantic parsing model contains the mapping relations between pinyin information and text information, and through these mapping relations the obtained pinyin information can be semantically parsed and converted into the corresponding target text information.
For example, the text information corresponding to the pinyin "bàn lǐ" in the semantic parsing model is the word for "transact" (办理).
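The three-stage pipeline above can be pictured as the following Java skeleton. This is a structural sketch only: the interfaces, their method signatures, and the string-based phoneme representation are assumptions, and real acoustic and language models would stand behind them.

    import java.util.List;

    // Structural sketch of the recognition pipeline described above; all types
    // and method names are illustrative assumptions.
    interface AcousticModel     { List<String> segmentIntoPhonemes(byte[] speech); }
    interface FeatureDictionary { String phonemesToPinyin(List<String> phonemes); }
    interface SemanticParser    { String pinyinToText(String pinyin); }

    final class SpeechRecognizer {
        private final AcousticModel acousticModel;
        private final FeatureDictionary featureDictionary;
        private final SemanticParser semanticParser;

        SpeechRecognizer(AcousticModel a, FeatureDictionary d, SemanticParser p) {
            this.acousticModel = a;
            this.featureDictionary = d;
            this.semanticParser = p;
        }

        String recognize(byte[] speech) {
            // 1. Segment the speech into the phonemes of single characters.
            List<String> phonemes = acousticModel.segmentIntoPhonemes(speech);
            // 2. Match the phonemes against the speech feature dictionary to get pinyin.
            String pinyin = featureDictionary.phonemesToPinyin(phonemes);
            // 3. Semantically parse the pinyin into target text, e.g. "bàn lǐ" -> "办理".
            return semanticParser.pinyinToText(pinyin);
        }
    }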
S113: if the information to be processed is text information, determine the information to be processed to be the target text information.
If the information to be processed is already text information, no conversion is needed, and it can be directly taken as the target text information for subsequent processing.
S120: interactive scene information corresponding to the information to be processed is acquired.
Before the information to be processed is processed, the interactive scene information corresponding to it must first be acquired. The interactive scene information defines the specific scene in which the information to be processed is interactively processed and may relate to any service node in the services an enterprise provides; for example, the interactive scene information may be product pre-sale consultation, service handling, after-sale service, collection service, and so on.
The information to be processed may or may not contain interactive scene information. The information to be processed further includes an interactive-scene item, which carries an item value. For example, if the customer knows exactly which service node is required, the customer can, when sending the information to be processed, select a service node from the preset service nodes as the item value of the interactive-scene item; the information to be processed received by the management server then necessarily contains interactive scene information. If the customer cannot determine which service node is required, no service node is selected as the item value, but the customer can still send the information to be processed, which then contains no interactive scene information, to the management server.
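For illustration only, the to-be-processed information might be modelled as below; the patent specifies only that it carries format identification information and an interactive-scene item whose item value may be empty, so the class and field names are hypothetical.

    // Hypothetical model of a to-be-processed request; all names are assumptions.
    public class PendingRequest {
        private String formatId;   // format identification information, e.g. "txt", "wav", "avi"
        private byte[] payload;    // the text, audio, or video content itself
        private String sceneType;  // item value of the interactive-scene item; empty if unselected

        // True when the customer selected a service node as the item value.
        public boolean hasSceneType() {
            return sceneType != null && !sceneType.isEmpty();
        }
    }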
In an embodiment, as shown in fig. 4, step S120 includes sub-steps S121, S122 and S123.
S121: judge whether the information to be processed contains an interactive scene type.
Specifically, judge whether the item value of the interactive-scene item is empty. If the item value is empty, the information to be processed does not contain an interactive scene type; if it is not empty, the information to be processed contains an interactive scene type.
S122: if the information to be processed contains an interactive scene type, determine that interactive scene type to be the interactive scene information of the information to be processed.
S123: if the information to be processed does not contain an interactive scene type, acquire the interactive scene type matched with the target text information according to a preset interactive scene classification model as the interactive scene information of the information to be processed.
If the information to be processed does not contain an interactive scene type, the interactive scene type matched with the target text information is acquired through the interactive scene classification model as the corresponding interactive scene information. The interactive scene classification model comprises a character screening rule and a plurality of interactive scene types.
In one embodiment, as shown in FIG. 5, step S123 includes sub-steps S1231 and S1232.
S1231: screen the target text information according to the character screening rule to obtain screened text information.
The character screening rule is rule information for screening the target text information. Specifically, the character screening rule screens out of the target text information characters that carry no substantive meaning, so that the characters contained in the resulting screened text information all have practical significance.
For example, the characters to be screened out under the character screening rule may be function words such as "please" (请), "which" (哪), and the structural particle "de" (地).
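A minimal sketch of the character screening rule as a stop-word filter follows, under the assumption that the rule amounts to removing a fixed set of function words; the tokenization into strings and the stop-word set are illustrative.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Sketch: remove characters without practical significance from the target
    // text information. The stop-word set is an illustrative assumption.
    final class CharacterScreener {
        private static final Set<String> STOP_WORDS = Set.of("请", "哪", "地");

        static String screen(List<String> tokens) {
            return tokens.stream()
                    .filter(token -> !STOP_WORDS.contains(token))
                    .collect(Collectors.joining());
        }
    }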
S1232: calculate the matching degree between each interactive scene type and the screened text information, and take the interactive scene type with the highest matching degree as the interactive scene information of the information to be processed.
Each interactive scene type contained in the interactive scene classification model correspondingly contains one or more scene keywords. The matching degree between the screened text information and each interactive scene type can be calculated from the screened text information and the scene keywords corresponding to each interactive scene type, and the interactive scene type with the highest matching degree is determined to be the interactive scene information of the information to be processed. Specifically, each scene keyword included in an interactive scene type carries a weight value, and the matching degree is obtained by dividing the sum of the weight values of the scene keywords that match the screened text information by the number of characters of the screened text information.
For example, suppose the interactive scene type is product pre-sale consultation and its scene keywords are "introduce" with a weight value of 2.8, "understand" with a weight value of 2.4, and "product" with a weight value of 3.6. If the screened text information is "introduce XX product" and contains eight characters, the matching degree between the screened text information and this interactive scene type is (2.8 + 3.6) / 8 = 80%.
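The worked example can be reproduced with the following sketch. The Chinese keywords 介绍 ("introduce"), 了解 ("understand"), and 产品 ("product") are an assumed reconstruction of the translated keywords, and the eight-character screened text is chosen to match the division by 8 above.

    import java.util.Map;

    // Sketch of the matching-degree formula: the summed weight values of the
    // scene keywords found in the screened text, divided by its character count.
    final class SceneMatcher {
        static double matchingDegree(String screenedText, Map<String, Double> sceneKeywords) {
            double matchedWeight = sceneKeywords.entrySet().stream()
                    .filter(entry -> screenedText.contains(entry.getKey()))
                    .mapToDouble(Map.Entry::getValue)
                    .sum();
            return matchedWeight / screenedText.length();
        }

        public static void main(String[] args) {
            // Assumed keywords for "product pre-sale consultation" with the weights above.
            Map<String, Double> preSaleKeywords = Map.of("介绍", 2.8, "了解", 2.4, "产品", 3.6);
            String screenedText = "介绍一下XX产品";  // 8 characters after screening
            // "介绍" and "产品" match: (2.8 + 3.6) / 8 = 0.8, i.e. 80%.
            System.out.println(matchingDegree(screenedText, preSaleKeywords));
        }
    }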
S130: configuration information matched with the interactive scene information is acquired from a pre-stored configuration database as target configuration information.
The configuration database is the database in the management server used for storing configuration information. It stores one group of configuration information for each piece of interactive scene information; that group of configuration information is the basic information with which information to be processed can be interactively processed in the corresponding interactive scene. According to the interactive scene information, the group of configuration information in the configuration database matching it can be acquired as the target configuration information. Specifically, the configuration information may include the attribute fields and methods to be configured, and the target text information corresponding to the information to be processed can be interactively processed based on those specific attribute fields and methods to obtain the interactive information matched with the information to be processed.
For example, if the interactive scene information is product pre-sale consultation, the method in the corresponding target configuration information is: acquire the "XX product" in the target text information. The attribute fields are: if "XX product" is "AA product", the interactive information is "XXX"; if "XX product" is "BB product", the interactive information is "XXXX"; if "XX product" is "CC product", the interactive information is "XXXXX"; and if "XX product" is anything else, the interactive information is null.
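One way to picture a configuration-database entry is the record below. The patent states only that an entry carries the attribute fields and methods to configure for a scene; the record shape, the field names, and the product-to-reply map are illustrative assumptions. For the pre-sale example above, fieldValues could hold the map {"AA product" -> "XXX", "BB product" -> "XXXX", "CC product" -> "XXXXX"}.

    import java.util.Map;

    // Hypothetical shape of one entry in the configuration database.
    record SceneConfiguration(
            String sceneType,                 // e.g. "product pre-sale consultation"
            String targetClassName,           // framework class to configure, e.g. "framework.ConsultHandler"
            Map<String, Object> fieldValues,  // attribute-field values, e.g. a product -> reply map
            String methodName                 // method to invoke on the processing instance
    ) {}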
S140: class reflection is performed on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance.
The general framework includes a plurality of default classes. This general code framework contains only the skeleton capable of processing information to be processed; it contains neither the processing logic for interactive processing nor the expected processing results, so it can be applied in any interactive scene. Class reflection (Reflection) is the reflection mechanism of the Java language: it allows a Java program to inspect the general code framework and directly operate on the internal attributes or methods of the program framework. Through the Reflection APIs, a program can obtain the internal information of any class of known name, including its package, type parameters, superclass, implemented interfaces, attribute fields, constructors, and methods, and can dynamically change the field values of attribute fields or invoke methods during execution. Performing class reflection on the general framework based on the target configuration information means configuring, in the general framework, the attribute fields and methods contained in the target configuration information to obtain the corresponding processing instance; that is, dynamic configuration based on the interactive scene information is completed to obtain the processing instance. The configuration process dynamically changes the field values of the attribute fields and invokes methods; executing the resulting processing instance then interactively processes the information to be processed according to the processing logic and the expected processing results in the processing instance to obtain the corresponding interactive information. This process of scene dynamic configuration, that is, obtaining the corresponding processing instance by dynamic configuration based on the interactive scene information, can be applied in any interactive scene, so interactive processing efficiency is ensured while the application range is wide and flexibility of use is high.
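Since the description names Java class reflection explicitly, a minimal sketch of the dynamic configuration step using the standard Reflection APIs is given below; it assumes the hypothetical SceneConfiguration record sketched above, and all class and method names are illustrative rather than the patent's own.

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;
    import java.util.Map;

    // Sketch: build a processing instance by class reflection over the general
    // framework, then execute it against the target text information.
    final class ProcessingInstanceFactory {

        // Load the configured framework class, instantiate it, and dynamically
        // change the field values of its attribute fields.
        static Object build(SceneConfiguration config) throws ReflectiveOperationException {
            Class<?> targetClass = Class.forName(config.targetClassName());
            Object instance = targetClass.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> entry : config.fieldValues().entrySet()) {
                Field field = targetClass.getDeclaredField(entry.getKey());
                field.setAccessible(true);
                field.set(instance, entry.getValue());
            }
            return instance;
        }

        // Invoke the configured method on the processing instance; here the method
        // is assumed to take the target text and return the interactive information.
        static Object process(Object instance, SceneConfiguration config, String targetText)
                throws ReflectiveOperationException {
            Method method = instance.getClass().getMethod(config.methodName(), String.class);
            return method.invoke(instance, targetText);
        }
    }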
In an embodiment, as shown in fig. 6, step S140 includes sub-steps S141 and S142.
S141: acquire the class in the general framework matched with the target configuration information as a target class.
The general framework comprises a plurality of classes; in a specific interactive scene, only some of the classes in the general framework may be used, or all of them may be used. The target configuration information contains the names of the classes of the general framework to be used in the corresponding interactive scene, so the classes in the general framework matching those class names can be taken as the target classes.
S142: configure the parameter values corresponding to the configuration values in the target classes according to the configuration values in the target configuration information, so as to construct the processing instance corresponding to the target configuration information.
The attribute fields and methods corresponding to each target class in the target configuration information carry corresponding configuration values. Configuring the parameter values in a target class according to the configuration values in the target configuration information means dynamically changing the field values of the attribute fields of that target class and invoking the corresponding methods in it; once the parameter values in all target classes have been configured, the corresponding processing instance is obtained.
S150: the processing instance is executed to process the target text information, to acquire interactive information corresponding to the information to be processed, and the interactive information is fed back to the client to complete the interactive processing.
Executing the obtained processing instance interactively processes the target text information corresponding to the information to be processed according to the processing logic and the expected processing results in the processing instance; the corresponding interactive information is obtained and fed back to the client, completing one round of interactive processing of the information to be processed. The interactive information may be text information, audio information, video information, a combination of text and audio information, or a combination of text and video information. Specifically, the interactive information may be the answer to the question the customer raised, for example a detailed explanation of the contents of a product the customer asked about; corresponding guidance information can also be fed back according to the customer's question, to guide the customer in using the client to carry out the relevant business-handling operations.
For example, if the "XX product" contained in the target text information is "CC product", the interactive information obtained by executing the processing instance is "XXXXX"; feeding "XXXXX" back to the client completes the interactive processing.
The technical method herein can be applied in application scenes including intelligent interactive scenes such as smart government affairs, smart city management, smart communities, smart security, smart logistics, smart medical care, smart education, smart environmental protection, and smart transportation, thereby promoting the construction of smart cities.
In the interactive processing method based on scene dynamic configuration provided by the embodiment of the invention, the target text information corresponding to the information to be processed is acquired and the corresponding interactive scene information is then acquired; class reflection is performed after the target configuration information matched with the interactive scene information is acquired, so as to construct a processing instance; and the processing instance is executed to process the target text information, obtain the interactive information and feed it back to the client, completing the interactive processing. The corresponding processing instance is obtained by dynamic configuration based on the interactive scene information, and this scene dynamic configuration process can be applied in any interactive scene, so the method has a wide application range and high flexibility of use while ensuring interactive processing efficiency.
The embodiment of the invention further provides an interactive processing apparatus based on scene dynamic configuration, which is used for executing any embodiment of the foregoing interactive processing method based on scene dynamic configuration. Specifically, referring to fig. 7, fig. 7 is a schematic block diagram of an interactive processing apparatus based on scene dynamic configuration according to an embodiment of the present invention. The interactive processing apparatus based on scene dynamic configuration may be configured in the management server 10.
As shown in fig. 7, the interactive processing apparatus 100 based on scene dynamic configuration includes: a target text information obtaining unit 110, an interactive scene information obtaining unit 120, a target configuration information obtaining unit 130, a processing instance constructing unit 140, and a processing instance executing unit 150.
The target text information obtaining unit 110 is configured to, if information to be processed is received from a client, obtain target text information corresponding to the information to be processed.
In another embodiment of the present invention, the target text information obtaining unit 110 includes: a to-be-processed information judging unit, a to-be-processed information recognizing unit, and a target text information determining unit.
The to-be-processed information judging unit is used for judging whether the information to be processed is text information; the to-be-processed information recognizing unit is used for recognizing, if the information to be processed is not text information, the voice information in the information to be processed according to a preset speech recognition model to obtain the target text information corresponding to the information to be processed; and the target text information determining unit is used for determining, if the information to be processed is text information, the information to be processed to be the target text information.
An interactive scene information obtaining unit 120, configured to obtain interactive scene information corresponding to the information to be processed.
In another embodiment of the present invention, the interactive scene information obtaining unit 120 includes: an interactive scene judging unit, an interactive scene information determining unit, and an interactive scene classifying unit.
The interactive scene judging unit is used for judging whether the information to be processed contains an interactive scene type; the interactive scene information determining unit is used for determining, if the information to be processed contains an interactive scene type, that interactive scene type to be the interactive scene information of the information to be processed; and the interactive scene classifying unit is used for acquiring, if the information to be processed does not contain an interactive scene type, the interactive scene type matched with the target text information according to a preset interactive scene classification model as the interactive scene information of the information to be processed.
In other embodiments of the present invention, the interactive scene classifying unit includes: a screened text information obtaining unit and an interactive scene matching unit.
The screened text information obtaining unit is used for screening the target text information according to the character screening rule to obtain screened text information; and the interactive scene matching unit is used for calculating the matching degree between each interactive scene type and the screened text information, so as to take the interactive scene type with the highest matching degree as the interactive scene information of the information to be processed.
The target configuration information obtaining unit 130 is configured to acquire, from a pre-stored configuration database, configuration information matched with the interactive scene information as target configuration information.
The processing instance constructing unit 140 is configured to perform class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance.
In other embodiments of the present invention, the processing instance constructing unit 140 includes: a target class obtaining unit and a parameter configuring unit.
The target class obtaining unit is used for obtaining the class in the general framework matched with the target configuration information as a target class; and the parameter configuring unit is used for configuring the parameter values corresponding to the configuration values in the target classes according to the configuration values in the target configuration information, so as to construct the processing instance corresponding to the target configuration information.
The processing instance executing unit 150 is configured to execute the processing instance to process the target text information, to obtain the interactive information corresponding to the information to be processed, and to feed the interactive information back to the client to complete the interactive processing.
The interactive processing apparatus based on scene dynamic configuration provided by the embodiment of the invention applies the above interactive processing method based on scene dynamic configuration: the target text information corresponding to the information to be processed is obtained and the corresponding interactive scene information is then obtained; class reflection is performed after the target configuration information matched with the interactive scene information is obtained, so as to construct a processing instance; and the processing instance is executed to process the target text information, obtain the interactive information and feed it back to the client, completing the interactive processing. The corresponding processing instance is obtained by dynamic configuration based on the interactive scene information, and the scene dynamic configuration process can be applied in any interactive scene, so the apparatus ensures interactive processing efficiency while having a wide application range and high flexibility of use.
The above interactive processing apparatus based on scene dynamic configuration may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 8.
Referring to fig. 8, fig. 8 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Referring to fig. 8, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform an interaction processing method based on scene dynamic configuration.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be enabled to execute the interaction processing method based on the scene dynamic configuration.
The network interface 505 is used for network communication, such as the transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 8 is a block diagram of only part of the configuration associated with the solution of the present invention and does not limit the computer device 500 to which the solution of the present invention is applied; a particular computer device 500 may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the following functions: if information to be processed is received from a client, acquiring target text information corresponding to the information to be processed; acquiring interactive scene information corresponding to the information to be processed; acquiring, from a pre-stored configuration database, configuration information matched with the interactive scene information as target configuration information; performing class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance; and executing the processing instance to process the target text information, to acquire interactive information corresponding to the information to be processed, and feeding the interactive information back to the client to complete the interactive processing.
In an embodiment, when performing the step of acquiring, if information to be processed is received from the client, the target text information corresponding to the information to be processed, the processor 502 performs the following operations: judging whether the information to be processed is text information; if the information to be processed is not text information, recognizing the voice information in the information to be processed according to a preset speech recognition model to obtain the target text information corresponding to the information to be processed; and if the information to be processed is text information, determining the information to be processed to be the target text information.
In an embodiment, when performing the step of acquiring the interactive scene information corresponding to the information to be processed, the processor 502 performs the following operations: judging whether the information to be processed contains an interactive scene type; if the information to be processed contains an interactive scene type, determining that interactive scene type to be the interactive scene information of the information to be processed; and if the information to be processed does not contain an interactive scene type, acquiring the interactive scene type matched with the target text information according to a preset interactive scene classification model as the interactive scene information of the information to be processed.
In an embodiment, when performing the step of acquiring, if the information to be processed does not contain an interactive scene type, the interactive scene type matched with the target text information according to a preset interactive scene classification model as the interactive scene information of the information to be processed, the processor 502 performs the following operations: screening the target text information according to the character screening rule to obtain screened text information; and calculating the matching degree between each interactive scene type and the screened text information, and taking the interactive scene type with the highest matching degree as the interactive scene information of the information to be processed.
In an embodiment, when performing the step of performing class reflection on the pre-stored general framework according to the target configuration information to construct the corresponding processing instance, the processor 502 performs the following operations: acquiring the class in the general framework matched with the target configuration information as a target class; and configuring the parameter values corresponding to the configuration values in the target classes according to the configuration values in the target configuration information, so as to construct the processing instance corresponding to the target configuration information.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 8 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 8, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the Processor 502 may be a Central Processing Unit (CPU), and the Processor 502 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable gate arrays (FPGAs) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer-readable storage medium stores a computer program, wherein the computer program when executed by a processor implements the steps of: if receiving information to be processed from a client, acquiring target character information corresponding to the information to be processed; acquiring interactive scene information corresponding to the information to be processed; acquiring configuration information matched with the interactive scene information in a pre-stored configuration database as target configuration information; performing class reflection on a pre-stored general frame according to the target configuration information to construct a corresponding processing example; and processing the target character information by executing the processing example to acquire interactive information corresponding to the information to be processed, and feeding back the interactive information to the client to finish interactive processing.
In an embodiment, the step of acquiring the target text information corresponding to the information to be processed if the information to be processed is received from the client includes: judging whether the information to be processed is character information or not; if the information to be processed is not the text information, recognizing the voice information in the information to be processed according to a preset voice recognition model to obtain target text information corresponding to the information to be processed; and if the information to be processed is the character information, determining the information to be processed as the target character information.
In an embodiment, the step of obtaining the interactive scene information corresponding to the information to be processed includes: judging whether the information to be processed contains an interactive scene type; if the information to be processed contains an interactive scene type, determining the interactive scene type as interactive scene information of the information to be processed; and if the information to be processed does not contain the interactive scene type, acquiring the interactive scene type matched with the target character information as the interactive scene information of the information to be processed according to a preset interactive scene classification model.
In an embodiment, the step of obtaining, according to a preset interactive scene classification model, an interactive scene type matched with the target text information as the interactive scene information of the information to be processed, if the information to be processed does not include the interactive scene type, includes: screening the target character information according to the character screening rule to obtain screened character information; and calculating the matching degree between each interactive scene type and the screened character information, and taking the interactive scene type with the highest matching degree as the interactive scene information of the information to be processed.
In an embodiment, the step of performing class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance includes: acquiring, as a target class, the class in the general framework that matches the target configuration information; and configuring, in the target class, the parameter values corresponding to the configuration values in the target configuration information, so as to construct the processing instance corresponding to the target configuration information.
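A Java sketch of this reflection step, assuming the target configuration is a flat map whose "class" entry names the target class in the general framework and whose remaining entries are string field values to inject; the field-injection mechanism and all names are assumptions made for illustration.

import java.lang.reflect.Field;
import java.util.Map;

/** Illustrative reflection sketch; the configuration layout is an assumption. */
public final class ProcessorFactorySketch {

    static Object reflectInstance(Map<String, String> config) throws ReflectiveOperationException {
        // Locate the target class in the general framework and instantiate it.
        Class<?> targetClass = Class.forName(config.get("class"));
        Object instance = targetClass.getDeclaredConstructor().newInstance();
        // Configure the parameter values corresponding to the configuration values.
        for (var entry : config.entrySet()) {
            if (entry.getKey().equals("class")) continue;
            Field field = targetClass.getDeclaredField(entry.getKey());
            field.setAccessible(true);
            field.set(instance, entry.getValue());
        }
        return instance;
    }
}

For example, a configuration such as {"class": "com.example.FaqProcessor", "knowledgeBase": "insurance_faq"} would yield a FaqProcessor instance whose knowledgeBase field is set to "insurance_faq", provided such a class exists in the framework.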
It is clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the apparatuses, devices and units described above, which are not repeated here. Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to illustrate this interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. The division of the units is only a logical functional division, and other divisions are possible in actual implementation: units with the same function may be grouped into one unit, a plurality of units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The computer-readable storage medium is a physical, non-transitory storage medium. It may be an internal storage unit of the aforementioned device, for example a physical storage medium such as the device's hard disk or memory, or an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the device.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An interactive processing method based on scene dynamic configuration, applied to a management server in communication with at least one client, wherein the method comprises the following steps:
if information to be processed is received from a client, acquiring target text information corresponding to the information to be processed;
acquiring interactive scene information corresponding to the information to be processed;
acquiring configuration information matched with the interactive scene information in a pre-stored configuration database as target configuration information;
performing class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance;
and processing the target text information by executing the processing instance to acquire interactive information corresponding to the information to be processed, and feeding back the interactive information to the client to complete the interactive processing.
2. The interactive processing method based on scene dynamic configuration according to claim 1, wherein the acquiring of the target text information corresponding to the information to be processed, if the information to be processed is received from the client, comprises:
judging whether the information to be processed is text information;
if the information to be processed is not the text information, recognizing the voice information in the information to be processed according to a preset voice recognition model to obtain target text information corresponding to the information to be processed;
and if the information to be processed is text information, determining the information to be processed as the target text information.
3. The interactive processing method based on scene dynamic configuration according to claim 1, wherein the obtaining of the interactive scene information corresponding to the information to be processed comprises:
judging whether the information to be processed contains an interactive scene type;
if the information to be processed contains an interactive scene type, determining the interactive scene type as interactive scene information of the information to be processed;
and if the information to be processed does not contain an interactive scene type, acquiring, according to a preset interactive scene classification model, the interactive scene type matched with the target text information as the interactive scene information of the information to be processed.
4. The interactive processing method based on scene dynamic configuration according to claim 3, wherein the interactive scene classification model includes a text screening rule and a plurality of interactive scene types, and the obtaining, according to the preset interactive scene classification model, of the interactive scene type matched with the target text information as the interactive scene information of the information to be processed comprises:
screening the target text information according to the text screening rule to obtain screened text information;
and calculating the matching degree between each interactive scene type and the screened text information, and taking the interactive scene type with the highest matching degree as the interactive scene information of the information to be processed.
5. The interactive processing method based on scene dynamic configuration according to claim 1, wherein the performing of class reflection on a pre-stored general framework according to the target configuration information to construct a corresponding processing instance comprises:
acquiring a class matched with the target configuration information in the general framework as a target class;
and configuring, in the target class, the parameter values corresponding to the configuration values in the target configuration information, so as to construct a processing instance corresponding to the target configuration information.
6. An interactive processing device based on scene dynamic configuration, comprising:
the target text information acquisition unit is used for acquiring target text information corresponding to the information to be processed if the information to be processed is received from a client;
the interactive scene information acquisition unit is used for acquiring interactive scene information corresponding to the information to be processed;
the target configuration information acquisition unit is used for acquiring configuration information matched with the interactive scene information in a pre-stored configuration database as target configuration information;
the processing instance construction unit is used for performing class reflection on a pre-stored general framework according to the target configuration information so as to construct a corresponding processing instance;
and the processing instance execution unit is used for processing the target text information by executing the processing instance to acquire the interactive information corresponding to the information to be processed, and feeding back the interactive information to the client to complete the interactive processing.
7. The interactive processing device based on scene dynamic configuration according to claim 6, wherein the target text information acquisition unit comprises:
the to-be-processed information judging unit is used for judging whether the information to be processed is text information;
the to-be-processed information recognition unit is used for recognizing voice information in the to-be-processed information according to a preset voice recognition model to obtain target text information corresponding to the to-be-processed information if the to-be-processed information is not text information;
and the target text information determining unit is used for determining the to-be-processed information as the target text information if the to-be-processed information is text information.
8. The interactive processing device based on scene dynamic configuration according to claim 6, wherein the interactive scene information acquisition unit comprises:
the interactive scene judging unit is used for judging whether the information to be processed contains an interactive scene type;
the interactive scene information determining unit is used for determining the interactive scene type as the interactive scene information of the information to be processed if the information to be processed contains the interactive scene type;
and the interactive scene classification unit is used for acquiring the interactive scene type matched with the target text information as the interactive scene information of the information to be processed according to a preset interactive scene classification model if the information to be processed does not contain an interactive scene type.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the interactive processing method based on scene dynamic configuration according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the interaction processing method based on scene dynamic configuration according to any one of claims 1 to 5.
CN202010346507.8A 2020-04-27 2020-04-27 Interactive processing method and device based on scene dynamic configuration and computer equipment Pending CN111694926A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010346507.8A CN111694926A (en) 2020-04-27 2020-04-27 Interactive processing method and device based on scene dynamic configuration and computer equipment
PCT/CN2020/122750 WO2021218069A1 (en) 2020-04-27 2020-10-22 Dynamic scenario configuration-based interactive processing method and apparatus, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010346507.8A CN111694926A (en) 2020-04-27 2020-04-27 Interactive processing method and device based on scene dynamic configuration and computer equipment

Publications (1)

Publication Number Publication Date
CN111694926A (en) 2020-09-22

Family ID: 72476703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010346507.8A Pending CN111694926A (en) 2020-04-27 2020-04-27 Interactive processing method and device based on scene dynamic configuration and computer equipment

Country Status (2)

Country Link
CN (1) CN111694926A (en)
WO (1) WO2021218069A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125147B * 2021-11-15 2023-05-30 Qingdao Haier Technology Co., Ltd. Verification method for equipment scene function, scene engine and scene platform
CN114265505A * 2021-12-27 2022-04-01 China Telecom Corporation Limited Man-machine interaction processing method and device, storage medium and electronic equipment
CN114201837B * 2022-02-15 2022-08-09 Hangzhou Jiepai Transmission Technology Co., Ltd. Speed reducer model selection method and system based on scene virtual matching

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5874886B1 * 2015-03-20 2016-03-02 Panasonic IP Management Co., Ltd. Service monitoring device, service monitoring system, and service monitoring method
CN105260179A * 2015-09-24 2016-01-20 Inspur (Beijing) Electronic Information Industry Co., Ltd. Method for achieving Flex and Servlet interaction
CN110019483A * 2018-01-02 2019-07-16 Aisino Corporation Grain condition data collection method and grain condition data acquisition platform
CN110830665A * 2019-11-12 2020-02-21 Deppon Logistics Co., Ltd. Voice interaction method and device and express service system
CN111063340A * 2019-12-09 2020-04-24 Yonyou Network Technology Co., Ltd. Service processing method and device of terminal, terminal and computer readable storage medium
CN111694926A (en) * 2020-04-27 2020-09-22 Ping An Technology (Shenzhen) Co., Ltd. Interactive processing method and device based on scene dynamic configuration and computer equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021218069A1 * 2020-04-27 2021-11-04 Ping An Technology (Shenzhen) Co., Ltd. Dynamic scenario configuration-based interactive processing method and apparatus, and computer device
CN112214209A * 2020-10-23 2021-01-12 Beihang (Sichuan) Western International Innovation Port Technology Co., Ltd. Modeling method for interaction information and task time sequence in unmanned aerial vehicle operation scene
CN112214209B * 2020-10-23 2024-02-13 Beihang (Sichuan) Western International Innovation Port Technology Co., Ltd. Modeling method for interaction information and task time sequence in unmanned aerial vehicle operation scene
CN114924666A * 2022-05-12 2022-08-19 Shanghai Yunshen Intelligent Technology Co., Ltd. Interaction method and device for application scene, terminal equipment and storage medium
CN117201441A * 2023-08-28 2023-12-08 Guangzhou Xuanwu Wireless Technology Co., Ltd. Method and device for realizing multi-message type multi-turn user interaction
CN117201441B * 2023-08-28 2024-06-04 Guangzhou Xuanwu Wireless Technology Co., Ltd. Method and device for realizing multi-message type multi-turn user interaction

Also Published As

Publication number Publication date
WO2021218069A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN111694926A (en) Interactive processing method and device based on scene dynamic configuration and computer equipment
CN108345692B (en) Automatic question answering method and system
CN109514586B (en) Method and system for realizing intelligent customer service robot
CN109359194B (en) Method and apparatus for predicting information categories
CN112328876A (en) Electronic card generation and pushing method and device based on knowledge graph
WO2023142451A1 (en) Workflow generation methods and apparatuses, and electronic device
CN113139816A (en) Information processing method, device, electronic equipment and storage medium
CN112861529A (en) Method and device for managing error codes
CN112925898A (en) Question-answering method, device, server and storage medium based on artificial intelligence
CN110059172B (en) Method and device for recommending answers based on natural language understanding
CN114119123A (en) Information pushing method and device
CN115543662B (en) Method and related device for issuing kafka message data
CN116955561A (en) Question answering method, question answering device, electronic equipment and storage medium
CN110597765A (en) Large retail call center heterogeneous data source data processing method and device
CN116303937A (en) Reply method, reply device, electronic equipment and readable storage medium
CN112908339B (en) Conference link positioning method and device, positioning equipment and readable storage medium
CN112669000A (en) Government affair item processing method and device, electronic equipment and storage medium
CN113742593A (en) Method and device for pushing information
CN113935334A (en) Text information processing method, device, equipment and medium
CN113422810A (en) Method and device for sending information to service provider
CN113609833A (en) Dynamic generation method and device of file, computer equipment and storage medium
CN113254579A (en) Voice retrieval method and device and electronic equipment
CN108962398B (en) Hospital information acquisition method and device
CN111782776A (en) Method and device for realizing intention identification through slot filling
CN111063340A (en) Service processing method and device of terminal, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination