CN115567646A - Intelligent outbound method, device, computer equipment and storage medium - Google Patents
Intelligent outbound method, device, computer equipment and storage medium Download PDFInfo
- Publication number
- CN115567646A (Application CN202211157027.2A)
- Authority
- CN
- China
- Prior art keywords
- information
- voice
- target
- text
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/527—Centralised call answering arrangements not requiring operator intervention
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
- H04M3/4936—Speech interaction details
Abstract
The embodiments of the present application belong to the field of artificial intelligence and relate to an intelligent outbound method, which comprises the following steps: acquiring historical call voice data between a target customer and an agent, and converting the historical call voice data into a corresponding voice text; performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text; determining, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information; performing speech synthesis on the target script text to obtain the corresponding script speech; determining the idle time of the target customer and acquiring the number information of the target customer; and invoking a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech. The present application also provides an intelligent outbound apparatus, a computer device and a storage medium. In addition, the present application relates to blockchain technology, and the intention feature information may be stored in a blockchain. The present application improves the processing efficiency of payment collection.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular to an intelligent outbound method, an intelligent outbound apparatus, a computer device, and a storage medium.
Background
In the financial industry, insurance institutions need to remind customers to pay premiums in situations such as the following: a customer places an order to purchase insurance, but the payment is interrupted because the account balance is insufficient or the customer is temporarily occupied by other matters, so the customer needs to complete the payment in a subsequent time period.
In order to handle a large volume of premium collection business, an insurance institution may set up a dedicated department, and existing collection work is generally performed by a collector manually dialing the customer's mobile phone or landline. This manual collection approach is time-consuming and labor-intensive, and its collection efficiency is low.
Disclosure of Invention
The embodiments of the present application aim to provide an intelligent outbound method, apparatus, computer device and storage medium, so as to solve the technical problems that the existing manual collection approach is time-consuming, labor-intensive and low in collection efficiency.
In order to solve the above technical problem, an embodiment of the present application provides an intelligent outbound method, which adopts the following technical solutions:
acquiring historical call voice data between a target customer and an agent, and converting the historical call voice data into a corresponding voice text;
performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, wherein the intention feature information at least comprises target business scenario information and target reply information;
determining, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information;
performing speech synthesis on the target script text to obtain the corresponding script speech;
determining the idle time of the target customer, and acquiring the number information of the target customer;
and invoking a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech.
Further, the step of performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text specifically includes:
calling a pre-trained intention analysis model;
inputting the voice text into the intention analysis model, performing semantic analysis on the voice text through the intention analysis model, and outputting an intention analysis result corresponding to the voice text;
and receiving the intention analysis result fed back by the intention analysis model, and taking the intention analysis result as the intention feature information.
Further, before the step of invoking the pre-trained intent analysis model, the method further includes:
acquiring a preset labeled training data set, wherein the training data set comprises a plurality of training texts, and each training text is labeled with its corresponding business scenario information and reply information;
calling a preset recurrent neural network model;
inputting the training data set into the recurrent neural network model for training, so that the recurrent neural network model is trained to predict business scenario information and reply information simultaneously, thereby obtaining a trained recurrent neural network model;
taking the trained recurrent neural network model as the intention analysis model;
storing the intent analysis model.
Further, the step of determining, from a preset script database, the target script text corresponding to the target business scenario information and the target reply information comprises:
determining a business process corresponding to the target reply information based on the target reply information;
invoking the script database;
and based on the business process and the target business scenario information, finding the script text corresponding to the business process and the target business scenario information in the script database to obtain the target script text.
Further, the step of determining the idle time of the target client specifically includes:
acquiring the customer information of the target customer;
querying attribute information corresponding to the customer information from a preset customer attribute library;
acquiring work attribute information from the attribute information;
determining the working time of the target customer based on the work attribute information;
acquiring a preset designated time;
and determining the idle time based on the working time and the designated time.
Further, after the step of invoking the preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech, the method further comprises:
acquiring the dialogue voice information generated after the voice robot and the target customer complete the dialogue communication;
analyzing the dialogue voice information, and judging whether the dialogue voice information contains a target keyword;
and if the target keyword is contained, generating a communication result corresponding to the dialogue voice information based on the target keyword.
Further, after the step of generating a communication result corresponding to the dialogue voice information based on the target keyword, the method further comprises:
acquiring the identity information of the agent;
acquiring the contact information corresponding to the identity information;
and sending the communication result to the agent terminal of the agent based on the contact information.
In order to solve the above technical problem, an embodiment of the present application further provides an intelligent outbound apparatus, which adopts the following technical solution:
a first acquisition module, configured to acquire historical call voice data between a target customer and an agent, and to convert the historical call voice data into a corresponding voice text;
an analysis module, configured to perform semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, wherein the intention feature information at least comprises target business scenario information and target reply information;
a first determining module, configured to determine, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information;
a first processing module, configured to perform speech synthesis on the target script text to obtain the corresponding script speech;
a second determining module, configured to determine the idle time of the target customer and to acquire the number information of the target customer;
and a second processing module, configured to invoke a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
acquiring historical call voice data between a target customer and an agent, and converting the historical call voice data into a corresponding voice text;
performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, wherein the intention feature information at least comprises target business scenario information and target reply information;
determining, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information;
performing speech synthesis on the target script text to obtain the corresponding script speech;
determining the idle time of the target customer, and acquiring the number information of the target customer;
and invoking a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech.
In order to solve the foregoing technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
acquiring historical call voice data between a target customer and an agent, and converting the historical call voice data into a corresponding voice text;
performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, wherein the intention feature information at least comprises target business scenario information and target reply information;
determining, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information;
performing speech synthesis on the target script text to obtain the corresponding script speech;
determining the idle time of the target customer, and acquiring the number information of the target customer;
and invoking a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
according to the method and the device, historical conversation voice data between a target client and an agent are obtained firstly, the historical conversation voice data are converted into corresponding voice texts, semantic analysis is conducted on the voice texts to obtain intention characteristic information corresponding to the voice texts, then the target conversation texts corresponding to target business scene information and target answer information are determined from a preset conversation database, voice synthesis is conducted on the target conversation texts to obtain corresponding conversation voices, and after the idle time of the target client is determined and the number information of the target client is obtained, a preset voice robot is called to conduct conversation communication with the target client based on the idle time, the number information and the conversation voices. The embodiment of the application automatically sends the call to the target customer to carry out conversation communication about collection prompting by using the voice robot in idle time, so that the call of the customer is dialed without manual operation to prompt collection, time and labor are saved, and the processing efficiency of collection prompting is effectively improved.
Drawings
In order to illustrate the solution of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below illustrate only some embodiments of the present application, and that those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an intelligent outbound method according to the present application;
FIG. 3 is a schematic block diagram of one embodiment of an intelligent outbound apparatus according to the present application;
FIG. 4 is a block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the intelligent outbound method provided in the embodiments of the present application is generally executed by the server/terminal device, and accordingly, the intelligent outbound apparatus is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow diagram of one embodiment of an intelligent outbound method according to the present application is shown. The intelligent outbound method comprises the following steps:
step S201, obtaining historical call voice data between a target client and an agent, and converting the historical call voice data into a corresponding voice text.
In this embodiment, the electronic device (for example, the server/terminal device shown in fig. 1) on which the intelligent outbound method operates may obtain the historical call voice data through a wired connection manner or a wireless connection manner. It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G/5G connection, a WiFi connection, a bluetooth connection, a WiMAX connection, a Zigbee connection, an UWB (ultra wideband) connection, and other wireless connection means now known or developed in the future. After historical call voice data between the target client and the seat is obtained, the historical call voice data can be converted into recognizable voice texts by calling a voice recognition engine.
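The conversion described above can be sketched as a small pipeline. This is an illustrative sketch only: the `CallRecording` type, the `transcribe_calls` helper and the stub engine are assumptions standing in for a real speech recognition engine, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CallRecording:
    """One historical call between the target customer and an agent."""
    customer_id: str
    audio: bytes  # raw call audio

def transcribe_calls(recordings: List[CallRecording],
                     engine: Callable[[bytes], str]) -> Dict[str, str]:
    """Convert each historical call recording into a voice text
    using a pluggable speech recognition engine."""
    return {r.customer_id: engine(r.audio) for r in recordings}

# Stub engine standing in for a real ASR service (hypothetical).
def stub_engine(audio: bytes) -> str:
    return f"<transcript of {len(audio)} bytes>"
```

In practice `engine` would wrap the invoked speech recognition engine; keeping it as a parameter lets the same pipeline run against any recognizer.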
Step S202, performing semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, wherein the intention feature information at least comprises target business scenario information and target reply information.
In this embodiment, a pre-trained intention analysis model may be invoked to perform semantic analysis on the voice text and obtain the corresponding intention feature information. The business scenario information is used to represent the business background, cause or current state of the event corresponding to the voice text. The target reply information is used to represent the reply given by the target customer to a question described in the voice text. The specific implementation of this semantic analysis step is described in further detail in the embodiments below and is not elaborated here.
Step S203, determining, from a preset script database, a target script text corresponding to the target business scenario information and the target reply information.
In this embodiment, the script database is a database created in advance according to actual business requirements, which stores the script texts used in different business scenarios. The specific implementation of determining the target script text corresponding to the target business scenario information and the target reply information is described in further detail in the embodiments below and is not elaborated here.
Step S204, performing speech synthesis on the target script text to obtain the corresponding script speech.
In this embodiment, after the target script text is obtained, it may be converted into the corresponding script speech by a TTS (text-to-speech) synthesis technique.
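The speech synthesis step can be sketched in the same spirit. The `synthesize_scripts` helper and the stub synthesizer below are hypothetical stand-ins for a real TTS engine:

```python
from typing import Callable, Dict

def synthesize_scripts(script_texts: Dict[str, str],
                       tts: Callable[[str], bytes]) -> Dict[str, bytes]:
    """Run text-to-speech over each target script text and return the
    synthesized script speech, keyed by script id."""
    return {sid: tts(text) for sid, text in script_texts.items()}

# Stub synthesizer standing in for a real TTS engine (hypothetical):
# it simply encodes the text instead of producing audio.
def stub_tts(text: str) -> bytes:
    return text.encode("utf-8")
```

Synthesizing once per script text and keeping the result keyed by id means the audio can be reused across multiple outbound calls that share the same script.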
Step S205, determining the idle time of the target customer, and acquiring the number information of the target customer.
In this embodiment, the number information of the target customer can be obtained by acquiring the customer information of the target customer and performing an information query on it. The specific implementation of determining the idle time of the target customer is described in further detail in the embodiments below and is not elaborated here.
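Although the idle-time computation is detailed in a later embodiment (working time plus a preset designated time), the underlying idea — keeping only the parts of a permitted calling window that fall outside the customer's working hours — can be sketched as follows. The minute-based representation and the `idle_slots` helper are assumptions for illustration:

```python
from typing import List, Tuple

def idle_slots(work: Tuple[int, int],
               window: Tuple[int, int]) -> List[Tuple[int, int]]:
    """Return the parts of the permitted calling window that fall outside
    the customer's working hours.

    work, window: (start, end) pairs in minutes since midnight."""
    (ws, we), (cs, ce) = work, window
    slots = []
    if cs < ws:                       # free before work starts
        slots.append((cs, min(ce, ws)))
    if ce > we:                       # free after work ends
        slots.append((max(cs, we), ce))
    return slots

# Working 09:00-18:00 with calls permitted 08:00-21:00
# leaves 08:00-09:00 and 18:00-21:00 as idle time.
```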
Step S206, invoking a preset voice robot to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech.
In this embodiment, the voice robot is a robot created in advance that is capable of dialogue communication with a customer. The idle time corresponds to the time at which the target customer may be called, and the number information is the telephone number of the target customer. The dialogue communication process may comprise the following steps: when the current time falls within the idle time, dialing the terminal of the target customer based on the number information, and after the call is connected, communicating with the target customer using the script speech.
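The dialing logic described in this paragraph can be sketched as follows. The `run_outbound_call` helper and its pluggable `dial`/`play` callbacks are hypothetical; a real voice robot would replace them with telephony and playback integrations:

```python
from typing import Callable, List, Optional, Tuple

def run_outbound_call(now: int,
                      slots: List[Tuple[int, int]],
                      number: str,
                      script_audio: bytes,
                      dial: Callable[[str], Optional[object]],
                      play: Callable[[object, bytes], None]) -> str:
    """Dial the customer's number only inside an idle slot and, once the
    call is connected, play the synthesized script speech."""
    if not any(start <= now < end for start, end in slots):
        return "deferred"            # outside the customer's idle time
    call = dial(number)              # dial based on the number information
    if call is None:
        return "no-answer"
    play(call, script_audio)         # communicate using the script speech
    return "connected"
```

Returning a status rather than raising lets a scheduler retry deferred or unanswered calls at the next idle slot.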
According to the method, historical call voice data between a target customer and an agent is first acquired and converted into a corresponding voice text; semantic analysis is performed on the voice text to obtain the corresponding intention feature information; a target script text corresponding to the target business scenario information and the target reply information is then determined from a preset script database, and speech synthesis is performed on the target script text to obtain the corresponding script speech; finally, after the idle time of the target customer is determined and the number information of the target customer is acquired, a preset voice robot is invoked to carry out dialogue communication with the target customer based on the idle time, the number information and the script speech. By using the voice robot to automatically call the target customer during idle time and conduct the collection dialogue, no manual dialing of the customer's phone is needed; this saves time and labor and effectively improves the processing efficiency of payment collection.
In some optional implementations, step S202 includes the following steps:
and calling a pre-trained intention analysis model.
In this embodiment, the intention analysis model may be generated by training a recurrent neural network model based on a pre-collected labeled training data set.
Inputting the voice text into the intention analysis model, performing semantic analysis on the voice text through the intention analysis model, and outputting an intention analysis result corresponding to the voice text.
In this embodiment, after the voice text is input into the intention analysis model, the intention analysis model performs semantic analysis on the voice text, that is, it simultaneously predicts the business scenario information and the reply information corresponding to the voice text, and outputs an intention analysis result containing both.
And receiving the intention analysis result fed back by the intention analysis model, and taking the intention analysis result as the intention feature information.
By invoking the pre-trained intention analysis model and inputting the voice text into it for semantic analysis, the intention feature information corresponding to the voice text can be obtained quickly and accurately, so that the target script text can subsequently be determined accurately from the script database based on the intention feature information, ensuring the accuracy and intelligence of target script text generation.
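As a rough illustration of the model's two joint outputs, the following keyword-scoring function is a rule-based stand-in — explicitly not the trained RNN — that returns a business-scenario/reply pair for one voice text. All keywords and labels are hypothetical:

```python
def analyze_intent(voice_text: str) -> dict:
    """Rule-based stand-in for the trained intention analysis model:
    it jointly produces business scenario information and reply
    information for one voice text, mirroring the model's two outputs."""
    text = voice_text.lower()
    scenario = ("insurance order unpaid"
                if "balance" in text or "pay" in text
                else "unknown scenario")
    reply = ("will pay later"
             if "later" in text or "week" in text
             else "no commitment")
    return {"scenario": scenario, "reply": reply}
```

A trained model would replace the keyword tests with learned features, but the interface — one text in, a scenario/reply pair out — is the same.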
In some optional implementations of this embodiment, before the step of invoking the pre-trained intent analysis model, the electronic device may further perform the following steps:
Acquiring a preset labeled training data set, wherein the training data set comprises a plurality of training texts, and each training text is labeled with its corresponding business scenario information and reply information.
In this embodiment, the business scenario information is used to represent the business background, cause or current state of the event corresponding to the training text, for example: the insurance order is unpaid. The reply information is used to represent the reply given by the customer to a question described in the voice text, for example: will pay again after another week.
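The labeled training data described above might be organized as follows. The example rows extend the two labels given in the text and are hypothetical, as is the `label_pairs` helper:

```python
# Each training text is labeled with its business scenario information
# and reply information (hypothetical example rows).
training_set = [
    {"text": "The balance was insufficient when the order was placed",
     "scenario": "insurance order unpaid",
     "reply": "will pay again after another week"},
    {"text": "Something came up, I had to stop the payment",
     "scenario": "insurance order unpaid",
     "reply": "will pay when free"},
]

def label_pairs(dataset):
    """Collect the (scenario, reply) target pairs that a multi-task
    model would be trained to predict simultaneously."""
    return [(row["scenario"], row["reply"]) for row in dataset]
```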
And calling a preset recurrent neural network model.
In this embodiment, the recurrent neural network model is specifically an RNN model; for this existing model, reference may be made to the related descriptions in the art, which are not repeated here. It should be noted that the recurrent neural network model of the present application may also be another feasible model, and the present application is not particularly limited in this respect.
And inputting the training data set into the recurrent neural network model for training, so that the recurrent neural network model is trained to predict business scenario information and reply information simultaneously, thereby obtaining a trained recurrent neural network model.
In this embodiment, the training process of the recurrent neural network model may refer to the related description of the RNN model in the art, and will not be described herein too much.
And taking the trained recurrent neural network model as the intention analysis model.
Storing the intent analysis model.
In this embodiment, the storage method of the intention analysis model is not limited, and the intention analysis model generated by training may be stored in a local database or a local blockchain.
According to the method and the device, the preset labeled training data set is obtained and input into the recurrent neural network model for training, so that the recurrent neural network model is trained to predict the service scenario information and the response information simultaneously; the trained recurrent neural network model is then used as the intention analysis model. Semantic analysis can subsequently be performed on the voice text through the intention analysis model, so that the intention characteristic information corresponding to the voice text is obtained quickly and accurately, the target conversational text can be accurately determined from the conversational database based on the intention characteristic information, and the accuracy and intelligence of the generated target conversational text are guaranteed.
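The shape of the labeled training data set described above can be sketched as follows. The sample texts, scenario labels, and reply labels are assumed examples; the point is only that each training text carries two labels, so one model can be supervised on both targets at once.

```python
# Illustrative labeled training data: each training text is marked with the
# service scenario information and the response information it corresponds to.
# All texts and label values here are assumptions for illustration.
training_data_set = [
    {
        "text": "I forgot to pay the premium this month",
        "scenario": "insurance order unpaid",
        "reply": "pay again after another week",
    },
    {
        "text": "My bank card deduction failed",
        "scenario": "renewal deduction failed",
        "reply": "retry after updating the card",
    },
]


def to_multitask_examples(data):
    # Multi-task supervision: one input text, two prediction targets
    # (scenario, reply) per example.
    return [(item["text"], (item["scenario"], item["reply"])) for item in data]


examples = to_multitask_examples(training_data_set)
```

A training loop for any recurrent model would iterate over `examples`, computing one loss per target and summing them.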
In some optional implementations, step S203 includes the following steps:
and determining a business process corresponding to the target reply information based on the target reply information.
In this embodiment, different reply information may correspond to different business processes. The conversational database is then called.
In this embodiment, the conversational database is a database created in advance according to actual service usage requirements, storing one-to-one mapping relationships among business process information, service scenes, and conversational texts. For different service scenes, different business processes under the same service scene correspond to different preset conversational texts.
And based on the business process and the target business scene information, finding the conversational text corresponding to both the business process and the target business scene information from the conversational database to obtain the target conversational text.
According to the method and the device, the business process corresponding to the target reply information is determined based on the target reply information, and the conversational text corresponding to the business process and the target business scene information is then searched for in the preset conversational database, so that the required target conversational text can be quickly obtained. This facilitates the subsequent invocation of the preset voice robot for automatic dialogue communication with the target client based on the target conversational text, effectively improving the efficiency of collection-reminder processing.
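The one-to-one lookup described above can be sketched as a keyed mapping. The flow names, scenario names, and script texts below are hypothetical examples; only the (business process, service scene) → conversational text structure comes from the description.

```python
# Minimal sketch of the conversational database: scripts keyed one-to-one
# by (business process, service scene). All entries are assumed examples.
script_database = {
    ("remind_payment", "insurance order unpaid"):
        "Hello, your premium is due. Would you like to complete the payment now?",
    ("confirm_delay", "insurance order unpaid"):
        "We have noted that you plan to pay next week. Thank you.",
}


def find_target_script(business_process: str, scene: str) -> str:
    # Find the conversational text matching both the business process and
    # the target business scene information.
    return script_database[(business_process, scene)]


target_script = find_target_script("confirm_delay", "insurance order unpaid")
```

Because the mapping is one-to-one, the lookup is a single dictionary access; a missing pair raises `KeyError`, which a real system would handle with a fallback script.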
In some optional implementations, the determining the idle time of the target client in step S205 includes the following steps:
and acquiring the client information of the target client.
In this embodiment, the customer information may include name information or ID information of the target customer.
And inquiring attribute information corresponding to the client information from a preset client attribute library.
In this embodiment, the client attribute library stores the attribute information of each client, associated one by one with the client information of each client. The attribute information includes the user's basic attribute information, working attribute information, and the like. The basic attribute information includes information related to the basic characteristics of the user, such as the user's sex, age, marital status, and family situation. The working attribute information includes information of a social nature, such as the user's occupation, income situation, company, work information, and call number list.
And acquiring working attribute information from the attribute information.
In this embodiment, after obtaining the attribute information, the job attribute information of the target client may be further extracted from the attribute information.
Determining the working time of the target client based on the working attribute information.
In this embodiment, the working time of the target client can be extracted from the working information by acquiring the working information from the working attribute information.
And acquiring a preset specified time.
In this embodiment, the specified time may include a sleep time; the sleep time includes, for example, 23:00.
And determining the idle time based on the working time and the specified time.
In the present embodiment, the working time and the specified time are removed from the 24 hours included in one day, and the remaining time is used as the idle time.
According to the method and the device, the client information of the target client is acquired, the working attribute information of the target client is queried from the client attribute library based on the client information, the working time of the target client is determined based on the working attribute information, and the idle time is then determined based on the working time and the preset specified time. This ensures that the idle time is a period suitable for collection reminders to the target client, so that the preset voice robot can later be invoked to communicate with the target client during the idle time, improving the intelligence of collection-reminder outbound calls and the user's experience.
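The idle-time computation above can be sketched as simple set subtraction over the hours of a day. The concrete working hours (9:00–18:00) and sleep hours (23:00–7:00) below are assumptions for illustration.

```python
def idle_hours(working_hours, specified_hours):
    # Remove the working time and the preset specified time from the 24
    # hours of one day; the remaining hours form the idle time.
    excluded = set(working_hours) | set(specified_hours)
    return sorted(h for h in range(24) if h not in excluded)


work = range(9, 18)                            # assumed working time 9:00-18:00
sleep = set(range(23, 24)) | set(range(0, 7))  # assumed sleep time 23:00-7:00
free = idle_hours(work, sleep)                 # hours suitable for the outbound call
```

An hour-level granularity keeps the sketch short; a production scheduler would operate on minute-level intervals and the client's time zone.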
In some optional implementation manners of this embodiment, after step S206, the electronic device may further perform the following steps:
and acquiring the conversation voice information generated after the voice robot and the target client finish the conversation communication.
In this embodiment, in the process of performing a dialogue communication between the voice robot and the target client, the dialogue communication process between the voice robot and the target client is synchronously recorded, so as to generate corresponding dialogue voice information.
And analyzing the dialogue voice information, and judging whether the dialogue voice information contains target keywords or not.
In this embodiment, the target keywords may include, for example, "already paid", "willing to pay", and "unwilling to pay".
And if the target keywords are contained, generating a communication result corresponding to the dialogue voice information based on the target keywords.
In this embodiment, when it is detected that the dialog voice message includes the target keyword, the preset communication result template is called, and then the target keyword is filled into the communication result template, so as to obtain a communication result corresponding to the dialog voice message. The communication result template can be generated by writing in advance according to actual service use requirements.
According to the method and the device, after the voice robot is called to conduct the dialogue communication with the target client, the dialogue voice information generated upon completion of the communication is acquired and analyzed. If it is detected that the dialogue voice information contains a target keyword, a communication result corresponding to the dialogue voice information is generated based on the target keyword, completing the automatic outbound processing for the target client. The communication result is subsequently sent to the agent terminal of the agent, so that the agent can decide on follow-up measures according to the recorded dialogue communication of the voice robot, improving the agent's use experience.
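The keyword check and template fill described above can be sketched as follows. The keyword list and the result template are assumed examples; note that more specific keywords are checked first so that, for instance, "unwilling to pay" is not shadowed by its substring "willing to pay".

```python
# Assumed target keywords, ordered most-specific first to avoid substring
# collisions ("willing to pay" is a substring of "unwilling to pay").
TARGET_KEYWORDS = ["already paid", "unwilling to pay", "willing to pay"]

# Assumed preset communication-result template.
RESULT_TEMPLATE = "Call finished; detected customer intention: {keyword}."


def build_communication_result(dialogue_text: str):
    # Judge whether the dialogue contains a target keyword; if so, fill
    # the keyword into the preset communication-result template.
    for keyword in TARGET_KEYWORDS:
        if keyword in dialogue_text:
            return RESULT_TEMPLATE.format(keyword=keyword)
    return None  # no target keyword detected, no result generated


result = build_communication_result("The customer said he is willing to pay next week")
```

The `None` branch lets the caller distinguish calls that produced no usable intention and route them to an agent for manual review.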
In some optional implementation manners of this embodiment, after the step of generating the communication result corresponding to the dialog voice message based on the target keyword, the electronic device may further perform the following steps:
and acquiring the identity information of the seat.
In this embodiment, the identity information may include name information or agent ID information of an agent.
And acquiring communication information corresponding to the identity information.
In this embodiment, the communication information may refer to a telephone number or a mail address of the agent. The communication information of the seat can be inquired from an internal employee information database based on the identity information of the seat.
And sending the communication result to an agent terminal of the agent based on the communication information.
In this embodiment, if the communication information is a telephone number, the communication result may be sent to the agent terminal of the agent by short message or multimedia message. If the communication information is a mail address, the communication result may be sent to the agent terminal of the agent by e-mail through a mail server.
According to the method and the device, after the communication result of the completed dialogue communication between the voice robot and the target client is generated, the corresponding communication information can be acquired based on the identity information of the agent, and the communication result is then sent to the agent terminal of the agent based on the communication information, so that the agent can decide on follow-up measures according to the recorded dialogue communication of the voice robot, improving the agent's use experience.
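The channel selection described above can be sketched as a small dispatch function. The recognition rules ("@" means mail address, all-digits means telephone number) and the returned channel labels are assumptions for illustration, not part of the described embodiment.

```python
def delivery_channel(contact_info: str) -> str:
    # Choose the sending channel from the agent's communication information:
    # a mail address is recognised by its "@" sign; a string of digits is
    # treated as a telephone number for short/multimedia message delivery.
    if "@" in contact_info:
        return "email"
    if contact_info.isdigit():
        return "sms"
    raise ValueError(f"unrecognised communication information: {contact_info!r}")


channel = delivery_channel("agent01@example.com")  # hypothetical agent address
```

A real implementation would pass the communication result to the corresponding mail or SMS gateway once the channel is chosen.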
It is emphasized that, in order to further ensure the privacy and security of the intention characteristic information, the intention characteristic information can also be stored in a node of a block chain.
The block chain referred to in the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A block chain (Blockchain) is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The block chain may include a block chain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence base technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware with computer readable instructions, which can be stored in a computer readable storage medium; when the instructions are executed, the processes of the embodiments of the methods described above may be included. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless otherwise indicated herein, there is no strict order limitation on the performance of these steps, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose performance order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an intelligent outbound apparatus; this apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 3, the intelligent outbound apparatus 300 according to this embodiment includes: a first obtaining module 301, an analyzing module 302, a first determining module 303, a first processing module 304, a second determining module 305, and a second processing module 306. Wherein:
the first obtaining module 301 is configured to obtain historical call voice data between a target client and an agent, and convert the historical call voice data into a corresponding voice text;
an analysis module 302, configured to perform semantic analysis on the voice text to obtain intention feature information corresponding to the voice text; the intention characteristic information at least comprises target business scene information and target reply information;
a first determining module 303, configured to determine, from a preset conversational database, a target conversational text corresponding to the target service scene information and the target reply information;
a first processing module 304, configured to perform speech synthesis on the target conversational text to obtain corresponding conversational speech;
a second determining module 305, configured to determine idle time of the target client and obtain number information of the target client;
a second processing module 306, configured to invoke a preset voice robot to conduct dialogue communication with the target client based on the idle time, the number information, and the conversational speech.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method of the foregoing embodiment one by one, and are not described herein again.
In some optional implementations of this embodiment, the analysis module 302 includes:
the first calling sub-module is used for calling a pre-trained intention analysis model;
the analysis submodule is used for inputting the voice text into the intention analysis model, performing semantic analysis on the voice text through the intention analysis model and outputting an intention analysis result corresponding to the voice text;
and the receiving submodule is used for receiving the intention analysis result fed back by the intention analysis model and taking the intention analysis result as the intention characteristic information.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method in the foregoing embodiment one by one, and are not described herein again.
In some optional implementations of this embodiment, the analysis module 302 further includes:
the first obtaining submodule is used for obtaining a preset labeled training data set; the training data set comprises a plurality of training texts, and each training text is marked with service scene information and response information corresponding to the training text;
the second calling sub-module is used for calling a preset recurrent neural network model;
the training submodule is used for inputting the training data set into the recurrent neural network model for training so as to train the recurrent neural network model for simultaneously predicting the business scene information and the reply information to obtain a trained recurrent neural network model;
a first determining submodule, configured to use the trained recurrent neural network model as the intention analysis model;
and the storage submodule is used for storing the intention analysis model.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method of the foregoing embodiment one by one, and are not described herein again.
In some optional implementations of this embodiment, the first determining module 303 includes:
the second determining sub-module is used for determining a business process corresponding to the target reply information based on the target reply information;
the third calling submodule is used for calling the conversational database;
and the searching submodule is used for searching a conversational text corresponding to the business process and the target business scene information from the conversational database based on the business process and the target business scene information to obtain the target conversational text.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method in the foregoing embodiments one to one, and are not described herein again.
In some optional implementations of this embodiment, the second determining module 305 includes:
the second acquisition submodule is used for acquiring the client information of the target client;
the query submodule is used for querying attribute information corresponding to the client information from a preset client attribute library;
the third acquisition sub-module is used for acquiring the working attribute information from the attribute information;
a third determining submodule, configured to determine a working time of the target customer based on the working attribute information;
the fourth obtaining submodule is used for obtaining preset specified time;
and the fourth determining submodule is used for determining the idle time based on the working time and the specified time.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method of the foregoing embodiment one by one, and are not described herein again.
In some optional implementation manners of this embodiment, the intelligent outbound apparatus further includes:
the second acquisition module is used for acquiring dialogue voice information generated after the dialogue communication between the voice robot and the target client is finished;
the judging module is used for analyzing the dialogue voice information and judging whether the dialogue voice information contains target keywords or not;
and the generating module is used for generating a communication result corresponding to the conversation voice information based on the target keyword if the target keyword is contained.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method of the foregoing embodiment one by one, and are not described herein again.
In some optional implementation manners of this embodiment, the intelligent outbound apparatus further includes:
the third acquisition module is used for acquiring the identity information of the seat;
a fourth obtaining module, configured to obtain communication information corresponding to the identity information;
and the sending module is used for sending the communication result to the seat terminal of the seat based on the communication information.
In this embodiment, the operations executed by the modules or units respectively correspond to the steps of the intelligent outbound method in the foregoing embodiments one to one, and are not described herein again.
In order to solve the technical problem, the embodiment of the application further provides computer equipment. Referring to fig. 4 in particular, fig. 4 is a block diagram of a basic structure of a computer device according to the embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43, which are communicatively connected to each other via a system bus. It is noted that only a computer device 4 having components 41-43 is shown, but it is understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the computer device 4. Of course, the memory 41 may also include both internal and external storage devices of the computer device 4. In this embodiment, the memory 41 is generally used for storing an operating system installed in the computer device 4 and various types of application software, such as computer readable instructions of the intelligent outbound method. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, such as computer readable instructions for executing the intelligent outbound method.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
in the embodiment of the application, historical conversation voice data between a target client and a seat is obtained firstly, the historical conversation voice data are converted into corresponding voice texts, then semantic analysis is carried out on the voice texts to obtain intention characteristic information corresponding to the voice texts, then the target conversation texts corresponding to target business scene information and target answer information are determined from a preset conversation database, voice synthesis is carried out on the target conversation texts to obtain corresponding conversation voices, and after the idle time of the target client is determined and the number information of the target client is obtained, a preset voice robot is called to carry out conversation communication with the target client based on the idle time, the number information and the conversation voices. The embodiment of the application automatically sends the call to the target customer to carry out conversation communication about collection prompting by using the voice robot in idle time, so that the call of the customer is dialed without manual operation to prompt collection, time and labor are saved, and the processing efficiency of collection prompting is effectively improved.
The present application further provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions executable by at least one processor, to cause the at least one processor to perform the steps of the intelligent outbound method as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
in the embodiment of the application, historical conversation voice data between a target client and a seat is obtained firstly, the historical conversation voice data are converted into corresponding voice texts, then semantic analysis is carried out on the voice texts to obtain intention characteristic information corresponding to the voice texts, then the target conversation texts corresponding to target business scene information and target answer information are determined from a preset conversation database, voice synthesis is carried out on the target conversation texts to obtain corresponding conversation voices, and after the idle time of the target client is determined and the number information of the target client is obtained, a preset voice robot is called to carry out conversation communication with the target client based on the idle time, the number information and the conversation voices. The embodiment of the application automatically sends a call to the target client to carry out conversation communication about collection urging by using the voice robot in idle time, so that the client is not required to be dialed by manual operation to urge collection, time and labor are saved, and the collection urging processing efficiency is effectively improved.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and is provided for the purpose of enabling a thorough understanding of the disclosure of the application. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that modifications can be made to the embodiments described in the foregoing detailed description, or equivalents can be substituted for some of the features described therein. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.
Claims (10)
1. An intelligent outbound method, comprising the steps of:
acquiring historical call voice data between a target client and a seat, and converting the historical call voice data into corresponding voice texts;
performing semantic analysis on the voice text to obtain intention characteristic information corresponding to the voice text; the intention characteristic information at least comprises target business scene information and target reply information;
determining a target conversational text corresponding to the target service scene information and the target reply information from a preset conversational database;
carrying out voice synthesis on the target conversational text to obtain corresponding conversational speech;
determining the idle time of the target client and acquiring the number information of the target client;
and calling a preset voice robot to conduct dialogue communication with the target client based on the idle time, the number information, and the conversational speech.
2. The intelligent outbound method according to claim 1, wherein the step of performing semantic analysis on the speech text to obtain intention feature information corresponding to the speech text specifically comprises:
calling a pre-trained intention analysis model;
inputting the voice text into the intention analysis model, performing semantic analysis on the voice text through the intention analysis model, and outputting an intention analysis result corresponding to the voice text;
and receiving the intention analysis result fed back by the intention analysis model, and taking the intention analysis result as the intention characteristic information.
3. The intelligent outbound method of claim 2, further comprising, before the step of invoking a pre-trained intent analysis model:
acquiring a preset labeled training data set; the training data set comprises a plurality of training texts, and each training text is marked with service scene information and response information corresponding to the training text;
calling a preset recurrent neural network model;
inputting the training data set into the recurrent neural network model for training so as to train the recurrent neural network model for predicting business scene information and response information simultaneously and obtain a trained recurrent neural network model;
taking the trained recurrent neural network model as the intention analysis model;
storing the intent analysis model.
4. The intelligent outbound method according to claim 1, wherein the step of determining the target conversational text corresponding to the target service scene information and the target reply information from a preset conversational database specifically comprises:
determining a business process corresponding to the target reply information based on the target reply information;
calling the conversational database;
and based on the business process and the target business scene information, finding the conversational text corresponding to both the business process and the target business scene information from the conversational database to obtain the target conversational text.
5. The intelligent outbound method of claim 1 wherein said step of determining the idle time of said target client specifically comprises:
acquiring the client information of the target client;
querying, from a preset client attribute library, attribute information corresponding to the client information;
acquiring working attribute information from the attribute information;
determining a working time of the target client based on the working attribute information;
acquiring a preset specified time;
and determining the idle time based on the working time and the specified time.
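The idle-time logic of claim 5 can be sketched as follows: look up the client's working hours from an attribute library, then take the part of the preset specified calling window that falls outside working hours. The attribute library and the window values are hypothetical.

```python
from datetime import time

# Hypothetical client attribute library: client information -> attributes.
CLIENT_ATTRIBUTES = {"client_001": {"occupation": "teacher",
                                    "work_start": time(8, 0),
                                    "work_end": time(17, 0)}}

SPECIFIED_WINDOW = (time(9, 0), time(20, 0))  # preset specified calling window

def idle_time(client_id):
    attrs = CLIENT_ATTRIBUTES[client_id]
    # Idle = after the working time ends, but still inside the specified window.
    start = max(attrs["work_end"], SPECIFIED_WINDOW[0])
    end = SPECIFIED_WINDOW[1]
    return (start, end) if start < end else None

window = idle_time("client_001")
```

Real working hours can span midnight or vary by day; this sketch assumes a single same-day interval for clarity.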
6. The intelligent outbound method according to claim 1, wherein after the step of calling the preset voice robot to carry out dialogue communication with the target client based on the idle time, the number information and the speech script voice, the method further comprises:
acquiring dialogue voice information generated after the voice robot and the target client complete dialogue communication;
analyzing the dialogue voice information and judging whether the dialogue voice information contains a target keyword;
and if the target keyword is contained, generating a communication result corresponding to the dialogue voice information based on the target keyword.
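Claim 6 amounts to scanning the finished dialogue for target keywords and mapping the first hit to a communication result. The keyword table and result labels below are hypothetical examples.

```python
# Hypothetical target keywords and the communication result each one implies.
TARGET_KEYWORDS = {"will repay": "promise_to_pay",
                   "wrong number": "invalid_contact",
                   "do not call": "do_not_disturb"}

def communication_result(dialogue_text):
    text = dialogue_text.lower()
    for keyword, result in TARGET_KEYWORDS.items():
        if keyword in text:
            return result  # keyword found: derive the result from it
    return None            # no target keyword: no result is generated

result = communication_result("OK, I will repay before Friday.")
```

In practice the dialogue voice information would first pass through speech recognition; only the post-transcription keyword step is shown here.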
7. The intelligent outbound method according to claim 1, further comprising, after the step of generating the communication result corresponding to the dialogue voice information based on the target keyword:
acquiring identity information of an agent;
acquiring communication information corresponding to the identity information;
and sending the communication result to an agent terminal of the agent based on the communication information.
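The claim-7 delivery step can be sketched as resolving the agent's communication information from identity information and then sending the result to that agent terminal. The directory and the transport (a plain list acting as an outbox) are hypothetical stand-ins for a real messaging channel.

```python
# Hypothetical directory: agent identity information -> communication information.
AGENT_DIRECTORY = {"agent_42": {"terminal": "sip:agent42@example.invalid"}}

sent = []  # stand-in outbox for the real agent-terminal transport

def send_result_to_agent(agent_id, result):
    contact = AGENT_DIRECTORY[agent_id]["terminal"]  # identity -> communication info
    sent.append((contact, result))                   # deliver to the agent terminal
    return contact

addr = send_result_to_agent("agent_42", "promise_to_pay")
```

Separating the lookup from the transport keeps the delivery channel (SIP, push, internal queue) swappable.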
8. An intelligent outbound device, comprising:
a first acquisition module, configured to acquire historical call voice data between a target client and an agent and convert the historical call voice data into a corresponding voice text;
an analysis module, configured to perform semantic analysis on the voice text to obtain intention feature information corresponding to the voice text, the intention feature information comprising at least target business scene information and target reply information;
a first determining module, configured to determine, from a preset speech script database, a target speech script text corresponding to the target business scene information and the target reply information;
a first processing module, configured to perform speech synthesis on the target speech script text to obtain a corresponding speech script voice;
a second determining module, configured to determine an idle time of the target client and acquire number information of the target client;
and a second processing module, configured to call a preset voice robot to carry out dialogue communication with the target client based on the idle time, the number information and the speech script voice.
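The modules of claim 8 can be sketched as one pipeline object whose method calls mirror the six modules in order. Every component below is a hypothetical stub passed in as a callable, so only the wiring between modules is illustrated.

```python
class IntelligentOutboundDevice:
    """Hypothetical wiring of the six claim-8 modules."""

    def __init__(self, asr, analyzer, script_db, tts, scheduler, robot):
        self.asr, self.analyzer = asr, analyzer       # acquisition + analysis
        self.script_db, self.tts = script_db, tts     # script lookup + synthesis
        self.scheduler, self.robot = scheduler, robot # timing + dialogue

    def run(self, client):
        text = self.asr(client)               # first acquisition module
        intent = self.analyzer(text)          # analysis module
        script = self.script_db(intent)       # first determining module
        voice = self.tts(script)              # first processing module
        when, number = self.scheduler(client) # second determining module
        return self.robot(when, number, voice)  # second processing module

device = IntelligentOutboundDevice(
    asr=lambda c: "i will repay",
    analyzer=lambda t: ("collection", "agree"),
    script_db=lambda i: "script",
    tts=lambda s: "voice",
    scheduler=lambda c: ("18:00", "555-0100"),
    robot=lambda when, number, voice: (when, number, voice),
)
outcome = device.run("client_001")
```

Injecting each module as a callable keeps the device testable with stubs, as done here.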
9. A computer device, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, implement the steps of the intelligent outbound method of any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the intelligent outbound method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211157027.2A CN115567646A (en) | 2022-09-21 | 2022-09-21 | Intelligent outbound method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115567646A true CN115567646A (en) | 2023-01-03 |
Family
ID=84741943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211157027.2A Pending CN115567646A (en) | 2022-09-21 | 2022-09-21 | Intelligent outbound method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115567646A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110166639A (en) * | 2019-04-15 | 2019-08-23 | 中国平安人寿保险股份有限公司 | Voice collection method, apparatus, computer equipment and storage medium |
CN111949784A (en) * | 2020-08-14 | 2020-11-17 | 中国工商银行股份有限公司 | Outbound method and device based on intention recognition |
Similar Documents
Publication | Title |
---|---|
CN110442697B (en) | Man-machine interaction method, system, computer equipment and storage medium |
EP4174849B1 (en) | Automatic generation of a contextual meeting summary |
CN112925911B (en) | Complaint classification method based on multi-modal data and related equipment thereof |
CN112836521A (en) | Question-answer matching method and device, computer equipment and storage medium |
CN110931002B (en) | Man-machine interaction method, device, computer equipment and storage medium |
CN111899765B (en) | Speech sending method and device based on emotion prediction model and computer equipment |
CN117312535A (en) | Method, device, equipment and medium for processing problem data based on artificial intelligence |
CN112669850A (en) | Voice quality detection method and device, computer equipment and storage medium |
CN108055192A (en) | Group generation method, apparatus and system |
CN116681045A (en) | Report generation method, report generation device, computer equipment and storage medium |
CN116563034A (en) | Purchase prediction method, device, equipment and storage medium based on artificial intelligence |
CN116166858A (en) | Information recommendation method, device, equipment and storage medium based on artificial intelligence |
CN113157896B (en) | Voice dialogue generation method and device, computer equipment and storage medium |
CN115730603A (en) | Information extraction method, device, equipment and storage medium based on artificial intelligence |
CN115567646A (en) | Intelligent outbound method, device, computer equipment and storage medium |
CN114339132A (en) | Intelligent conference summary method and device of video conference and computer equipment |
CN114637831A (en) | Data query method based on semantic analysis and related equipment thereof |
CN113609833A (en) | Dynamic generation method and device of file, computer equipment and storage medium |
CN116684529A (en) | Outbound processing method, outbound processing device, computer equipment and storage medium |
CN117273848A (en) | Product recommendation method, device, equipment and storage medium based on artificial intelligence |
CN117251631A (en) | Information recommendation method, device, equipment and storage medium based on artificial intelligence |
CN117131093A (en) | Service data processing method, device, equipment and medium based on artificial intelligence |
CN115811572A (en) | Information input method, device, server and storage medium |
CN116821298A (en) | Keyword automatic identification method applied to application information and related equipment |
CN116595966A (en) | User complaint processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||