CN111259119A - Question recommendation method and device - Google Patents

Question recommendation method and device

Info

Publication number
CN111259119A
CN111259119A (application CN201811458062.1A; granted as CN111259119B)
Authority
CN
China
Prior art keywords
candidate
training
question
request
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811458062.1A
Other languages
Chinese (zh)
Other versions
CN111259119B (en)
Inventor
张姣姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN201811458062.1A
Publication of CN111259119A
Application granted
Publication of CN111259119B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/01Customer relationship services
    • G06Q30/015Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
    • G06Q30/016After-sales

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a question recommendation method and device, wherein the method comprises the following steps: after a session request initiated by a requesting end is detected, determining, based on the feature information of the requesting end and a first prediction model common to the different candidate questions, the probability that each candidate question in a candidate question set will be accepted if recommended to the requesting end; determining, based on the feature information of the requesting end and a second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question will be accepted by the requesting end; screening from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting end; and selecting the question to be recommended to the requesting end from the at least one target candidate question according to the acceptance probability corresponding to each target candidate question. In this way, questions can be recommended to each requesting end in a personalized manner, better meeting the consultation needs of different requesting ends.

Description

Question recommendation method and device
Technical Field
The application relates to the field of internet technology, and in particular to a question recommendation method and device.
Background
With the rapid development and popularization of the internet, internet applications of all kinds have emerged one after another, such as online shopping applications and online ride-hailing applications. Users may encounter questions while using these applications and need consulting services, so such applications are generally configured with a consultation function to serve their users.
When a user consults a question, the consultation system generally recommends some candidate questions so that the user can select the question to consult. At present, consultation systems recommend questions through static configuration, that is, the candidate questions selectable by the user are configured in advance. However, this static configuration is difficult to adapt to the consultation needs of different users; for example, a question a user wants to consult may not be among the candidate questions, so the user still has to spend time describing the question or going through the candidate questions, resulting in inefficient consultation.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a question recommendation method and apparatus, so as to better meet the consultation needs of different users and improve the efficiency of question consultation.
In a first aspect, the present application provides a question recommendation method, including:
after detecting that a requesting end has initiated a session request, determining, based on the feature information of the requesting end and a pre-trained first prediction model common to the different candidate questions, the probability that each candidate question in a candidate question set will be accepted if recommended to the requesting end;
determining, based on the feature information of the requesting end and a pre-trained second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question in the candidate question set will be accepted by the requesting end;
screening from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting end; and
selecting the question to be recommended to the requesting end from the at least one target candidate question according to the acceptance probability corresponding to the at least one target candidate question.
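The two-stage screening above can be sketched as follows. This is a hypothetical, minimal illustration: the model interfaces (`general_model`, `per_question_models`) and the toy questions are stand-ins, not the patent's actual implementation.

```python
# Hypothetical sketch of the two-stage screening: a shared "first"
# model scores acceptance probabilities, per-question "second" models
# filter candidates, and the survivors are ranked by probability.

def recommend_questions(feature_vector, candidates, general_model,
                        per_question_models, k=3):
    """Return up to k questions predicted to be accepted, ranked by
    the shared model's acceptance probability."""
    # Stage 1: the shared model scores every candidate question.
    probs = general_model(feature_vector)           # {question: probability}
    # Stage 2: each candidate's own binary model decides acceptance.
    targets = [q for q in candidates
               if per_question_models[q](feature_vector)]
    # Rank the surviving target candidates by acceptance probability.
    return sorted(targets, key=lambda q: probs[q], reverse=True)[:k]

# Toy stand-ins for the trained models (illustrative only).
def general(x):
    return {"fare dispute": 0.7, "lost item": 0.2, "app crash": 0.5}

per_q = {"fare dispute": lambda x: True,
         "lost item": lambda x: True,
         "app crash": lambda x: False}

print(recommend_questions([0.1, 0.9],
                          ["fare dispute", "lost item", "app crash"],
                          general, per_q, k=2))
# → ['fare dispute', 'lost item']
```

Note the division of labor: the second models act as hard accept/reject filters, while the first model's probabilities are only used to rank and select among the filtered survivors.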
In a possible implementation, the selecting, according to the acceptance probability corresponding to the at least one target candidate question, the question to be recommended to the requesting end from the at least one target candidate question includes:
taking, as the question recommended to the requesting end, any target candidate question whose acceptance probability is higher than a preset probability value.
In a possible implementation, the selecting, according to the acceptance probability corresponding to the at least one target candidate question, the question to be recommended to the requesting end from the at least one target candidate question includes:
arranging the at least one target candidate question in descending order of acceptance probability; and
taking the top k target candidate questions by acceptance probability as the questions recommended to the requesting end, where k is a positive integer.
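The two selection strategies above (probability threshold versus top-k) can be sketched as follows; the probabilities and question names are illustrative.

```python
# Sketch of the two selection strategies described above.
accepted_probs = {"q1": 0.82, "q2": 0.15, "q3": 0.64}  # target candidates

# Strategy 1: keep targets whose acceptance probability exceeds
# a preset probability value.
threshold = 0.5
by_threshold = [q for q, p in accepted_probs.items() if p > threshold]

# Strategy 2: sort by acceptance probability (descending) and keep
# the first k questions.
k = 2
by_top_k = sorted(accepted_probs, key=accepted_probs.get, reverse=True)[:k]

print(sorted(by_threshold))  # → ['q1', 'q3']
print(by_top_k)              # → ['q1', 'q3']
```

The threshold strategy yields a variable number of recommendations, while top-k fixes the count; which fits better depends on how much screen space the consultation interface has.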
In a possible implementation, the determining, based on the feature information of the requesting end and the pre-trained first prediction model common to the different candidate questions, the probability that each candidate question in the candidate question set will be accepted if recommended to the requesting end includes:
performing feature extraction on the feature information to obtain a feature vector; and
inputting the feature vector into the pre-trained first prediction model, which outputs the acceptance probability of each candidate question in the candidate question set for the requesting end.
In a possible implementation, the determining, based on the feature information of the requesting end and the pre-trained second prediction model matched to each candidate question in the candidate question set, the prediction result of whether each candidate question will be accepted by the requesting end includes:
performing feature extraction on the feature information to obtain a feature vector; and
inputting the feature vector extracted from the feature information into the pre-trained second prediction model matched to each candidate question, which outputs the prediction result of whether that candidate question will be accepted by the requesting end.
In a possible implementation, the questions recommended to the requesting end further include a preset prompting question, which is used to ask whether the requesting end needs a response to some other question.
In a possible implementation, before detecting that the requesting end initiates the session request, the method further includes:
counting the total number of times each kind of question was requested by different requesting ends within a second historical time period; and
taking the questions whose counted totals meet a preset condition as candidate questions to form the candidate question set.
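Building the candidate set from historical request counts could look like the following sketch; the request log and the "preset condition" (a minimum total count) are illustrative assumptions.

```python
# Sketch of forming the candidate question set: count how often each
# kind of question was requested in the second historical period, and
# keep those meeting a preset condition (here: a minimum count).
from collections import Counter

# Questions requested by different requesting ends (illustrative log).
history = ["fare dispute", "app crash", "fare dispute",
           "lost item", "fare dispute", "app crash"]

counts = Counter(history)
min_total = 2  # preset condition on the total number of requests
candidate_set = {q for q, n in counts.items() if n >= min_total}

print(sorted(candidate_set))  # → ['app crash', 'fare dispute']
```

Rarely requested questions ("lost item" above) are excluded, which keeps the candidate set small enough that training one second prediction model per candidate question remains practical.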
In a possible implementation, when the requesting end is a service provider terminal, the feature information includes at least one of the following:
personal description information of the service provider;
order description information of the order most recently processed by the service provider;
personal description information of the service requester of the most recently processed order;
order state information at the time the service provider initiated the session request;
the location and time at which the service provider initiated the session request; and
order aggregate information of the service provider over a first historical time period.
In a possible embodiment, the method further comprises:
acquiring historical session record information within a third historical time period, the historical session record information including the historical feature information of each requesting end at each session request and the historical question each requesting end asked at each session request;
extracting a historical feature vector corresponding to each piece of historical feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, each training sample corresponding to one question label, different question labels identifying the historical questions corresponding to different historical feature vectors; and
training the first prediction model on the first sample training set until the training of the first prediction model is determined to be complete.
In a possible embodiment, training the first prediction model on the first sample training set until the training is determined to be complete includes:
inputting a preset number of training samples from the first sample training set into the first prediction model, which outputs, for each input training sample, the historical acceptance probability of each candidate question in the candidate question set, and determining for each training sample the candidate question with the highest historical acceptance probability;
determining a first loss value for the current round of training by comparing, for each training sample, the candidate question with the highest historical acceptance probability against the question label of that sample; and
when the first loss value is greater than a first set value, adjusting the model parameters of the first prediction model and performing the next round of training with the adjusted model, until the first loss value is less than or equal to the first set value, at which point the training of the first prediction model is determined to be complete.
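One way such a training loop could look is sketched below, using a plain softmax classifier as a stand-in for the first prediction model. The patent does not specify the model family, loss, or optimizer; the cross-entropy loss, gradient update, and toy data here are all illustrative assumptions. Only the stopping rule (iterate until the loss falls to a preset value) mirrors the text.

```python
# Hypothetical sketch: train a softmax classifier over candidate
# questions until the loss drops to a preset "first set value".
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_first_model(samples, labels, n_classes, first_set_value=0.3, lr=0.5):
    n_feats = len(samples[0])
    W = [[0.0] * n_feats for _ in range(n_classes)]   # model parameters
    while True:
        loss = 0.0
        grads = [[0.0] * n_feats for _ in range(n_classes)]
        for x, y in zip(samples, labels):
            # Acceptance probabilities of each candidate question.
            p = softmax([sum(w * f for w, f in zip(W[c], x))
                         for c in range(n_classes)])
            loss -= math.log(p[y] + 1e-12)            # cross-entropy loss
            for c in range(n_classes):
                g = p[c] - (1.0 if c == y else 0.0)
                for j in range(n_feats):
                    grads[c][j] += g * x[j]
        loss /= len(samples)
        if loss <= first_set_value:                   # training complete
            return W
        for c in range(n_classes):                    # adjust parameters
            for j in range(n_feats):
                W[c][j] -= lr * grads[c][j] / len(samples)

# Toy historical feature vectors with their question labels.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [0, 0, 1, 1]
W = train_first_model(X, y, n_classes=2)
```

A production version would use a library optimizer and a held-out validation set, but the loop structure (compute loss, compare to the set value, adjust parameters, repeat) is the same.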
In a possible embodiment, the method further comprises:
for each candidate question in the candidate question set, generating a second prediction model matched to that candidate question and a second sample training set corresponding to that candidate question; and
training the second prediction model matched to each candidate question on the corresponding second sample training set until the training of that second prediction model is determined to be complete.
In a possible embodiment, generating the second sample training set corresponding to each candidate question includes:
for a first candidate question, which is any candidate question in the candidate question set, performing the following operations:
screening, from the historical session record information, first historical feature information of first requesting ends and second historical feature information of second requesting ends, where a first requesting end is one whose requested historical question is the first candidate question, and a second requesting end is one whose requested historical question is not the first candidate question;
extracting a first historical feature vector corresponding to each piece of first historical feature information, and a second historical feature vector corresponding to each piece of second historical feature information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and each extracted second historical feature vector as a negative training sample to form a negative sample training set; and
forming the second sample training set corresponding to the first candidate question from the positive sample training set and the negative sample training set;
each positive training sample corresponds to a positive label indicating that the question requested by the requesting end is the first candidate question, and each negative training sample corresponds to a negative label indicating that it is not.
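The construction of one candidate question's training set can be sketched as follows; the session records and the chosen first candidate question are illustrative.

```python
# Sketch of building the per-question ("second") training set:
# sessions that requested the first candidate question become positive
# samples, all other sessions become negative samples.

# (feature_vector, question_requested) pairs from historical sessions.
records = [([1.0, 0.2], "fare dispute"),
           ([0.3, 0.9], "lost item"),
           ([0.8, 0.1], "fare dispute"),
           ([0.2, 0.7], "app crash")]

first_candidate = "fare dispute"

# Positive samples: requesting ends that asked the first candidate
# question (label 1); negatives asked some other question (label 0).
positives = [(x, 1) for x, q in records if q == first_candidate]
negatives = [(x, 0) for x, q in records if q != first_candidate]
second_training_set = positives + negatives

print(len(positives), len(negatives))  # → 2 2
```

Repeating this for every candidate question yields one balanced-or-not binary dataset per question, which is why the patent trains a separate second prediction model for each.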
In a possible embodiment, training the second prediction model matched to each candidate question on the corresponding second sample training set until the training is determined to be complete includes:
for the second prediction model matched to the first candidate question, executing the following training process:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
inputting these positive and negative training samples into the second prediction model matched to the first candidate question, which outputs a classification result for each training sample, the classification result indicating whether the question requested by the requesting end is the first candidate question;
determining a second loss value for the current round of training by comparing the classification result of each positive training sample against the positive label and the classification result of each negative training sample against the negative label; and
when the second loss value is greater than a second set value, adjusting the model parameters of the second prediction model and performing the next round of training with the adjusted model, until the second loss value is less than or equal to the second set value, at which point the training of the second prediction model matched to the first candidate question is determined to be complete.
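The per-question binary training loop above could be sketched with logistic regression standing in for the second prediction model. As with the first model, the loss function, update rule, and data below are illustrative assumptions; only the stop-when-loss-reaches-the-second-set-value structure comes from the text.

```python
# Hypothetical sketch: train one binary classifier (logistic
# regression) per candidate question until the log loss falls to a
# preset "second set value".
import math

def train_second_model(pos, neg, second_set_value=0.3, lr=1.0):
    samples = [(x, 1) for x in pos] + [(x, 0) for x in neg]
    w = [0.0] * len(samples[0][0])        # model parameters
    while True:
        loss, grad = 0.0, [0.0] * len(w)
        for x, label in samples:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))           # P(accepted)
            loss -= math.log(max(p if label else 1.0 - p, 1e-12))
            for j, xj in enumerate(x):
                grad[j] += (p - label) * xj
        loss /= len(samples)
        if loss <= second_set_value:       # training complete
            return w
        for j in range(len(w)):            # adjust model parameters
            w[j] -= lr * grad[j] / len(samples)

pos = [[1.0, 0.1], [0.9, 0.2]]   # feature vectors of positive samples
neg = [[0.1, 1.0], [0.2, 0.8]]   # feature vectors of negative samples
w = train_second_model(pos, neg)
```

At inference time, such a model's thresholded output (accepted or not) is exactly the per-question prediction result the first aspect screens on.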
In a second aspect, the present application provides a question recommendation device, including:
a first determination module, configured to determine, after detecting that a requesting end has initiated a session request, based on the feature information of the requesting end and a pre-trained first prediction model common to the different candidate questions, the probability that each candidate question in a candidate question set will be accepted if recommended to the requesting end;
a second determination module, configured to determine, based on the feature information of the requesting end and a pre-trained second prediction model matched to each candidate question in the candidate question set, a prediction result of whether each candidate question will be accepted by the requesting end;
a first screening module, configured to screen from the candidate question set, according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting end; and
a second screening module, configured to select the question to be recommended to the requesting end from the at least one target candidate question according to the corresponding acceptance probability.
In one possible design, when selecting the question to be recommended to the requesting end from the at least one target candidate question according to the corresponding acceptance probability, the second screening module is specifically configured to:
take, as the question recommended to the requesting end, any target candidate question whose acceptance probability is higher than a preset probability value.
In one possible design, when selecting the question to be recommended to the requesting end from the at least one target candidate question according to the corresponding acceptance probability, the second screening module is specifically configured to:
arrange the at least one target candidate question in descending order of acceptance probability; and
take the top k target candidate questions by acceptance probability as the questions recommended to the requesting end, where k is a positive integer.
In one possible design, when determining, based on the feature information of the requesting end and the pre-trained first prediction model common to the different candidate questions, the probability that each candidate question in the candidate question set will be accepted if recommended to the requesting end, the first determination module is specifically configured to:
perform feature extraction on the feature information to obtain a feature vector; and
input the feature vector into the pre-trained first prediction model, which outputs the acceptance probability of each candidate question in the candidate question set for the requesting end.
In one possible design, when determining, based on the feature information of the requesting end and the pre-trained second prediction model matched to each candidate question in the candidate question set, the prediction result of whether each candidate question will be accepted by the requesting end, the second determination module is specifically configured to:
perform feature extraction on the feature information to obtain a feature vector; and
input the feature vector extracted from the feature information into the pre-trained second prediction model matched to each candidate question, which outputs the prediction result of whether that candidate question will be accepted by the requesting end.
In one possible design, the questions recommended to the requesting end further include a preset prompting question, which is used to ask whether the requesting end needs a response to some other question.
In one possible design, before detecting that the requesting end initiates the session request, the first determination module is further configured to:
count the total number of times each kind of question was requested by different requesting ends within a second historical time period; and
take the questions whose counted totals meet a preset condition as candidate questions to form the candidate question set.
In one possible design, when the requesting end is a service provider terminal, the feature information includes at least one of the following:
personal description information of the service provider;
order description information of the order most recently processed by the service provider;
personal description information of the service requester of the most recently processed order;
order state information at the time the service provider initiated the session request;
the location and time at which the service provider initiated the session request; and
order aggregate information of the service provider over a first historical time period.
In one possible design, the apparatus further includes:
a first model training module, configured to acquire historical session record information within a third historical time period, the historical session record information including the historical feature information of each requesting end at each session request and the historical question each requesting end asked at each session request;
extract a historical feature vector corresponding to each piece of historical feature information;
take each extracted historical feature vector as a training sample to form a first sample training set, each training sample corresponding to one question label, different question labels identifying the historical questions corresponding to different historical feature vectors; and
train the first prediction model on the first sample training set until the training of the first prediction model is determined to be complete.
In one possible design, when training the first prediction model on the first sample training set until the training is determined to be complete, the first model training module is specifically configured to:
input a preset number of training samples from the first sample training set into the first prediction model, which outputs, for each input training sample, the historical acceptance probability of each candidate question in the candidate question set, and determine for each training sample the candidate question with the highest historical acceptance probability;
determine a first loss value for the current round of training by comparing, for each training sample, the candidate question with the highest historical acceptance probability against the question label of that sample; and
when the first loss value is greater than a first set value, adjust the model parameters of the first prediction model and perform the next round of training with the adjusted model, until the first loss value is less than or equal to the first set value, at which point the training of the first prediction model is determined to be complete.
In one possible design, the apparatus further includes:
a second model training module, configured to generate, for each candidate question in the candidate question set, a second prediction model matched to that candidate question and a second sample training set corresponding to that candidate question; and
train the second prediction model matched to each candidate question on the corresponding second sample training set until the training of that second prediction model is determined to be complete.
In one possible design, when generating the second sample training set corresponding to each candidate question, the second model training module is specifically configured to:
for a first candidate question, which is any candidate question in the candidate question set, perform the following operations:
screen, from the historical session record information, first historical feature information of first requesting ends and second historical feature information of second requesting ends, where a first requesting end is one whose requested historical question is the first candidate question, and a second requesting end is one whose requested historical question is not the first candidate question;
extract a first historical feature vector corresponding to each piece of first historical feature information, and a second historical feature vector corresponding to each piece of second historical feature information;
take each extracted first historical feature vector as a positive training sample to form a positive sample training set, and each extracted second historical feature vector as a negative training sample to form a negative sample training set; and
form the second sample training set corresponding to the first candidate question from the positive sample training set and the negative sample training set;
each positive training sample corresponds to a positive label indicating that the question requested by the requesting end is the first candidate question, and each negative training sample corresponds to a negative label indicating that it is not.
In one possible design, when training the second prediction model matched to each candidate question on the corresponding second sample training set until the training is determined to be complete, the second model training module is specifically configured to:
for the second prediction model matched to the first candidate question, execute the following training process:
acquire a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
input these positive and negative training samples into the second prediction model matched to the first candidate question, which outputs a classification result for each training sample, the classification result indicating whether the question requested by the requesting end is the first candidate question;
determine a second loss value for the current round of training by comparing the classification result of each positive training sample against the positive label and the classification result of each negative training sample against the negative label; and
when the second loss value is greater than a second set value, adjust the model parameters of the second prediction model and perform the next round of training with the adjusted model, until the second loss value is less than or equal to the second set value, at which point the training of the second prediction model matched to the first candidate question is determined to be complete.
The functions of the above modules are as described for the first aspect and are not repeated here.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device runs, and the machine-readable instructions, when executed by the processor, perform the steps of the question recommendation method of the first aspect or any of its possible embodiments.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the question recommendation method of the first aspect or any of its possible embodiments.
In this embodiment of the application, after the request end initiates a session request, the server may obtain the feature information of the request end and then predict, using a first prediction model common to different candidate problems and a second prediction model matched with each candidate problem in the candidate problem set, both the accepted probability of each candidate problem and whether each candidate problem will be accepted by the request end. Further, at least one target candidate problem predicted to be accepted by the request end can be screened from the candidate problem set, and the problem finally recommended to the user is then determined according to the accepted probability corresponding to each target candidate problem. Compared with a scheme that configures candidate problems in advance, this scheme can, based on the feature information of each request end and the two types of prediction models, screen from the candidate problem set the problems most likely to be accepted by the request end and recommend them to the request end.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 illustrates a block diagram of a service system 100 of some embodiments of the present application;
FIG. 2 illustrates a schematic diagram of exemplary hardware and software components of an electronic device 200 of some embodiments of the present application;
FIG. 3 is a flow chart illustrating a problem recommendation method according to an embodiment of the present application;
FIG. 4 shows an exemplary illustrative diagram of a DNN model provided by an embodiment of the present application;
FIG. 5 is a flowchart illustrating a problem recommendation method in a specific application scenario according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating training a first prediction model according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart illustrating the generation of a second sample training set according to an embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating training a second predictive model according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram illustrating an issue recommending apparatus according to an embodiment of the present application;
FIG. 10 shows a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and that steps without a logical dependency may be performed in reverse order or simultaneously. Moreover, under the guidance of this application, one skilled in the art may add one or more other operations to, or remove one or more operations from, each flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
To enable those skilled in the art to use the present disclosure, the following embodiments are given in connection with the specific application scenario "a user consults a service system about a problem". It will be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the application. Although the present application is primarily described in the context of a taxi service system, it should be understood that this is merely one exemplary embodiment; the present application may be applied to any other transportation-type service system. For example, the present application may be applied to different transportation system environments, including terrestrial, marine, or airborne environments, among others, or any combination thereof. The vehicle of the transportation system may include a taxi, a private car, a ride-sharing car, a bus, a train, a bullet train, a high-speed rail, a subway, a ship, an airplane, a spacecraft, a hot air balloon, or an unmanned vehicle, etc., or any combination thereof. The present application may further include any service system capable of providing a consultation service, for example, a system for providing consultation service to users of an online shopping platform, or a system for providing consultation service to users of an online ordering platform. The way of providing the consultation service in the present application includes, but is not limited to, the following two types: one is online consultation, i.e., consulting questions over a network, and the other is hotline consultation, i.e., consulting questions by dialing a customer service hotline. Applications of the system or method of the present application may include web pages, plug-ins for browsers, client terminals, customization systems, internal analysis systems, or artificial intelligence robots, among others, or any combination thereof.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
The terms "passenger," "requestor," "service requestor" are used interchangeably in this application to refer to an individual, entity, or tool that can request or order a service. The terms "driver," "provider," "service provider" are used interchangeably in this application to refer to an individual, entity, or tool that can provide a service. The term "user" in this application may refer to an individual, entity or tool that requests a service, subscribes to a service, provides a service, or facilitates the provision of a service. In the embodiment of the present application, the user may be, for example, a passenger as a service requester, a driver as a service provider, or the like, or any combination thereof.
One aspect of the present application relates to a service system. When the system processes a consultation service, it can predict the problem that each request end will request to have answered according to the feature information of different request ends and a prediction model trained in advance through a deep learning algorithm, and recommend a matched problem for each request end in a personalized manner based on the prediction result corresponding to that request end.
It is worth noting that, before the present application, existing consultation systems mostly pre-configured candidate problems in a static manner: when a request end has a problem to consult, the consultation system recommends the pre-configured candidate problems to it. This recommendation manner is difficult to adapt to the consultation needs of different users, and it easily occurs that a user spends time reading or listening to the recommendations of the consultation system but still cannot find the problem to be consulted, so that problem consultation is inefficient and the user experience is poor. In contrast, the problem recommendation method provided by the present application recommends problems for each request end in a deep learning manner according to the feature information of different request ends. This personalized recommendation manner can better meet the consultation needs of the users of different request ends, reduce the time users spend waiting during consultation, improve the efficiency of problem consultation, and further improve the user experience.
Fig. 1 is a block diagram of a service system 100 of some embodiments of the present application. For example, the service system 100 may be an online transportation service platform for transportation services such as taxi cab, designated drive service, express, carpool, bus service, driver rental, or shift service, or any combination thereof. The service system 100 may include one or more of a server 110, a network 120, a service requester terminal 130, a service provider terminal 140, and a database 150, and the server 110 may include a processor therein that performs instruction operations.
In some embodiments, the server 110 may be a single server or a group of servers. The set of servers can be centralized or distributed (e.g., the servers 110 can be a distributed system). In some embodiments, the server 110 may be local or remote to the terminal. For example, the server 110 may access information and/or data stored in the service requester terminal 130, the service provider terminal 140, or the database 150, or any combination thereof, via the network 120. As another example, the server 110 may be directly connected to at least one of the service requester terminal 130, the service provider terminal 140, and the database 150 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform; by way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud (community cloud), a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof. In some embodiments, the server 110 may be implemented on an electronic device 200 having one or more of the components shown in FIG. 2 in the present application.
In some embodiments, the electronic device 200 may include a processor 220. Processor 220 may process information and/or data related to a service request (a service request in this application includes a session request sent by a requester when consulting a problem, a problem consultation request, etc.) to perform one or more functions described in this application. For example, the processor 220 may establish a session connection with the service requester terminal 130 based on a session request obtained from the service requester terminal 130. In some embodiments, processor 220 may include one or more processing cores (e.g., a single-core or multi-core processor). Merely by way of example, processor 220 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
Network 120 may be used for the exchange of information and/or data. In some embodiments, one or more components in the service system 100 (e.g., the server 110, the service requester terminal 130, the service provider terminal 140, and the database 150) may send information and/or data to other components. For example, the server 110 may obtain a service request from the service requester terminal 130 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. Merely by way of example, network 120 may include a wired network, a wireless network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, network 120 may include wired or wireless network access points, such as base stations and/or network switching nodes, through which one or more components of the service system 100 may connect to network 120 to exchange data and/or information.
In some embodiments, the user of the service requester terminal 130 may be the actual demander of the service or another person other than the actual demander of the service. For example, the user a of the service requester terminal 130 may use the service requester terminal 130 to initiate a service request for the service actual demander B (for example, the user a may call a car for his friend B), or receive service information or instructions from the server 110. In some embodiments, the user of the service provider terminal 140 may be the actual provider of the service or may be another person than the actual provider of the service. For example, user C of the service provider terminal 140 may use the service provider terminal 140 to receive a service request serviced by the service provider entity D (e.g., user C may pick up an order for driver D employed by user C), and/or information or instructions from the server 110. In some embodiments, "service requester" and "service requester terminal" may be used interchangeably, and "service provider" and "service provider terminal" may be used interchangeably.
In some embodiments, the service requester terminal 130 may comprise a mobile device, a tablet computer, a laptop computer, or a built-in device in a motor vehicle, etc., or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, control devices for smart electrical appliances, smart monitoring devices, smart televisions, smart cameras, or walkie-talkies, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart shoelaces, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, or a point-of-sale (POS) device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include various virtual reality products and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the service requester terminal 130 may be a device having positioning technology for locating the position of the service requester and/or the service requester terminal.
In some embodiments, the service provider terminal 140 may be a similar or identical device as the service requestor terminal 130. In some embodiments, the service provider terminal 140 may be a device with location technology for locating the location of the service provider and/or the service provider terminal. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may communicate with other locating devices to determine the location of the service requester, service requester terminal 130, service provider, or service provider terminal 140, or any combination thereof. In some embodiments, the service requester terminal 130 and/or the service provider terminal 140 may transmit the location information to the server 110.
Database 150 may store data and/or instructions. In some embodiments, the database 150 may store data obtained from the service requester terminal 130 and/or the service provider terminal 140. In some embodiments, database 150 may store data and/or instructions for the exemplary methods described herein. In some embodiments, database 150 may include mass storage, removable storage, volatile read-write memory, or Read-Only Memory (ROM), among others, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state drives, and the like; removable memory may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, and the like; volatile read-write memory may include Random Access Memory (RAM); the RAM may include Dynamic RAM (DRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Static RAM (SRAM), Thyristor-Based Random Access Memory (T-RAM), Zero-Capacitor RAM (Z-RAM), and the like. By way of example, ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Compact Disc ROM (CD-ROM), Digital Versatile Disc ROM (DVD-ROM), and the like. In some embodiments, database 150 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, a database 150 may be connected to the network 120 to communicate with one or more components in the service system 100 (e.g., the server 110, the service requester terminal 130, the service provider terminal 140, etc.). One or more components in the service system 100 may access data or instructions stored in the database 150 via the network 120. In some embodiments, the database 150 may be directly connected to one or more components in the service system 100 (e.g., the server 110, the service requestor terminal 130, the service provider terminal 140, etc.); alternatively, in some embodiments, database 150 may also be part of server 110.
In some embodiments, one or more components in the service system 100 (e.g., the server 110, the service requestor terminal 130, the service provider terminal 140, etc.) may have access to the database 150. In some embodiments, one or more components in the service system 100 may read and/or modify information related to a service requestor, a service provider, or the public, or any combination thereof, when certain conditions are met. For example, server 110 may read and/or modify information for one or more users after receiving a service request. As another example, the service provider terminal 140 may access information related to the service requester when receiving the service request from the service requester terminal 130, but the service provider terminal 140 may not modify the related information of the service requester.
In some embodiments, the exchange of information by one or more components in the service system 100 may be accomplished by requesting a service. The object of the service request may be any product. In some embodiments, the product may be a tangible product or a non-physical product. Tangible products may include food, pharmaceuticals, commodities, chemical products, appliances, clothing, automobiles, homes, or luxury goods, and the like, or any combination thereof. The non-material product may include a service product, a financial product, a knowledge product, an internet product, or the like, or any combination thereof. The internet product may include a stand-alone host product, a network product, a mobile internet product, a commercial host product, an embedded product, or the like, or any combination thereof. The internet product may be used in software, programs, or systems of the mobile terminal, etc., or any combination thereof. The mobile terminal may include a tablet, a laptop, a mobile phone, a Personal Digital Assistant (PDA), a smart watch, a Point of sale (POS) device, a vehicle-mounted computer, a vehicle-mounted television, a wearable device, or the like, or any combination thereof. The internet product may be, for example, any software and/or application used in a computer or mobile phone. The software and/or applications may relate to social interaction, shopping, transportation, entertainment time, learning, or investment, or the like, or any combination thereof. In some embodiments, the transportation-related software and/or applications may include travel software and/or applications, vehicle dispatch software and/or applications, mapping software and/or applications, and the like. 
In the vehicle scheduling software and/or application, the vehicle may include a horse, a carriage, a human-powered vehicle (e.g., unicycle, bicycle, tricycle, etc.), an automobile (e.g., taxi, bus, private car, etc.), a train, a subway, a ship, an airplane (e.g., airplane, helicopter, space shuttle, rocket, hot air balloon, etc.), etc., or any combination thereof.
Fig. 2 illustrates a schematic diagram of exemplary hardware and software components of an electronic device 200 of a server 110, a service requester terminal 130, a service provider terminal 140, which may implement the concepts of the present application, according to some embodiments of the present application. For example, the processor 220 may be used on the electronic device 200 and to perform the functions herein.
The electronic device 200 may be a general purpose computer or a special purpose computer, both of which may be used to implement the issue recommendation method of the present application. Although only a single computer is shown, for convenience, the functions described herein may be implemented in a distributed fashion across multiple similar platforms to balance processing loads.
For example, the electronic device 200 may include a network port 210 connected to a network, one or more processors 220 for executing program instructions, a communication bus 230, and a different form of storage medium 240, such as a disk, ROM, or RAM, or any combination thereof. Illustratively, the computer platform may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The method of the present application may be implemented in accordance with these program instructions. The electronic device 200 also includes an Input/Output (I/O) interface 250 between the computer and other Input/Output devices (e.g., keyboard, display screen).
For ease of illustration, only one processor is depicted in the electronic device 200. However, it should be noted that the electronic device 200 in the present application may also comprise a plurality of processors, and thus the steps performed by one processor described in the present application may also be performed by a plurality of processors in combination or individually. For example, if the processor of the electronic device 200 executes steps A and B, it should be understood that steps A and B may also be executed by two different processors, or both executed by a single processor. For example, a first processor performs step A and a second processor performs step B, or the first processor and the second processor perform steps A and B together.
In combination with the above description of the service system and each electronic device in the service system, the following describes in detail a problem recommendation method provided by the present application in combination with specific embodiments.
Referring to fig. 3, a flowchart of a problem recommendation method provided in an embodiment of the present application is shown, where the problem recommendation method may be executed by a server in the service system shown in fig. 1, and a specific execution process includes the following steps:
step 301, after detecting that the request terminal initiates a session request, determining an accepted probability that each candidate problem in the candidate problem set is recommended to the request terminal based on the feature information of the request terminal and a pre-trained first prediction model common to different candidate problems.
Step 302, determining whether each candidate problem in the candidate problem set is a prediction result accepted by the request terminal or not based on the characteristic information of the request terminal and a pre-trained second prediction model matched with each candidate problem in the candidate problem set.
It should be noted that step 301 and step 302 are not required to be executed in the above order.
And step 303, screening at least one target candidate question with a prediction result representing acceptance of the request terminal from the candidate question set according to the prediction result corresponding to each candidate question.
And step 304, selecting a question recommended to the request terminal from the at least one target candidate question according to the received probability corresponding to the at least one target candidate question.
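Steps 301 to 304 above can be sketched as follows. The function name and the model interfaces are hypothetical assumptions for illustration, not the claimed implementation:

```python
def recommend_question(feature_vector, candidate_questions,
                       first_model, second_models, top_k=1):
    """Screen and rank candidate questions for one requesting terminal.

    Assumed (hypothetical) interfaces:
      - first_model.accept_probabilities(v): one accepted probability per
        candidate question (step 301);
      - each entry of second_models is a per-question classifier returning
        True when its candidate question is predicted to be accepted (step 302).
    """
    # Step 301: accepted probability of every candidate question.
    probs = first_model.accept_probabilities(feature_vector)
    # Step 302: per-question accepted / not-accepted prediction result.
    accepted = [m(feature_vector) for m in second_models]
    # Step 303: screen the target candidate questions predicted as accepted.
    targets = [(p, q) for p, q, a
               in zip(probs, candidate_questions, accepted) if a]
    # Step 304: recommend the target questions with the highest probabilities.
    targets.sort(key=lambda t: t[0], reverse=True)
    return [q for _, q in targets[:top_k]]
```

Note that a question with a high accepted probability (step 301) is still discarded in step 303 if its matched second prediction model predicts it will not be accepted.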
In the embodiment of the present application, the request end may be a service requester terminal or a service provider terminal. The service requester and the service provider differ between application scenarios: for example, in a taxi service system, the service requester is, for example, a passenger and the service provider is, for example, a driver, while in an online shopping service system, the service requester is, for example, a buyer purchasing goods and the service provider is, for example, a seller selling goods. This is not limited by the present application.
In an embodiment of the present application, the server may obtain the feature information of the request end after detecting that the request end initiates the session request. Wherein the session request is used for requesting to establish a session with the server so as to perform problem consultation. For example, a user of the request end, that is, the user may initiate a session request by triggering the trigger control of the online consultation function in the request end, or the user may initiate a session request by triggering the trigger control of the hot-line consultation function in the request end and then dialing a hot-line phone.
In an embodiment of the present application, when the requesting end is a service provider terminal, the characteristic information of the requesting end may include, but is not limited to, at least one of the following information:
(1) persona description information of the service provider.
In one example, when the service provider is a driver, the persona description information of the service provider may include, for example, one or more of the following: the driver's age, gender, registration time, usual departure time period, common departure place, and driving duration within a preset time period, as well as the average income, historical complaint records, order payment status, and the like within the preset time period.
(2) Order description information of the last processed order by the service provider.
In one example, the order description information of the order most recently processed by the service provider may include, for example, one or more of the following information: the amount of the driver's most recent order, the trip duration, the pickup duration, the payment status, whether the order has an additional fee, whether the fee is abnormal, the order start time, the order end time, etc.
(3) Person description information of a service requester of an order that was processed last.
In one example, the profile of the service requester of the most recently processed order may include, for example, one or more of the following: the passenger age, sex, occupation, taxi taking times in a preset time period, a common departure place and destination place, a taxi taking time period, a maximum taxi taking cost, a taxi taking cost average value, a history complained condition, a bill payment condition and the like.
(4) Order status information when the service provider initiates a session request.
In one example, the order status information at the time the service provider initiates the session request may include, for example, one or more of the following information: whether the driver is waiting to be allocated an order, whether the driver has accepted an order, how long ago the last order ended when the session was initiated, whether this is the driver's first accepted order of the day, etc.
(5) The location and time at which the service provider initiated the session request.
In an example, the location and time at which the service provider initiates the session request may include, for example, the geographic location at which the driver places a hotline call or an online consultation, and a corresponding point in time.
(6) Order summary information for a service provider over a first historical period of time.
In one example, the order summary information of the service provider over the first historical time period includes, for example, one or more of the following: the total order handling amount, the total order handling time, the total income, the actual income, the unreceived payment amount, the complained problem distribution, the complained amount, the complained problem set and the like in the first historical time period.
The first history period may be understood as a preset period before the current time. The preset time period may be configured according to actual requirements, and may be, for example, one week or one month.
As can be seen from the feature information given in the above example, the feature information of the request end is divided into three categories: static features, dynamic features, and statistical features. In a possible implementation, the static features may be pre-stored in the database of the service system shown in fig. 1, the dynamic features may be obtained by the server from the requesting end or other devices through the network in the service system shown in fig. 1, and the statistical features may be obtained by the server based on the data recorded in the database of the service system.
Of course, the feature information of the requesting end may also be feature information of a service requester terminal; the content of the feature information of the service requester terminal and the way it is obtained are based on the same technical concept as those of the service provider terminal, and the details are not repeated here.
In this embodiment of the application, when the server executes step 301, in the process of determining, based on the feature information of the requesting end and the pre-trained first prediction model common to different types of candidate questions, the accepted probability that each candidate question in the candidate question set is recommended to the requesting end, the server may first perform feature extraction on the feature information to obtain a feature vector. Because the feature information contains different types of data, the feature information may be preprocessed so that each type of data is represented numerically, converting the feature information into a multi-dimensional feature vector in which each dimension represents one type of data. In one example, the driver's age included in the feature information may be converted into a numerical value such as 18 to 60, and the time point at which the session request was initiated may be represented as, for example, "2018-01-01 08:01:30".
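The preprocessing described above can be sketched as follows. The field names (`age`, `has_pending_order`, `request_time`) and the chosen encodings are illustrative assumptions, not part of this embodiment:

```python
from datetime import datetime

def extract_feature_vector(info):
    """Convert mixed-type feature information into a numeric vector.
    Each dimension of the result represents one type of data."""
    vec = []
    vec.append(float(info["age"]))                         # numeric field, kept as a number
    vec.append(1.0 if info["has_pending_order"] else 0.0)  # boolean field -> 0/1
    # Timestamp -> seconds since midnight, one possible numeric encoding.
    t = datetime.strptime(info["request_time"], "%Y-%m-%d %H:%M:%S")
    vec.append(float(t.hour * 3600 + t.minute * 60 + t.second))
    return vec

print(extract_feature_vector({"age": 35,
                              "has_pending_order": True,
                              "request_time": "2018-01-01 08:01:30"}))
```

A real system would encode many more fields (order history, location, complaint statistics) in the same fashion, one dimension per type of data.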
Further, the extracted feature vector may be input into the pre-trained first prediction model, and the accepted probability that each candidate question in the candidate question set is recommended to the requesting end is output. Here, the accepted probability may be understood as the probability that the candidate question is the question the requesting end wants to consult about.
The first prediction model may be, for example, a Deep Neural Network (DNN) model. Referring to fig. 4, an exemplary description of a DNN model according to an embodiment of the present application is shown, where the DNN model includes an input layer, a hidden layer, and an output layer, where: the input layer, i.e., the first layer of the DNN model, may include a plurality of input nodes; for example, when the extracted feature vector includes 200-dimensional features, the number of input nodes may be 200. The output layer, i.e., the last layer of the DNN model, includes output nodes whose number depends on the kinds of questions included in the candidate question set; for example, when the candidate question set includes 10 candidate questions, the output layer may include 10 output nodes. The hidden layers are located between the input layer and the output layer; there may be multiple hidden layers, and only one is shown in fig. 4 for simplicity. The more hidden layers there are and the more nodes each hidden layer contains, the stronger the expressive ability of the first prediction model. The training process of the first prediction model will be described in detail below and is not repeated here.
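As a rough illustration of how such a DNN maps a feature vector to per-question probabilities, the sketch below runs one forward pass through a tiny fully connected network with random, untrained weights; the layer sizes are toy values standing in for the 200-input, 10-output example above:

```python
import math
import random

def dnn_forward(x, weights):
    """One forward pass: input -> hidden (ReLU) -> output (softmax
    over the candidate questions).  `weights` is a list of (W, b)
    pairs, one per layer."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = [sum(wi * xi for wi, xi in zip(row, h)) + bi
             for row, bi in zip(W, b)]
        if i < len(weights) - 1:              # ReLU on hidden layers only
            h = [max(0.0, v) for v in h]
    m = max(h)                                # numerically stable softmax
    e = [math.exp(v - m) for v in h]
    s = sum(e)
    return [v / s for v in e]

def layer(n_in, n_out):
    """Random weight matrix and zero bias for one layer (toy init)."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
net = [layer(4, 6), layer(6, 10)]             # 4 features -> 10 candidate questions
probs = dnn_forward([0.2, 1.0, 0.5, 0.3], net)
print(len(probs), round(sum(probs), 6))       # 10 probabilities summing to 1
```

Each output node's value is the accepted probability of one candidate question; training (described below) is what makes those values meaningful.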
The candidate question set may be obtained based on previously recorded questions that each requesting end requested responses to during sessions. In a possible implementation, the total number of times each type of question was requested by different requesting ends in the second historical time period may be counted, and the questions whose total counts meet a preset condition are taken as candidate questions to form the candidate question set. For example, questions whose total count exceeds a preset threshold are taken as candidate questions, or the total counts are arranged in descending order and the questions ranked in the top M are taken as candidate questions, where M is a positive integer.
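The count-and-filter construction of the candidate set can be sketched as follows; the question names are made up for illustration:

```python
from collections import Counter

def build_candidate_set(historical_questions, m=10):
    """Keep the M question types requested most often over the second
    historical time period (the descending-count variant in the text)."""
    counts = Counter(historical_questions)
    return [q for q, _ in counts.most_common(m)]

# Toy log of questions requested across many historical sessions.
log = ["fare dispute", "lost item", "fare dispute", "payment missing",
       "fare dispute", "lost item", "account locked"]
print(build_candidate_set(log, m=3))
```

The threshold variant is the same idea with `[q for q, c in counts.items() if c > threshold]` in place of `most_common`.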
In the embodiment of the application, in order to improve the accuracy of question recommendation, the feature information of the requesting end may also be input into the pre-trained second prediction model matched with each candidate question in the candidate question set, so as to obtain a prediction result of whether each candidate question in the candidate question set will be accepted by the requesting end. Each candidate question in the candidate question set is matched with one second prediction model, and each second prediction model is used to predict whether the question the requesting end requests a response to is its matched candidate question.
In one possible implementation, the feature vector extracted from the feature information may be input into a pre-trained second prediction model matched with each candidate question, and a prediction result indicating whether a question requested to respond by the request end is the candidate question may be output in the second prediction model matched with each candidate question.
Here, the second prediction model may be, for example, a Gradient Boosting Decision Tree (GBDT) model, which may be understood as an iterative decision tree algorithm: it comprises multiple decision trees, and the outputs of all the trees are accumulated to obtain the final classification result. In the embodiment of the present application, the final classification result is a binary classification result, that is, whether or not the question the requesting end requests a response to is the matched candidate question. Since each candidate question is matched with one second prediction model, before each second prediction model is put into use, it may be trained based on the training sample set corresponding to its candidate question; the training process will be described in detail below and is not repeated here.
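A minimal sketch of the GBDT scoring idea follows, with plain callables standing in for trained decision trees; the split rules, scores, and threshold are made up for illustration:

```python
import math

def gbdt_predict(feature_vec, trees, threshold=0.5):
    """Accumulate the scores of all trees, squash the sum with a
    sigmoid, and threshold it into a binary result: is this session
    about the matched candidate question or not."""
    raw = sum(tree(feature_vec) for tree in trees)
    prob = 1.0 / (1.0 + math.exp(-raw))
    return prob, prob >= threshold

# Two toy decision stumps standing in for trained regression trees.
trees = [
    lambda x: 0.8 if x[0] > 0.5 else -0.8,   # splits on feature 0
    lambda x: 0.4 if x[1] > 0.0 else -0.4,   # splits on feature 1
]
print(gbdt_predict([0.9, 1.0], trees))   # high accumulated score -> accepted
print(gbdt_predict([0.1, -1.0], trees))  # low accumulated score -> not accepted
```

A production system would use a trained implementation (e.g. a gradient-boosting library) rather than hand-written stumps; the point here is only the accumulate-then-classify structure.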
In this embodiment, after the first prediction model and the second prediction model are used to obtain the accepted probability and the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting end may first be screened from the candidate question set according to the prediction result corresponding to each candidate question.
Further, for the screened at least one target candidate problem, a problem recommended to the request end may be selected from the at least one target candidate problem according to the corresponding acceptance probability of the at least one target candidate problem.
In one possible implementation manner, a target candidate question with an acceptance probability higher than a preset probability value in the at least one target candidate question may be used as the question recommended to the requesting end.
In another possible implementation, the at least one target candidate question may be arranged in descending order of accepted probability, and the target candidate questions whose accepted probabilities rank in the top k may be taken as the questions recommended to the requesting end, where k is a positive integer.
The question recommended to the requesting end as determined by the above two implementations has two characteristics: the prediction result obtained from the second prediction model indicates that it will be accepted by the requesting end, and the accepted probability obtained from the first prediction model is higher than the preset probability value. Together these two characteristics indicate that the recommended question is highly likely to be accepted, so the prediction is more accurate.
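The two-stage selection just described (filter by the second model's prediction, then rank by the first model's accepted probability) can be sketched as:

```python
def recommend(candidates, accepted_probs, predictions, k=3, min_prob=None):
    """Keep candidates whose second-model prediction says 'accepted',
    then rank them by the first model's accepted probability and take
    the top k (optionally also requiring prob > min_prob)."""
    targets = [q for q in candidates if predictions[q]]
    if min_prob is not None:
        targets = [q for q in targets if accepted_probs[q] > min_prob]
    targets.sort(key=lambda q: accepted_probs[q], reverse=True)
    return targets[:k]

# Toy model outputs for four candidate questions.
probs = {"q1": 0.9, "q2": 0.2, "q3": 0.7, "q4": 0.6}
preds = {"q1": True, "q2": True, "q3": False, "q4": True}
print(recommend(["q1", "q2", "q3", "q4"], probs, preds, k=2))
```

Note that q3 is excluded despite its high probability, because the second model predicted it would not be accepted; this is exactly the double check described above.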
In addition, since the question placed first is usually the one the requesting end consults first, the accuracy of the first recommended question directly affects the efficiency of the consultation and the user experience of the requesting end. In the embodiment of the application, the candidate question whose prediction result indicates acceptance by the requesting end and whose accepted probability is highest may be placed first among the questions recommended by the server, and the preset prompting question may be placed after it.
Referring to fig. 5, a process of recommending the above problem in the embodiment of the present application is exemplarily described in conjunction with a specific application scenario.
Referring to fig. 5, assume an application scenario in which the requesting end initiates a session request to the server by dialing a hotline to consult about a question, the requesting end is a terminal used by a driver, and the candidate question set includes 10 types of candidate questions. The server may then perform the following steps:
In the first step, a session request initiated by the driver is detected.
In the second step, the feature information of the driver is acquired, and a feature vector is extracted from it.
In the third step, the feature vector is input into the first prediction model common to different candidate questions (i.e., the DNN model shown in fig. 5), and the accepted probabilities corresponding to the 10 candidate questions are output.
In the fourth step, the feature vector is input into the second prediction model matched with each candidate question in the candidate question set, and a prediction result of whether each candidate question is accepted by the driver is output.
In the fifth step, target candidate questions whose prediction results indicate acceptance by the driver are screened from the candidate question set according to the prediction result corresponding to each candidate question.
In the sixth step, according to the accepted probabilities corresponding to the target candidate questions, the questions whose accepted probability is higher than the preset value and/or ranked in the top N are selected from the target candidate questions as the questions recommended to the driver.
In addition, the preset prompting question may also be recommended to the driver, placed last among the recommended questions.
For example, the candidate questions top1 to top3, whose accepted probabilities are higher than the preset value and rank in the top three, may be selected as the questions recommended to the driver. In addition, a preset prompting question may also be set to ask the driver whether to consult candidate questions other than top1 to top3.
The following describes the training process of the two types of prediction models proposed in the embodiments of the present application with reference to specific embodiments.
First prediction model
In the embodiment of the present application, in order to train the first prediction model, a first sample training set for training the first prediction model needs to be generated first. In a possible implementation manner, historical session record information in the third historical time period may be obtained, where the historical session record information includes historical feature information of each request terminal when initiating a session request each time, and historical problems requested by each request terminal when initiating a session request each time. Then, a historical feature vector corresponding to each piece of historical feature information may be extracted, and each extracted historical feature vector is used as a training sample to form a first sample training set. Each training sample corresponds to one problem label, and different problem labels are used for identifying the historical problems corresponding to different historical feature vectors. In one example, the issue label may be identified by a number such as 1/2/3 ….
After obtaining the first sample training set, the first prediction model may be trained based on the first sample training set until it is determined that the training of the first prediction model is completed.
Referring to fig. 6, a schematic flowchart of training a first prediction model according to an embodiment of the present application is shown, including the following steps:
step 601, inputting a preset number of training samples in a first sample training set into a first prediction model, and respectively outputting a historical accepted probability that each candidate problem in a candidate problem set is recommended to a request end aiming at each input training sample.
Step 602, determining a candidate problem with the highest historical acceptance probability corresponding to each training sample.
Step 603, determining a first loss value of the training process in the current round by comparing the candidate problem with the highest historical accepted probability corresponding to each training sample and the problem label corresponding to each training sample.
In specific implementation, for each training sample, whether the candidate problem with the highest historical acceptance probability corresponding to the training sample is consistent with the problem identified by the problem label corresponding to the training sample may be compared, and if so, it is determined that the prediction of the training sample is accurate, and if not, it is determined that the prediction of the training sample is inaccurate. Through traversing all the training samples, a first loss value of the training process of the current round can be calculated, and the first loss value can reflect the prediction accuracy of the first prediction model.
And step 604, judging whether the first loss value of the current round of training is greater than a first set value.
If yes, go to step 605; if no, go to step 606.
Step 605, adjusting the model parameters of the first prediction model, returning to step 601, and performing the next round of training with the adjusted first prediction model.
Step 606, determining that the training of the first prediction model is completed.
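The loop of fig. 6 can be sketched generically as below; `model.predict`, `model.adjust`, and the toy `ThresholdModel` are assumed interfaces invented for illustration, and the loss here is a simple misclassification rate standing in for whatever loss function the real model uses:

```python
def train_until_converged(model, samples, labels, loss_threshold,
                          batch_size, max_rounds=1000):
    """Predict a batch, compute the loss, and keep adjusting the model
    parameters while the loss stays above the set value."""
    for _ in range(max_rounds):
        batch = list(zip(samples, labels))[:batch_size]
        wrong = sum(1 for x, y in batch if model.predict(x) != y)
        loss = wrong / len(batch)
        if loss <= loss_threshold:   # loss small enough: training done
            return True
        model.adjust(batch)          # otherwise adjust and train again
    return False                     # gave up after max_rounds

class ThresholdModel:
    """Toy one-parameter classifier used only to exercise the loop."""
    def __init__(self):
        self.t = 0.0
    def predict(self, x):
        return 1 if x > self.t else 0
    def adjust(self, batch):
        self.t += 0.1                # crude stand-in for a real update

m = ThresholdModel()
print(train_until_converged(m, [0.2, 0.3, 0.8, 0.9], [0, 0, 1, 1],
                            loss_threshold=0.0, batch_size=4))
```

For the actual first prediction model, `predict` would be the DNN forward pass, `adjust` a gradient step, and the loss something like cross-entropy; the control flow stays the same.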
(II) second prediction model
In the embodiment of the present application, since each candidate question in the candidate question set is matched with one second prediction model, when the second prediction models are trained, the second prediction model matched with each candidate question may be trained separately. In order to train the second prediction models, a second sample training set corresponding to each candidate question needs to be generated first, and then the second prediction model matched with each candidate question is trained based on its corresponding second sample training set until the training of each matched second prediction model is determined to be complete.
Referring to fig. 7, which is a schematic flowchart of generating a second sample training set provided in the embodiment of the present application, for a first candidate question in the candidate question set, where the first candidate question is any one candidate question in the candidate question set, the following operations are performed:
step 701, screening out first historical characteristic information of a first request end and second historical characteristic information of a second request end from historical conversation record information.
The first request end represents a request end which requests the response of the historical questions as first candidate questions, and the second request end represents a request end which requests the response of the historical questions not as the first candidate questions;
and extracting a first historical feature vector corresponding to each piece of first historical feature information, and extracting a second historical feature vector corresponding to each piece of second historical feature information.
Step 702, using each extracted first historical feature vector as a positive training sample to form a positive sample training set, and using each extracted second historical feature vector as a negative training sample to form a negative sample training set.
Each positive training sample corresponds to one positive label, each negative training sample corresponds to one negative label, the positive label indicates that the question requested to respond by the request terminal is a first candidate question, and the negative label indicates that the question requested to respond by the request terminal is not the first candidate question.
And step 703, forming a second sample training set corresponding to the first candidate problem by using the positive sample training set and the negative sample training set.
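Steps 701 to 703 amount to partitioning the historical sessions by whether the asked question equals the candidate; a minimal sketch follows, where the `(feature_vector, asked_question)` record layout is an assumption:

```python
def build_second_training_set(history, candidate_question):
    """Split historical sessions into positive samples (the session's
    question was this candidate, label 1) and negative samples (it was
    not, label 0), then combine them into one training set."""
    positives = [(vec, 1) for vec, q in history if q == candidate_question]
    negatives = [(vec, 0) for vec, q in history if q != candidate_question]
    return positives + negatives

history = [([0.1, 0.2], "lost item"),
           ([0.4, 0.9], "fare dispute"),
           ([0.3, 0.1], "lost item")]
print(build_second_training_set(history, "lost item"))
```

Running this once per candidate question yields the per-candidate training sets that the matched second prediction models are trained on.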
After generating the second sample training set corresponding to each candidate question, the second prediction model matched with each candidate question may be trained. For the second prediction model matched with the first candidate question, as shown in fig. 8, the following training process is performed:
step 801, obtaining a first preset number of positive training samples and a second preset number of negative training samples from a second sample training set corresponding to the first candidate problem.
The first preset number and the second preset number may be the same or different; if they are different, the difference between them should be small so that the samples remain balanced.
Step 802, inputting a first preset number of positive training samples and a second preset number of negative training samples into a second prediction model matched with the first candidate problem, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample.
And the classification result output by the second prediction model represents whether the question requested to be responded by the request terminal is the first candidate question or not.
And 803, determining a second loss value of the training process in the current round by comparing the classification result and the positive label corresponding to each positive training sample and comparing the classification result and the negative label corresponding to each negative training sample.
In specific implementation, for each positive training sample, whether the classification result corresponding to the positive training sample is consistent with the result identified by the positive label corresponding to the positive training sample may be compared, and if so, it is determined that the prediction of the positive training sample is accurate, and if not, it is determined that the prediction of the positive training sample is inaccurate. For each negative training sample, the above process may also be referenced to determine whether the prediction for each negative training sample is accurate. Through traversing all the positive training samples and the negative training samples, a second loss value of the training process of the current round can be calculated, and the second loss value can reflect the prediction accuracy of the second prediction model.
And step 804, judging whether the second loss value of the training process is larger than a second set value.
If yes, go to step 805; if the determination result is negative, go to step 806.
And step 805, adjusting model parameters of the second prediction model matched with the first candidate question, returning to step 801, and performing the next round of training process by using the adjusted second prediction model matched with the first candidate question.
Step 806, determining that the training of the second prediction model matched with the first candidate problem is completed.
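The balanced sampling of step 801 can be sketched as follows; the helper name and record layout are illustrative assumptions. Keeping the positive count close to the negative count avoids the heavy class skew that a rare candidate question would otherwise produce:

```python
import random

def balanced_batch(positives, negatives, n_pos, n_neg):
    """Draw a near-balanced batch of positive and negative samples for
    one round of second-model training (fig. 8, step 801)."""
    return (random.sample(positives, min(n_pos, len(positives))) +
            random.sample(negatives, min(n_neg, len(negatives))))

random.seed(1)
pos = [("p%d" % i, 1) for i in range(20)]    # 20 positive samples
neg = [("n%d" % i, 0) for i in range(200)]   # 200 negative samples
batch = balanced_batch(pos, neg, 16, 16)
labels = [y for _, y in batch]
print(len(batch), labels.count(1), labels.count(0))
```

Steps 802 to 806 then follow the same predict / compare-to-label / adjust-while-loss-exceeds-threshold loop as the first model, run once per candidate question's matched model.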
In this embodiment of the application, the server may obtain the feature information of the requesting end after the requesting end initiates a session request, and may then use the first prediction model common to different types of candidate questions and the second prediction model matched with each candidate question in the candidate question set to predict, respectively, the accepted probability of recommending each candidate question and the prediction result of whether each candidate question will be accepted by the requesting end. Further, at least one target candidate question predicted to be accepted by the requesting end may be screened from the candidate question set, and the question finally recommended to the user is determined according to the accepted probabilities of these target candidate questions. Compared with a scheme in which candidate questions are configured in advance, this scheme can, based on the feature information of each requesting end and the two types of prediction models, screen from the candidate question set the questions most likely to be accepted by the requesting end and recommend them to it.
Based on the same technical concept, the embodiment of the present application further provides a problem recommendation device corresponding to the problem recommendation method, and as the principle of solving the problem of the device in the embodiment of the present application is similar to the problem recommendation method in the embodiment of the present application, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Referring to fig. 9, which is a schematic structural diagram of an issue recommendation apparatus provided in an embodiment of the present application, the apparatus 90 includes:
the first determining module 91 is configured to determine, after it is detected that a request end initiates a session request, an accepted probability that each candidate problem in a candidate problem set is recommended to the request end based on feature information of the request end and a first prediction model that is common to different types of candidate problems trained in advance;
a second determining module 92, configured to determine, based on the feature information of the request end and a pre-trained second prediction model matched with each candidate problem in the candidate problem set, a prediction result of whether each candidate problem in the candidate problem set is accepted by the request end;
a first screening module 93, configured to screen, according to a prediction result corresponding to each candidate problem, at least one target candidate problem whose prediction result represents acceptance by a request end from the candidate problem set;
and a second screening module 94, configured to select, according to the received probability corresponding to the at least one target candidate question, a question recommended to the requesting end from the at least one target candidate question.
In one possible design, when selecting a question recommended to the requesting end from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question, the second screening module 94 is specifically configured to:
take the target candidate question whose accepted probability is higher than the preset probability value among the at least one target candidate question as the question recommended to the requesting end.
In one possible design, when selecting a question recommended to the requesting end from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question, the second screening module 94 is specifically configured to:
arrange the at least one target candidate question in descending order of accepted probability;
and take the target candidate questions whose accepted probabilities rank in the top k among the at least one target candidate question as the questions recommended to the requesting end, where k is a positive integer.
In one possible design, the first determining module 91, when determining the probability of being accepted to recommend each candidate problem in the candidate problem set to the requesting end based on the feature information of the requesting end and a first prediction model that is common to different types of candidate problems trained in advance, is specifically configured to:
extracting the features of the feature information to obtain a feature vector;
and inputting the feature vector into a pre-trained first prediction model, and outputting the accepted probability of each candidate question in the candidate question set recommended to the request terminal.
In one possible design, the second determining module 92, when determining the prediction result of whether each candidate problem in the candidate problem set is accepted by the requesting end based on the feature information of the requesting end and the pre-trained second prediction model matched with each candidate problem in the candidate problem set, is specifically configured to:
extracting the features of the feature information to obtain a feature vector;
and inputting the feature vector extracted from the feature information into a pre-trained second prediction model matched with each candidate question in the candidate question set, and outputting a prediction result of whether each candidate question in the candidate question set is accepted by a request end.
In one possible design, the question recommended to the request end further includes a preset prompting question, and the preset prompting question is used for prompting whether the request end needs to request for responding to other questions.
In one possible design, before detecting that the request end initiates the session request, the first determining module 91 is further configured to:
counting the total times of each kind of problems requested to respond by different request terminals in a second historical time period;
and taking the counted problems with the total times meeting the preset conditions as candidate problems to form the candidate problem set.
In one possible design, when the requesting end is a service provider terminal, the feature information includes at least one of the following information:
person description information of the service provider;
order description information of an order which is processed last time by the service provider;
the person description information of the service requester of the most recently processed order;
the service provider initiates order state information when the session request is sent;
the location and time at which the service provider initiated the session request;
the service provider aggregates information for orders over a first historical period of time.
In one possible design, the apparatus further includes:
a first model training module 95, configured to obtain historical session record information in a third historical time period, where the historical session record information includes historical feature information of each request terminal when initiating a session request each time and historical problems requested by each request terminal when initiating a session request each time;
extracting a historical feature vector corresponding to each piece of historical feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to one problem label, and different problem labels are used for identifying historical problems corresponding to different historical feature vectors respectively;
training the first predictive model based on the first sample training set until it is determined that the training of the first predictive model is complete.
In one possible design, the first model training module 95, when training the first prediction model based on the first sample training set until it is determined that the training of the first prediction model is completed, is specifically configured to:
inputting a preset number of training samples in the first sample training set into the first prediction model, respectively outputting a history accepted probability that each candidate problem in the candidate problem set is recommended to the request terminal for each input training sample, and determining a candidate problem corresponding to each training sample and having the highest history accepted probability;
determining a first loss value of the training process in the current round by comparing the candidate problem with the highest historical acceptance probability corresponding to each training sample with the problem label corresponding to each training sample;
and when the first loss value is larger than a first set value, adjusting model parameters of the first prediction model, and performing the next round of training process by using the adjusted first prediction model until the determined first loss value is smaller than or equal to the first set value, and determining that the training of the first prediction model is finished.
In one possible design, the apparatus further includes:
a second model training module 96, configured to generate, for each candidate problem in the candidate problem set, a second prediction model matching with each candidate problem, and generate a second sample training set corresponding to each candidate problem;
and training the second prediction model matched with each candidate problem based on the second sample training set corresponding to each candidate problem until the second prediction model matched with each candidate problem is determined to be trained.
In one possible design, the second model training module 96, when generating the second sample training set corresponding to each candidate problem, is specifically configured to:
for a first candidate question in the candidate sample set, the first candidate question being any one of the candidate questions in the candidate sample set, performing the following operations:
screening out first historical characteristic information of a first request end and second historical characteristic information of a second request end from the historical conversation record information; the first request end represents that the historical problem requested to be responded is the request end of the first candidate problem, and the second request end represents that the historical problem requested to be responded is not the request end of the first candidate problem;
extracting a first historical feature vector corresponding to each piece of first historical feature information, and extracting a second historical feature vector corresponding to each piece of second historical feature information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set;
forming a second sample training set corresponding to the first candidate problem by using the positive sample training set and the negative sample training set;
each positive training sample corresponds to a positive label, each negative training sample corresponds to a negative label, the positive label indicates that the question requested to be responded by the request terminal is the first candidate question, and the negative label indicates that the question requested to be responded by the request terminal is not the first candidate question.
In one possible design, the second model training module 96, when training the second prediction model matched with each candidate question based on the second sample training set corresponding to each candidate question until it is determined that the training of the second prediction model matched with each candidate question is complete, is specifically configured to:
for the second prediction model matched with the first candidate question, execute the following training process:
acquire a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
input the first preset number of positive training samples and the second preset number of negative training samples into the second prediction model matched with the first candidate question, and output a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample, where the classification result indicates whether the question the requesting terminal requested to be answered is the first candidate question;
determine a second loss value for the current round of training by comparing the classification result corresponding to each positive training sample with the positive label, and comparing the classification result corresponding to each negative training sample with the negative label; and
when the second loss value is greater than a second set value, adjust the model parameters of the second prediction model matched with the first candidate question and perform the next round of training with the adjusted model, until the determined second loss value is less than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate question is determined to be complete.
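The loss-thresholded training loop above can be sketched as a simple binary classifier trained until its loss falls to the second set value. The logistic model, learning rate, and gradient step are assumptions for illustration; the embodiment does not fix a particular model family.

```python
# Sketch: train a per-question binary model round by round, stopping once the
# round's loss is at or below the second set value. Logistic regression is an
# illustrative stand-in for the matched second prediction model.
import math

def train_binary_model(samples, second_set_value=0.35, lr=0.5, max_rounds=5000):
    """samples: list of (feature_vector, label), label 1 = positive, 0 = negative."""
    dim = len(samples[0][0])
    weights, bias = [0.0] * dim, 0.0
    loss = float("inf")
    for _ in range(max_rounds):
        # One round: classification results and the average log loss.
        loss, grad_w, grad_b = 0.0, [0.0] * dim, 0.0
        for x, y in samples:
            z = sum(w * v for w, v in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))          # classification result
            loss += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
            for i, v in enumerate(x):
                grad_w[i] += (p - y) * v
            grad_b += p - y
        loss /= len(samples)
        if loss <= second_set_value:                 # second loss value small enough:
            break                                    # training is complete
        # Otherwise adjust the model parameters and run the next round.
        weights = [w - lr * g / len(samples) for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b / len(samples)
    return weights, bias, loss
```

In this sketch each pass over the sampled batch is one "round", mirroring the compare-loss-then-adjust cycle in the text.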
For the functions of the above modules, reference may be made to the descriptions in the method embodiments, which are not repeated here.
Based on the same technical concept, an embodiment of the present application further provides an electronic device. Referring to fig. 10, a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application includes a processor 101, a memory 102, and a bus 103. The memory 102 is used for storing execution instructions and includes an internal memory 1021 and an external memory 1022. The internal memory 1021 temporarily stores operation data of the processor 101 and data exchanged with the external memory 1022, such as a hard disk; the processor 101 exchanges data with the external memory 1022 through the internal memory 1021. When the electronic device 100 runs, the processor 101 communicates with the memory 102 through the bus 103, so that the processor 101 executes the following instructions:
after detecting that a requesting terminal initiates a session request, determining, based on feature information of the requesting terminal and a pre-trained first prediction model common to different kinds of candidate questions, the accepted probability of recommending each candidate question in a candidate question set to the requesting terminal;
determining, based on the feature information of the requesting terminal and a pre-trained second prediction model matched with each candidate question in the candidate question set, a prediction result of whether each candidate question in the candidate question set will be accepted by the requesting terminal;
screening, from the candidate question set according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal; and
selecting the question to be recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
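The four instructions above can be sketched end to end. The `first_model` and per-question `second_models` below are stand-ins for the pre-trained predictors; their call signatures are assumptions for the sketch.

```python
# Sketch of the four-step recommendation flow: general accepted probabilities,
# per-question accept/reject predictions, screening, then top-k selection.

def recommend(feature_vector, candidate_questions, first_model, second_models, k=2):
    # Step 1: the common first model yields an accepted probability per candidate.
    accepted_prob = first_model(feature_vector)            # {question: probability}
    # Step 2: each matched second model predicts accept / not accept.
    predictions = {q: second_models[q](feature_vector) for q in candidate_questions}
    # Step 3: keep only target candidates whose prediction indicates acceptance.
    targets = [q for q in candidate_questions if predictions[q]]
    # Step 4: pick the top-k targets by accepted probability.
    targets.sort(key=lambda q: accepted_prob[q], reverse=True)
    return targets[:k]

# Usage with toy stand-in models
questions = ["billing", "routing", "payout"]
first = lambda x: {"billing": 0.7, "routing": 0.5, "payout": 0.9}
second = {
    "billing": lambda x: True,
    "routing": lambda x: False,
    "payout": lambda x: True,
}
print(recommend([0.1], questions, first, second, k=2))  # ['payout', 'billing']
```

The screening step (the per-question binary verdicts) runs before the probability ranking, so a high-probability question rejected by its matched model is never recommended.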
For the specific processing flow of the processor 101, reference may be made to the description of the above method embodiments, which is not repeated here.
Based on the same technical concept, embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, performs the steps of the question recommendation method.
Specifically, the storage medium can be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the question recommendation method can be executed, thereby better meeting the consultation needs of users at different requesting terminals, reducing the time users wait when consulting questions, and improving consultation efficiency.
Based on the same technical concept, embodiments of the present application further provide a computer program product, which includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the question recommendation method; for specific implementation, reference may be made to the above method embodiments, which are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the method embodiments, which are not detailed in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into modules is merely a logical division, and other divisions are possible in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or modules through communication interfaces, and may be electrical, mechanical or in another form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

1. A question recommendation method, comprising:
after detecting that a requesting terminal initiates a session request, determining, based on feature information of the requesting terminal and a pre-trained first prediction model common to different kinds of candidate questions, the accepted probability of recommending each candidate question in a candidate question set to the requesting terminal;
determining, based on the feature information of the requesting terminal and a pre-trained second prediction model matched with each candidate question in the candidate question set, a prediction result of whether each candidate question in the candidate question set will be accepted by the requesting terminal;
screening, from the candidate question set according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal; and
selecting the question to be recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
2. The method of claim 1, wherein the selecting the question recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question comprises:
taking, among the at least one target candidate question, a target candidate question whose accepted probability is higher than a preset probability value as the question recommended to the requesting terminal.
3. The method of claim 1, wherein the selecting the question recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question comprises:
sorting the at least one target candidate question in descending order of accepted probability; and
taking the target candidate questions whose accepted probabilities rank in the top k among the at least one target candidate question as the questions recommended to the requesting terminal, where k is a positive integer.
4. The method of claim 1, wherein the determining, based on the feature information of the requesting terminal and the pre-trained first prediction model common to different kinds of candidate questions, the accepted probability of recommending each candidate question in the candidate question set to the requesting terminal comprises:
extracting features from the feature information to obtain a feature vector; and
inputting the feature vector into the pre-trained first prediction model, and outputting the accepted probability of recommending each candidate question in the candidate question set to the requesting terminal.
5. The method of claim 1, wherein the determining, based on the feature information of the requesting terminal and the pre-trained second prediction model matched with each candidate question in the candidate question set, a prediction result of whether each candidate question in the candidate question set is accepted by the requesting terminal comprises:
extracting features from the feature information to obtain a feature vector; and
inputting the feature vector extracted from the feature information into the pre-trained second prediction model matched with each candidate question in the candidate question set, and outputting a prediction result of whether each candidate question in the candidate question set is accepted by the requesting terminal.
6. The method of claim 2, wherein the questions recommended to the requesting terminal further include a preset prompt question for prompting the requesting terminal whether it requests responses to other questions.
7. The method of claim 1, wherein before detecting that the requesting terminal initiates the session request, the method further comprises:
counting the total number of times each kind of question was requested by different requesting terminals within a second historical time period; and
taking the counted questions whose total counts meet a preset condition as candidate questions to form the candidate question set.
8. The method according to claim 1, wherein when the requesting terminal is a service provider terminal, the feature information includes at least one of the following:
person description information of the service provider;
order description information of the order most recently processed by the service provider;
person description information of the service requester of the most recently processed order;
order state information at the time the service provider initiated the session request;
the location and time at which the service provider initiated the session request; and
aggregated order information of the service provider over a first historical time period.
9. The method of claim 1, further comprising:
acquiring historical session record information within a third historical time period, wherein the historical session record information includes historical feature information of each requesting terminal each time it initiated a session request, and the historical question requested by each requesting terminal each time it initiated a session request;
extracting a historical feature vector corresponding to each piece of historical feature information;
taking each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to one question label, and different question labels respectively identify the historical questions corresponding to different historical feature vectors; and
training the first prediction model based on the first sample training set until it is determined that the training of the first prediction model is complete.
10. The method of claim 9, wherein the training the first prediction model based on the first sample training set until it is determined that the training of the first prediction model is complete comprises:
inputting a preset number of training samples in the first sample training set into the first prediction model, outputting, for each input training sample, the historical accepted probability of recommending each candidate question in the candidate question set to the requesting terminal, and determining, for each training sample, the candidate question with the highest historical accepted probability;
determining a first loss value for the current round of training by comparing the candidate question with the highest historical accepted probability corresponding to each training sample with the question label corresponding to that training sample; and
when the first loss value is greater than a first set value, adjusting the model parameters of the first prediction model and performing the next round of training with the adjusted first prediction model, until the determined first loss value is less than or equal to the first set value, at which point the training of the first prediction model is determined to be complete.
11. The method of claim 1, further comprising:
for each candidate question in the candidate question set, generating a second prediction model matched with that candidate question, and generating a second sample training set corresponding to that candidate question; and
training the second prediction model matched with each candidate question based on the second sample training set corresponding to that candidate question, until it is determined that the training of the second prediction model matched with each candidate question is complete.
12. The method of claim 11, wherein generating the second sample training set corresponding to each candidate question comprises:
for a first candidate question in the candidate question set, the first candidate question being any one of the candidate questions in the set, performing the following operations:
screening out first historical feature information of first requesting terminals and second historical feature information of second requesting terminals from the historical session record information, where a first requesting terminal is a requesting terminal whose requested historical question is the first candidate question, and a second requesting terminal is a requesting terminal whose requested historical question is not the first candidate question;
extracting a first historical feature vector corresponding to each piece of first historical feature information, and extracting a second historical feature vector corresponding to each piece of second historical feature information;
taking each extracted first historical feature vector as a positive training sample to form a positive sample training set, and taking each extracted second historical feature vector as a negative training sample to form a negative sample training set; and
forming the second sample training set corresponding to the first candidate question from the positive sample training set and the negative sample training set;
wherein each positive training sample corresponds to a positive label, each negative training sample corresponds to a negative label, the positive label indicates that the question the requesting terminal requested to be answered is the first candidate question, and the negative label indicates that it is not.
13. The method of claim 12, wherein the training the second prediction model matched with each candidate question based on the second sample training set corresponding to each candidate question, until it is determined that the training of the second prediction model matched with each candidate question is complete, comprises:
for the second prediction model matched with the first candidate question, executing the following training process:
acquiring a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
inputting the first preset number of positive training samples and the second preset number of negative training samples into the second prediction model matched with the first candidate question, and outputting a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample, wherein the classification result indicates whether the question the requesting terminal requested to be answered is the first candidate question;
determining a second loss value for the current round of training by comparing the classification result corresponding to each positive training sample with the positive label, and comparing the classification result corresponding to each negative training sample with the negative label; and
when the second loss value is greater than a second set value, adjusting the model parameters of the second prediction model matched with the first candidate question and performing the next round of training with the adjusted model, until the determined second loss value is less than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate question is determined to be complete.
14. A question recommendation apparatus, comprising:
a first determination module, configured to, after detecting that a requesting terminal initiates a session request, determine, based on feature information of the requesting terminal and a pre-trained first prediction model common to different kinds of candidate questions, the accepted probability of recommending each candidate question in a candidate question set to the requesting terminal;
a second determination module, configured to determine, based on the feature information of the requesting terminal and a pre-trained second prediction model matched with each candidate question in the candidate question set, a prediction result of whether each candidate question in the candidate question set will be accepted by the requesting terminal;
a first screening module, configured to screen, from the candidate question set according to the prediction result corresponding to each candidate question, at least one target candidate question whose prediction result indicates acceptance by the requesting terminal; and
a second screening module, configured to select the question recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question.
15. The apparatus of claim 14, wherein the second screening module, when selecting the question recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question, is specifically configured to:
take, among the at least one target candidate question, a target candidate question whose accepted probability is higher than a preset probability value as the question recommended to the requesting terminal.
16. The apparatus of claim 14, wherein the second screening module, when selecting the question recommended to the requesting terminal from the at least one target candidate question according to the accepted probability corresponding to the at least one target candidate question, is specifically configured to:
sort the at least one target candidate question in descending order of accepted probability; and
take the target candidate questions whose accepted probabilities rank in the top k among the at least one target candidate question as the questions recommended to the requesting terminal, where k is a positive integer.
17. The apparatus of claim 14, wherein the first determination module, when determining, based on the feature information of the requesting terminal and the pre-trained first prediction model common to different kinds of candidate questions, the accepted probability of recommending each candidate question in the candidate question set to the requesting terminal, is specifically configured to:
extract features from the feature information to obtain a feature vector; and
input the feature vector into the pre-trained first prediction model, and output the accepted probability of recommending each candidate question in the candidate question set to the requesting terminal.
18. The apparatus of claim 14, wherein the second determination module, when determining, based on the feature information of the requesting terminal and the pre-trained second prediction model matched with each candidate question in the candidate question set, the prediction result of whether each candidate question in the candidate question set is accepted by the requesting terminal, is specifically configured to:
extract features from the feature information to obtain a feature vector; and
input the feature vector extracted from the feature information into the pre-trained second prediction model matched with each candidate question in the candidate question set, and output a prediction result of whether each candidate question in the candidate question set is accepted by the requesting terminal.
19. The apparatus of claim 15, wherein the questions recommended to the requesting terminal further include a preset prompt question for prompting the requesting terminal whether it requests responses to other questions.
20. The apparatus of claim 14, wherein the first determination module, before detecting that the requesting terminal initiates the session request, is further configured to:
count the total number of times each kind of question was requested by different requesting terminals within a second historical time period; and
take the counted questions whose total counts meet a preset condition as candidate questions to form the candidate question set.
21. The apparatus according to claim 14, wherein when the requesting terminal is a service provider terminal, the feature information includes at least one of the following:
person description information of the service provider;
order description information of the order most recently processed by the service provider;
person description information of the service requester of the most recently processed order;
order state information at the time the service provider initiated the session request;
the location and time at which the service provider initiated the session request; and
aggregated order information of the service provider over a first historical time period.
22. The apparatus of claim 14, further comprising:
a first model training module, configured to acquire historical session record information within a third historical time period, wherein the historical session record information includes historical feature information of each requesting terminal each time it initiated a session request, and the historical question requested by each requesting terminal each time it initiated a session request;
extract a historical feature vector corresponding to each piece of historical feature information;
take each extracted historical feature vector as a training sample to form a first sample training set, wherein each training sample corresponds to one question label, and different question labels respectively identify the historical questions corresponding to different historical feature vectors; and
train the first prediction model based on the first sample training set until it is determined that the training of the first prediction model is complete.
23. The apparatus of claim 22, wherein the first model training module, when training the first prediction model based on the first sample training set until it is determined that the training of the first prediction model is complete, is specifically configured to:
input a preset number of training samples in the first sample training set into the first prediction model, output, for each input training sample, the historical accepted probability of recommending each candidate question in the candidate question set to the requesting terminal, and determine, for each training sample, the candidate question with the highest historical accepted probability;
determine a first loss value for the current round of training by comparing the candidate question with the highest historical accepted probability corresponding to each training sample with the question label corresponding to that training sample; and
when the first loss value is greater than a first set value, adjust the model parameters of the first prediction model and perform the next round of training with the adjusted first prediction model, until the determined first loss value is less than or equal to the first set value, at which point the training of the first prediction model is determined to be complete.
24. The apparatus of claim 14, further comprising:
a second model training module, configured to, for each candidate question in the candidate question set, generate a second prediction model matched with that candidate question and generate a second sample training set corresponding to that candidate question; and
train the second prediction model matched with each candidate question based on the second sample training set corresponding to that candidate question, until it is determined that the training of the second prediction model matched with each candidate question is complete.
25. The apparatus of claim 24, wherein the second model training module, when generating the second sample training set corresponding to each candidate question, is specifically configured to:
for a first candidate question in the candidate question set, the first candidate question being any one of the candidate questions in the set, perform the following operations:
screen out first historical feature information of first requesting terminals and second historical feature information of second requesting terminals from the historical session record information, where a first requesting terminal is a requesting terminal whose requested historical question is the first candidate question, and a second requesting terminal is a requesting terminal whose requested historical question is not the first candidate question;
extract a first historical feature vector corresponding to each piece of first historical feature information, and extract a second historical feature vector corresponding to each piece of second historical feature information;
take each extracted first historical feature vector as a positive training sample to form a positive sample training set, and take each extracted second historical feature vector as a negative training sample to form a negative sample training set; and
form the second sample training set corresponding to the first candidate question from the positive sample training set and the negative sample training set;
wherein each positive training sample corresponds to a positive label, each negative training sample corresponds to a negative label, the positive label indicates that the question the requesting terminal requested to be answered is the first candidate question, and the negative label indicates that it is not.
26. The apparatus of claim 25, wherein the second model training module, when training the second prediction model matched with each candidate question based on the second sample training set corresponding to each candidate question, until it is determined that the training of the second prediction model matched with each candidate question is complete, is specifically configured to:
for the second prediction model matched with the first candidate question, execute the following training process:
acquire a first preset number of positive training samples and a second preset number of negative training samples from the second sample training set corresponding to the first candidate question;
input the first preset number of positive training samples and the second preset number of negative training samples into the second prediction model matched with the first candidate question, and output a classification result corresponding to each positive training sample and a classification result corresponding to each negative training sample, wherein the classification result indicates whether the question the requesting terminal requested to be answered is the first candidate question;
determine a second loss value for the current round of training by comparing the classification result corresponding to each positive training sample with the positive label, and comparing the classification result corresponding to each negative training sample with the negative label; and
when the second loss value is greater than a second set value, adjust the model parameters of the second prediction model matched with the first candidate question and perform the next round of training with the adjusted model, until the determined second loss value is less than or equal to the second set value, at which point the training of the second prediction model matched with the first candidate question is determined to be complete.
27. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor, wherein when the electronic device operates, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the question recommendation method of any one of claims 1 to 13.
28. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the question recommendation method of any one of claims 1 to 13.
CN201811458062.1A 2018-11-30 2018-11-30 Question recommending method and device Active CN111259119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458062.1A CN111259119B (en) 2018-11-30 2018-11-30 Question recommending method and device

Publications (2)

Publication Number Publication Date
CN111259119A true CN111259119A (en) 2020-06-09
CN111259119B CN111259119B (en) 2023-05-26

Family

ID=70944816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458062.1A Active CN111259119B (en) 2018-11-30 2018-11-30 Question recommending method and device

Country Status (1)

Country Link
CN (1) CN111259119B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103229223A (en) * 2010-09-28 2013-07-31 国际商业机器公司 Providing answers to questions using multiple models to score candidate answers
CN104965890A (en) * 2015-06-17 2015-10-07 深圳市腾讯计算机系统有限公司 Advertisement recommendation method and apparatus
CN106682387A (en) * 2016-10-26 2017-05-17 百度国际科技(深圳)有限公司 Method and device used for outputting information
CN107451199A (en) * 2017-07-05 2017-12-08 阿里巴巴集团控股有限公司 Method for recommending problem and device, equipment
CN107463704A (en) * 2017-08-16 2017-12-12 北京百度网讯科技有限公司 Searching method and device based on artificial intelligence
CN107977411A (en) * 2017-11-21 2018-05-01 腾讯科技(成都)有限公司 Group recommending method, device, storage medium and server
WO2018184395A1 (en) * 2017-04-07 2018-10-11 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for activity recommendation

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107688641A (en) * 2017-08-28 2018-02-13 江西博瑞彤芸科技有限公司 One kind puts question to management method and system
CN107688641B (en) * 2017-08-28 2021-12-28 江西博瑞彤芸科技有限公司 Question management method and system
CN112529602A (en) * 2020-12-23 2021-03-19 北京嘀嘀无限科技发展有限公司 Data processing method and device, readable storage medium and electronic equipment
CN112885175A (en) * 2021-01-15 2021-06-01 杭州安恒信息安全技术有限公司 Information security question generation method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN111259119B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
TWI676783B (en) Method and system for estimating time of arrival
US20200051193A1 (en) Systems and methods for allocating orders
CN111353092B (en) Service pushing method, device, server and readable storage medium
JP2021504850A (en) Systems and methods for charging electric vehicles
CN109791731B (en) Method and system for estimating arrival time
TWI724958B (en) Systems, methods, and computer readable media for online to offline service
CN109416823A (en) System and method for determining driver safety point
CN111105120B (en) Work order processing method and device
CN110910180B (en) Information pushing method and device, electronic equipment and storage medium
CN111105251A (en) Information pushing method and device
CN111259119B (en) Question recommending method and device
CN111316308A (en) System and method for identifying wrong order requests
CN111104585B (en) Question recommending method and device
CN111367575A (en) User behavior prediction method and device, electronic equipment and storage medium
CN111433795A (en) System and method for determining estimated arrival time of online-to-offline service
US20210042873A1 (en) Systems and methods for distributing a request
CN110750709A (en) Service recommendation method and device
CN111831967A (en) Store arrival identification method and device, electronic equipment and medium
CN111489214A (en) Order allocation method, condition setting method and device and electronic equipment
CN111259229B (en) Question recommending method and device
CN111353093B (en) Problem recommendation method, device, server and readable storage medium
CN111274471B (en) Information pushing method, device, server and readable storage medium
CN111291253A (en) Model training method, consultation recommendation method, device and electronic equipment
CN111275062A (en) Model training method, device, server and computer readable storage medium
CN111695919B (en) Evaluation data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant