CN113869596A - Task prediction processing method, device, product and medium - Google Patents

Task prediction processing method, device, product and medium

Info

Publication number
CN113869596A
CN113869596A
Authority
CN
China
Prior art keywords
task
processed
candidate
sample
performer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111186825.3A
Other languages
Chinese (zh)
Inventor
陈杰
李家旺
陈高均
潘昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Fangjianghu Technology Co Ltd
Original Assignee
Beijing Fangjianghu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fangjianghu Technology Co Ltd filed Critical Beijing Fangjianghu Technology Co Ltd
Priority to CN202111186825.3A priority Critical patent/CN113869596A/en
Publication of CN113869596A publication Critical patent/CN113869596A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Resources & Organizations (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the disclosure provide a task prediction processing method, device, product and medium. For a task to be processed and each candidate task performer among a plurality of candidate task performers, static features and time sequence features are acquired. The static features include task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed; the time sequence features include, within a preset recent time period, behavior features of the candidate task performer and change features of the acting party of the task to be processed. A neural network model then predicts, based on the static features and the time sequence features, the comprehensive probability of the candidate task performer completing multiple targets, and the task performer for the task to be processed is determined from the plurality of candidate task performers based on the comprehensive probability. Completing the multiple targets includes completing the task to be processed and completing the associated target of the task to be processed. The embodiments of the disclosure can thereby improve the combined effect of task completion and of project-goal achievement after task completion.

Description

Task prediction processing method, device, product and medium
Technical Field
The disclosed embodiments relate to a task prediction processing method, device, product and medium.
Background
Matching tasks with workers and assigning tasks to them is a problem often encountered in industry. For example, in the field of real-estate transactions, to improve the work efficiency of property brokers, the platform issues tasks to brokers, such as closing a deal on a house source or bringing customers to view a house source, so as to guide the brokers' work and thereby improve the returns of both the brokers and the platform.
In general, the platform may run a large number of projects at the same time, and each project may involve tasks at multiple stages. The difficulty of tasks at the same stage may differ across projects, the difficulty of tasks at different stages may differ within the same project, and the difficulty and completion of the tasks at each stage may affect whether the project's final goal is achieved.
In the prior art, task ranking and allocation are mainly performed with a single-target model ranking method: one important optimization target is taken as the ranking criterion and estimated with a model. For example, in a house-source task system, an estimation model is built to predict whether a task will be completed, and the predicted completion probability output by the model is used as the criterion for ranking and distributing tasks.
In implementing the present disclosure, the inventors found through research that the single-target ranking method above considers only one target when ranking tasks and does not account for the effect that completing a task has on achieving the project's final goal. A task ranked high under a single target may contribute little to the final goal. For example, in a house-source task system, if tasks are ranked only by the predicted completion probability output by the estimation model, a highly ranked task may contribute little to the project goal of closing a deal on the house source. Ranking and allocating tasks with a single-target model therefore achieves the project goal poorly.
How to allocate project tasks reasonably, so that tasks are both completed and, once completed, contribute to achieving the project goal, is therefore crucial.
Disclosure of Invention
The embodiments of the present disclosure provide a task prediction processing method, device, equipment, product and medium, to improve the combined effect of task completion and of project-goal achievement after task completion.
According to an aspect of the embodiments of the present disclosure, a task prediction processing method is provided, which includes:
acquiring static features and time sequence features for a task to be processed and any candidate task performer among a plurality of candidate task performers; wherein the static features include: task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed; and the time sequence features include: behavior features of the candidate task performer and change features of the acting party of the task to be processed within a preset recent time period;
predicting, with a neural network model and based on the static features and the time sequence features, a comprehensive probability of the candidate task performer completing multiple targets, so as to determine a task performer for the task to be processed from the plurality of candidate task performers based on the comprehensive probability; wherein completing the multiple targets includes: completing the task to be processed and completing an associated target of the task to be processed.
In another embodiment of the above method based on the present disclosure, the predicting, with a neural network model and based on the static features and the time sequence features, the comprehensive probability of the candidate task performer completing multiple targets includes:
inputting the static features into a deep neural network, and outputting a first feature through the deep neural network;
inputting the time sequence features into a recurrent neural network, and outputting a second feature through the recurrent neural network;
fusing the first feature and the second feature to obtain a fused feature; and
predicting, based on the fused feature, the comprehensive probability of the candidate task performer completing the multiple targets.
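As an illustration of the two-branch structure just described (a deep neural network for the static features, a recurrent network for the time sequence features, followed by feature fusion), the following is a minimal sketch in plain NumPy. The layer sizes, the use of concatenation for fusion, and all parameter names are assumptions for the example, not details taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dnn_branch(static_feat, w1, w2):
    """Deep branch: maps the static features to the first feature vector."""
    return relu(relu(static_feat @ w1) @ w2)

def rnn_branch(timing_seq, w_in, w_rec):
    """Simple recurrent branch over the time sequence features;
    returns the last hidden state as the second feature vector."""
    h = np.zeros(w_rec.shape[0])
    for x_t in timing_seq:  # one step per day in the recent time period
        h = np.tanh(x_t @ w_in + h @ w_rec)
    return h

def predict_comprehensive_prob(static_feat, timing_seq, params):
    f1 = dnn_branch(static_feat, params["w1"], params["w2"])
    f2 = rnn_branch(timing_seq, params["w_in"], params["w_rec"])
    fused = np.concatenate([f1, f2])          # feature fusion by concatenation
    return sigmoid(fused @ params["w_out"])   # comprehensive probability

params = {
    "w1": rng.normal(size=(8, 16)), "w2": rng.normal(size=(16, 8)),
    "w_in": rng.normal(size=(4, 8)), "w_rec": rng.normal(size=(8, 8)),
    "w_out": rng.normal(size=(16,)),
}
static_feat = rng.normal(size=8)      # task + performer + acting-party features
timing_seq = rng.normal(size=(7, 4))  # last-7-day behavior / change features
p = predict_comprehensive_prob(static_feat, timing_seq, params)
```

Concatenation is only one common fusion choice; the patent specifies merely that the first and second features are fused before the final prediction.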
In another embodiment of the above method based on the present disclosure, the method further comprises:
allocating the task to be processed to the candidate task performer with the highest comprehensive probability, based on the comprehensive probabilities of the plurality of candidate task performers completing the multiple targets.
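The single-task allocation rule in this embodiment (allocate to the candidate with the highest comprehensive probability) amounts to an argmax over candidates. A sketch, with hypothetical broker identifiers:

```python
def assign_task(probs_by_performer):
    """probs_by_performer: {performer_id: comprehensive probability}.
    Returns the candidate task performer with the highest probability."""
    return max(probs_by_performer, key=probs_by_performer.get)

chosen = assign_task({"broker_a": 0.31, "broker_b": 0.62, "broker_c": 0.47})
```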
In another embodiment of the above method according to the present disclosure, there are a plurality of tasks to be processed;
the acquiring of the static feature and the time sequence feature for the task to be processed and any candidate task performer of the plurality of candidate task performers includes:
for each task to be processed among the plurality of tasks to be processed, acquiring static features and time sequence features for that task to be processed and each candidate task performer among the plurality of candidate task performers;
the predicting, by using the neural network model, the comprehensive probability of completing multiple targets by any candidate task performer based on the static feature and the time sequence feature includes:
predicting the comprehensive probability of any candidate task performer for completing multiple targets corresponding to any task to be processed by using a neural network model based on the static characteristics and the time sequence characteristics;
the method further comprises the following steps:
determining, based on a descending order of the comprehensive probabilities of the plurality of candidate task performers completing the plurality of tasks to be processed and on a preset task allocation rule, a task performer for each of the plurality of tasks to be processed, and allocating each task to be processed to the determined task performer.
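For the multi-task embodiment above, the comprehensive probabilities of all (task, performer) pairs are sorted from high to low and a preset allocation rule is applied. The patent does not specify the rule; one plausible example, shown purely for illustration, is a greedy pass that caps the number of tasks per performer:

```python
def allocate_tasks(pair_probs, max_tasks_per_performer=1):
    """pair_probs: {(task_id, performer_id): comprehensive probability}.
    Sorts all pairs from high to low and greedily assigns each task to the
    best still-available performer (the capacity cap is an example rule)."""
    assigned, load = {}, {}
    for (task, performer), _prob in sorted(
        pair_probs.items(), key=lambda kv: kv[1], reverse=True
    ):
        if task in assigned or load.get(performer, 0) >= max_tasks_per_performer:
            continue
        assigned[task] = performer
        load[performer] = load.get(performer, 0) + 1
    return assigned

result = allocate_tasks({
    ("t1", "a"): 0.9, ("t1", "b"): 0.8,
    ("t2", "a"): 0.7, ("t2", "b"): 0.6,
})
```

Here task t1 goes to performer a (highest probability), and t2 then goes to b because a has reached the per-performer cap.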
In another embodiment of the foregoing method based on the present disclosure, the task to be processed is a task in a house-source task system; the comprehensive probability of completing multiple targets includes: the comprehensive probability of completing the task to be processed and closing a deal on the house source;
the task features include any one or more of the following: task type, task amplitude;
the performer features include any one or more of the following: the broker's age, the broker's educational background, the broker's length of service;
the acting-party features include any one or more of the following: house-source features and owner features of the house source; the house-source features include any one or more of the following: house-source price, house-source location, house-source property type; the owner features include any one or more of the following: whether the owner is replacing the house, and whether the owner has already bought a new house;
the behavior features of the candidate task performer include any one or more of the following: the types and numbers of tasks the broker performs;
the change features of the acting party of the task to be processed include any one or more of the following: changes in the house-source price, and the number of times each task type has been performed on the house source.
In another embodiment of the above method based on the present disclosure, the method further includes training the neural network model by:
acquiring at least one training sample; wherein each training sample includes static features and time sequence features corresponding to a task sample and a task performer sample, together with a label indicating whether the task sample is completed and a label indicating whether the associated target of the task sample is completed;
predicting the comprehensive probability of the task executor samples corresponding to the training samples for completing multiple targets by utilizing a neural network model based on the static characteristics and the time sequence characteristics of the training samples to obtain a comprehensive probability predicted value; wherein the task performer sample completing multiple objectives comprises: completing the task sample and completing an associated goal of the task sample;
calculating, with a preset multi-target loss function, a loss function value for each of the at least one training sample based on the label indicating whether its task sample is completed, the label indicating whether the associated target of its task sample is completed, and the comprehensive probability predicted value;
and training the neural network model based on the loss function value corresponding to the at least one training sample.
In another embodiment of the above method according to the present disclosure, the multi-objective loss function is:
L(w) = -\sum_{i} \left[ C \, y_i z_i \log \hat{y}_i + (1 - y_i z_i) \log (1 - \hat{y}_i) \right]

wherein L(w) is the loss function value, \hat{y}_i is the comprehensive probability predicted value, y is the label indicating whether the task sample is completed, z is the label indicating whether the associated target of the task sample is completed, and C is the multi-target loss weight, whose value is a real number greater than 0.
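The loss computation can be sketched as follows. Since the exact formula appears in the patent only as an image, the particular combination used here (a weighted cross-entropy over the joint label y·z) is an assumption consistent with the surrounding definitions:

```python
import math

def multi_target_loss(y_hat, y, z, C):
    """y_hat: predicted comprehensive probability in (0, 1);
    y: task-completion label (0/1); z: associated-target label (0/1);
    C: multi-target loss weight (> 0)."""
    joint = y * z  # 1 only when both targets are met
    return -(C * joint * math.log(y_hat) + (1 - joint) * math.log(1.0 - y_hat))

# C up-weights joint success: a confident prediction is cheap when both
# targets were met, but costly when only the task (not the target) was.
loss_joint = multi_target_loss(0.9, 1, 1, C=2.0)    # both targets met
loss_partial = multi_target_loss(0.9, 1, 0, C=2.0)  # task done, target missed
```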
In another embodiment of the above method based on the present disclosure, the task sample is a task sample in a house-source task system; the task types of the task samples include any one or more of the following: closing a deal, accompanied viewing, virtual reality (VR) accompanied viewing, VR house explanation, voice follow-up, face-to-face visit, on-site survey of the house source, and holding the key.
In another embodiment of the above method based on the present disclosure, the value of C is determined based on the task type of the task sample:
C = \lambda \cdot c, if the task type of the task sample belongs to the preset key task types; C = c, otherwise

wherein c is a preset fixed weight value, and \lambda is a real number greater than 1.
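A sketch of the task-type-dependent weight: certain task types receive the boosted weight λ·c, while the rest keep the base weight c. Which task types are boosted is an assumption of this example (here, closing a deal):

```python
def loss_weight(task_type, c=1.0, lam=2.0, boosted_types=("deal",)):
    """c: preset fixed weight; lam: real number greater than 1.
    boosted_types is an illustrative assumption, not from the patent."""
    return lam * c if task_type in boosted_types else c

w_deal = loss_weight("deal")
w_viewing = loss_weight("accompanied viewing")
```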
According to another aspect of the embodiments of the present disclosure, there is provided a task prediction processing apparatus including:
a feature acquisition module, configured to acquire static features and time sequence features for a task to be processed and any candidate task performer among a plurality of candidate task performers; wherein the static features include: task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed; and the time sequence features include: behavior features of the candidate task performer and change features of the acting party of the task to be processed within a preset recent time period;
a prediction module, configured to predict, by using a neural network model, a comprehensive probability of completing multiple objectives by any candidate task performer based on the static feature and the time-series feature, so as to determine, from the multiple candidate task performers, a task performer of the task to be processed based on the comprehensive probability; wherein the accomplishing multiple objectives comprises: and finishing the task to be processed and finishing the associated target of the task to be processed.
In another embodiment of the above apparatus according to the present disclosure, the prediction module includes:
the deep neural network is used for receiving the static features and outputting first features;
the long-short term memory recurrent neural network is used for receiving the time sequence characteristics and outputting second characteristics;
the feature fusion unit is used for fusing the first feature and the second feature to obtain a fusion feature;
and the prediction unit is used for predicting the comprehensive probability of finishing the multiple targets by any candidate task executor based on the fusion characteristics.
In another embodiment of the above apparatus according to the present disclosure, further comprising:
and the distribution module is used for distributing the task to be processed to the candidate task performer with the highest comprehensive probability based on the comprehensive probability of completing multiple targets of the candidate task performers.
In another embodiment of the above apparatus according to the present disclosure, there are a plurality of tasks to be processed;
the feature obtaining module is specifically configured to obtain a static feature and a time sequence feature for any one to-be-processed task of the multiple to-be-processed tasks and for any one candidate task performer of the any to-be-processed task and the multiple candidate task performers, respectively;
the prediction module is specifically configured to predict, by using a neural network model, a comprehensive probability that the candidate task performer completes multiple targets corresponding to any task to be processed based on the static feature and the time sequence feature;
the device further comprises:
an allocation module, configured to determine, based on a descending order of the comprehensive probabilities of the plurality of candidate task performers completing the plurality of tasks to be processed and on a preset task allocation rule, a task performer for each of the plurality of tasks to be processed, and to allocate each task to be processed to the determined task performer.
In another embodiment of the above apparatus based on the present disclosure, the task to be processed is a task in a house-source task system; the comprehensive probability of completing multiple targets includes: the comprehensive probability of completing the task to be processed and closing a deal on the house source;
the task features include any one or more of the following: task type, task amplitude;
the performer features include any one or more of the following: the broker's age, the broker's educational background, the broker's length of service;
the acting-party features include any one or more of the following: house-source features and owner features of the house source; the house-source features include any one or more of the following: house-source price, house-source location, house-source property type; the owner features include any one or more of the following: whether the owner is replacing the house, and whether the owner has already bought a new house;
the behavior features of the candidate task performer include any one or more of the following: the types and numbers of tasks the broker performs;
the change features of the acting party of the task to be processed include any one or more of the following: changes in the house-source price, and the number of times each task type has been performed on the house source.
In another embodiment of the above apparatus according to the present disclosure, further comprising:
a sample acquisition module, configured to acquire at least one training sample; wherein each training sample includes static features and time sequence features corresponding to a task sample and a task performer sample, together with a label indicating whether the task sample is completed and a label indicating whether the associated target of the task sample is completed;
the prediction module is further used for predicting the comprehensive probability of the task executor samples corresponding to the training samples for completing multiple targets based on the static characteristics and the time sequence characteristics of the training samples by using a neural network model to obtain a comprehensive probability prediction value; wherein the task performer sample completing multiple objectives comprises: completing the task sample and completing an associated goal of the task sample;
the training module is used for calculating a loss function value corresponding to each training sample based on whether the task sample of each training sample in the at least one training sample completes the label, whether the associated target of the task sample completes the label and a comprehensive probability prediction value by utilizing a preset multi-target loss function; and training the neural network model based on the loss function value corresponding to the at least one training sample.
In another embodiment of the above apparatus according to the present disclosure, the multi-objective loss function is:
L(w) = -\sum_{i} \left[ C \, y_i z_i \log \hat{y}_i + (1 - y_i z_i) \log (1 - \hat{y}_i) \right]

wherein L(w) is the loss function value, \hat{y}_i is the comprehensive probability predicted value, y is the label indicating whether the task sample is completed, z is the label indicating whether the associated target of the task sample is completed, and C is the multi-target loss weight, whose value is a real number greater than 0.
In another embodiment of the above apparatus based on the present disclosure, the task sample is a task sample in a house-source task system; the task types of the task samples include any one or more of the following: closing a deal, accompanied viewing, virtual reality (VR) accompanied viewing, VR house explanation, voice follow-up, face-to-face visit, on-site survey of the house source, and holding the key.
In another embodiment of the foregoing apparatus based on the present disclosure, a value of C is determined based on a task type of the task sample:
C = \lambda \cdot c, if the task type of the task sample belongs to the preset key task types; C = c, otherwise

wherein c is a preset fixed weight value, and \lambda is a real number greater than 1.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, and when the computer program is executed, implement the task prediction processing method according to any of the above embodiments of the present disclosure.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the task prediction processing method according to any of the above embodiments of the present disclosure.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product, which includes a computer program/instruction, when executed by a processor, implementing the task prediction processing method according to any of the above embodiments of the present disclosure.
With the task prediction processing method, apparatus, device, product and medium provided by the above embodiments of the present disclosure, static features (including task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed) and time sequence features (including, within a preset recent time period, behavior features of the candidate task performer and change features of the acting party of the task to be processed) are acquired for a task to be processed and any candidate task performer among a plurality of candidate task performers. A neural network model then predicts, based on the static features and the time sequence features, the comprehensive probability of the candidate task performer completing multiple targets (completing the task to be processed and completing its associated target), and the task performer for the task to be processed is determined from the plurality of candidate task performers based on the comprehensive probability. The embodiments of the present disclosure can therefore jointly consider task completion and the influence of task completion on the associated target (for example, the final goal of the project) when determining the task performer and issuing the task, thereby improving the combined effect of task completion and of project-goal achievement after task completion, and helping to improve task execution efficiency and overall project efficiency.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an embodiment of a task prediction processing method according to the present disclosure.
Fig. 2 is a flowchart of another embodiment of a task prediction processing method according to the present disclosure.
FIG. 3 is a flowchart illustrating a task prediction processing method according to another embodiment of the disclosure.
Fig. 4 is a schematic structural diagram of an embodiment of a task prediction processing device according to the present disclosure.
Fig. 5 is a schematic structural diagram of another embodiment of the task prediction processing device according to the present disclosure.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that terms such as "first" and "second" in the embodiments of the present disclosure are used merely to distinguish one element from another, and imply neither any particular technical meaning nor a necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Fig. 1 is a flowchart of an embodiment of a task prediction processing method according to the present disclosure. As shown in fig. 1, the task prediction processing method of this embodiment includes:
102, acquiring static features and time-series features for a task to be processed and any candidate task performer among a plurality of candidate task performers.
The static features may include, but are not limited to, features in the following three dimensions: task features of the task to be processed (i.e., features of the task itself), performer features of the candidate task performer, and acting-party features of the task to be processed. The time-series features may include, for example, but are not limited to, features in the following two dimensions: within a preset recent time period (e.g., within the last 7 days, within the last 30 days, etc.), behavior features of the candidate task performer and change features of the acting party of the task to be processed.
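For concreteness, the two feature groups described above might be organized as in the following sketch; all field names and values here are illustrative assumptions, not the schema of the system itself.

```python
# Hypothetical layout of one (task, candidate performer) feature pair.
# Categorical fields are shown already encoded as small integers.
static_features = {
    "task":      {"task_type": 2, "task_magnitude": 10},        # task features
    "performer": {"age": 30, "education": 3, "years": 5},       # performer features
    "actor":     {"price": 400_000, "owner_changed_house": 1},  # acting-party features
}

# Time-series features: one row per day of a 7-day recent window,
# e.g. [tasks_executed, showings, house_source_price_change].
timeseries_features = [
    [2, 1, 0],
    [3, 0, -5_000],
    [1, 2, 0],
    [0, 0, 0],
    [2, 1, 10_000],
    [1, 0, 0],
    [4, 2, 0],
]
```

The static group is consumed once per prediction, while the time-series group is consumed row by row by the recurrent part of the model.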
104, predicting, by using a neural network model and based on the static features and the time-series features, a comprehensive probability that the candidate task performer completes multiple targets, so as to determine the task performer of the task to be processed from the plurality of candidate task performers based on the comprehensive probability.
Here, completing multiple targets includes: completing the task to be processed and completing the associated target of the task to be processed. The associated target is not limited; it may be, for example, the target of the next stage directly associated with the task to be processed, or the final target of the project to which the task to be processed belongs.
Based on the task prediction processing method provided by the above embodiment of the present disclosure, for a task to be processed and any candidate task performer among a plurality of candidate task performers, static features (including task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed) and time-series features (including, within a preset recent time period, behavior features of the candidate task performer and change features of the acting party of the task to be processed) are obtained. Then, based on the static features and the time-series features, a comprehensive probability that the candidate task performer completes multiple targets (including completing the task to be processed and completing its associated target) is predicted by using a neural network model, so as to determine the task performer of the task to be processed from the plurality of candidate task performers based on the comprehensive probability. In this way, the embodiment of the present disclosure can comprehensively consider both task completion and the influence of task completion on the associated target (for example, the final target of the project) when determining the task performer and issuing the task, thereby simultaneously achieving task completion and achievement of the project target after the task is completed, and helping to improve task execution efficiency and overall project efficiency.
Fig. 2 is a flowchart of another embodiment of a task prediction processing method according to the present disclosure. As shown in fig. 2, on the basis of the embodiment shown in fig. 1, 104 may include:
1042, inputting the static features into a Deep Neural Network (DNN) and outputting a first feature via the DNN.
The DNN performs feature extraction on the input static features to obtain the first feature.
1044, inputting the time-series features into a Recurrent Neural Network (RNN) and outputting a second feature via the RNN.
RNNs are a class of neural networks that can be used to process time series data, which refers to data collected at different points in time that reflects the state or extent of a change in something, phenomenon, etc. over time.
Optionally, in a possible implementation, a conventional RNN, a Long Short-Term Memory (LSTM) network (a variant of the RNN), or another neural network with time-series processing capability may be used as the RNN to learn from the input time-series features and obtain the second feature. Because the RNN has a feature memory function, it can learn from the input time-series features and analyze how they change over time; the output second feature thus reflects the state or degree of change of the time-series features over time.
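The time-step recursion described above can be illustrated with a minimal vanilla tanh-RNN forward pass; in practice an LSTM or a deep-learning framework would be used, and the weights below are illustrative toy values, not a trained model.

```python
import math

def rnn_forward(xs, Wx, Wh, b):
    """Run a vanilla tanh RNN over a sequence and return the final hidden state.

    xs: list of input vectors, one per time step (e.g. one per day).
    The final hidden state summarizes how the series evolved over time,
    playing the role of the 'second feature'.
    """
    h = [0.0] * len(b)                       # initial hidden state
    for x in xs:                             # iterate over time steps
        h = [
            math.tanh(
                sum(Wx[i][j] * x[j] for j in range(len(x)))      # input term
                + sum(Wh[i][k] * h[k] for k in range(len(h)))    # recurrent term
                + b[i]
            )
            for i in range(len(b))
        ]
    return h

# Toy weights for a 2-input, 2-unit RNN (illustrative values only).
Wx = [[0.5, -0.2], [0.1, 0.3]]
Wh = [[0.1, 0.0], [0.0, 0.1]]
b = [0.0, 0.0]
series = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]   # e.g. 3 days of behavior features
second_feature = rnn_forward(series, Wx, Wh, b)
```

Because the same weights are applied at every step while the hidden state carries history forward, later inputs are interpreted in the context of earlier ones, which is the memory property the text relies on.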
1046, fusing the first feature and the second feature to obtain a fused feature.
The first feature and the second feature may be directly spliced (concatenated) or added together, or they may be fused in other ways; this is not limited in the present disclosure.
Optionally, in a possible implementation, the first feature and the second feature may be input into a neural network including a fully-connected layer, and the first feature and the second feature are directly spliced or otherwise fused through the fully-connected layer to obtain a fused feature.
1048, predicting the comprehensive probability of completing multiple targets of any corresponding candidate task performer based on the fusion characteristics.
Optionally, in a possible implementation manner, the comprehensive probability of completing multiple targets may be predicted based on the fusion feature through an activation function, such as softmax, Sigmoid, or the like, to obtain a value in the range [0,1] as the comprehensive probability of completing multiple targets of any corresponding candidate task performer.
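A minimal sketch of steps 1046-1048 (fusion by direct splicing followed by a Sigmoid head) might look as follows; the weights are illustrative assumptions, since a trained model would supply them.

```python
import math

def sigmoid(z):
    """Map a real-valued score into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_probability(first_feature, second_feature, weights, bias):
    """Fuse the two features by direct splicing, then apply a linear
    head and a Sigmoid to obtain a comprehensive probability in [0, 1]."""
    fused = list(first_feature) + list(second_feature)   # step 1046: splicing
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return sigmoid(z)                                    # step 1048: activation

# Illustrative call: 2-dim first feature from the DNN, 1-dim second
# feature from the RNN, and toy head weights.
p = predict_probability([0.2, -0.1], [0.4], weights=[1.0, 0.5, -0.3], bias=0.1)
```

A softmax head over two outputs would serve the same purpose when the model is framed as two-class classification.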
Based on this embodiment, the DNN learns and extracts the static features, the RNN processes the time-series data to learn and extract the time-series features, the first feature and the second feature are fused, and the comprehensive probability that the candidate task performer completes the multiple targets is predicted based on the resulting fused feature. In this way, static features such as the task features of the task to be processed, the performer features of the candidate task performer, and the acting-party features of the task to be processed, as well as time-series features such as the behavior features of the candidate task performer and the change features of the acting party of the task to be processed within the preset recent time period, can all be considered comprehensively, so that the comprehensive probability that the candidate task performer completes the multiple targets is predicted accurately, improving the accuracy of the prediction result.
Optionally, in the above embodiment of the present disclosure, after the comprehensive probability that any candidate task performer completes multiple objectives is obtained through prediction, the task to be processed may be further allocated to the candidate task performer with the highest comprehensive probability based on the comprehensive probability that multiple candidate task performers complete multiple objectives.
Specifically, the candidate task performer with the highest comprehensive probability may be selected directly from the comprehensive probabilities of the plurality of candidate task performers completing the multiple targets, and the task to be processed is allocated to that candidate task performer; alternatively, the comprehensive probabilities of the plurality of candidate task performers completing the multiple targets may be sorted from high to low, and the task to be processed is then allocated to the top-ranked candidate task performer.
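Either variant described above reduces to an argmax over the predicted comprehensive probabilities; a minimal sketch (performer identifiers and scores are illustrative):

```python
def choose_performer(probabilities):
    """Pick the candidate with the highest comprehensive probability.

    probabilities: dict mapping candidate performer id -> predicted
    comprehensive probability of completing the multiple targets.
    """
    return max(probabilities, key=probabilities.get)

probs = {"broker_a": 0.31, "broker_b": 0.57, "broker_c": 0.44}
best = choose_performer(probs)
```

Sorting the whole dict and taking the first entry gives the same answer; the explicit sort is only needed when a full ranking is wanted.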
Based on this embodiment, which can be applied to scenarios with a single task to be processed and a plurality of candidate task performers, after the comprehensive probability that each candidate task performer completes the multiple targets is predicted based on the embodiment of the present disclosure, the candidate task performer with the highest comprehensive probability is selected to perform the task, so that task execution efficiency and overall project efficiency can both be optimized.
Optionally, in the above embodiment of the present disclosure, when there are multiple tasks to be processed, the operations of 102 and 104 may be executed respectively for any one of the multiple tasks to be processed, so as to obtain the comprehensive probability that any candidate task performer completes multiple targets corresponding to any one of the tasks to be processed. Specifically, in 102, the static feature and the time-series feature may be acquired for any one of the plurality of tasks to be processed, for any one of the tasks to be processed and any one of the candidate task performers, respectively. And in 104, predicting the comprehensive probability of the any candidate task performer for completing multiple targets corresponding to any task to be processed based on the static characteristics and the time sequence characteristics corresponding to the any task to be processed and any candidate task performer by using a neural network model. Then, based on the sequence of the comprehensive probability corresponding to the completion of the multiple tasks to be processed by the multiple candidate task performers from high to low and a preset task allocation rule, respectively determining a task performer of each task to be processed in the multiple tasks to be processed, and allocating each task to be processed to the determined task performer.
Specifically, in a possible implementation, the comprehensive probabilities of the candidate task performers completing the multiple tasks to be processed may be sorted from high to low. Based on a preset task allocation rule, for example limiting the maximum number N of tasks assignable to each candidate task performer (e.g., at most 3 tasks may be assigned), where N is an integer greater than 0, the candidate task performers with higher comprehensive probabilities are selected in turn, in descending order of comprehensive probability, as the task performers of the respective tasks to be processed, and each task to be processed is assigned to the determined task performer.
Based on the maximum number N of tasks assignable to each candidate task performer, the tasks assignable to each candidate task performer can be limited. For example, when proceeding in descending order of comprehensive probability, if M (M greater than N) tasks to be processed all correspond to the same candidate task performer, the top N of those tasks in descending order of comprehensive probability are issued to that candidate task performer, and for the remaining tasks, other candidate task performers are selected, likewise in descending order of comprehensive probability and subject to each performer's limit of N assignable tasks.
Based on this embodiment, when the candidate task performers with higher comprehensive probabilities are selected in turn, in descending order of comprehensive probability, as the task performers of the respective tasks to be processed, limiting each candidate task performer to at most N assignable tasks prevents any single candidate task performer from being assigned more tasks than it can perform, ensuring that tasks are executed effectively while simultaneously achieving task completion and achievement of the project target after the task is completed.
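The capped, probability-ordered allocation described above can be sketched as a greedy procedure; task names, broker names, and scores below are illustrative.

```python
def assign_tasks(scores, max_per_performer):
    """Greedy multi-task assignment.

    scores: dict mapping (task, performer) -> predicted comprehensive
    probability. Pairs are visited from highest probability down; each
    task is assigned once, and no performer receives more than
    max_per_performer tasks (the limit N from the allocation rule).
    """
    pairs = sorted(
        ((p, task, performer) for (task, performer), p in scores.items()),
        reverse=True,
    )
    assignment, load = {}, {}
    for p, task, performer in pairs:
        if task in assignment:                         # task already placed
            continue
        if load.get(performer, 0) >= max_per_performer:  # performer at cap
            continue
        assignment[task] = performer
        load[performer] = load.get(performer, 0) + 1
    return assignment

# 4 tasks, 2 brokers, cap N = 3: broker "a" scores highest on every task,
# so it takes the top 3 and the fourth task falls through to broker "b".
scores = {
    ("t1", "a"): 0.9, ("t1", "b"): 0.2,
    ("t2", "a"): 0.8, ("t2", "b"): 0.3,
    ("t3", "a"): 0.7, ("t3", "b"): 0.4,
    ("t4", "a"): 0.6, ("t4", "b"): 0.1,
}
result = assign_tasks(scores, max_per_performer=3)
```

This greedy pass is one simple realization of the rule; a production system might instead solve a global matching problem, which the patent leaves open.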
Optionally, the task prediction processing method of any embodiment of the present disclosure may be used for processing tasks in any field and any project. For example, when the embodiment of the present disclosure is applied to task management in a house-source task system, the task to be processed in the above embodiment is a task in the house-source task system, and accordingly, the comprehensive probability of completing multiple targets includes the comprehensive probability of completing the task to be processed and completing the house-source bargain (transaction). Task features may include, for example, but are not limited to, any one or more of the following: task type, task magnitude, etc. Performer features may include, for example, but are not limited to, any one or more of the following: the broker's age, the broker's educational background, the broker's years of service, etc. Acting-party features may include, for example, but are not limited to, any one or more of the following: house-source features and owner features of the house source, etc. House-source features may include, for example, but are not limited to, any one or more of the following: house-source price, house-source location, house-source nature, etc. Owner features may include, for example, but are not limited to, any one or more of the following: whether the owner is changing houses, whether the owner has bought a house, etc. The behavior features of the candidate task performer may include, for example, but are not limited to, any one or more of the following: the types and numbers of tasks performed by the broker (e.g., the number of showings of the house source, the bargain magnitude, the number of VR house-speaking sessions in the last seven days), etc.
The change features of the acting party of the task to be processed may include, for example, but are not limited to, any one or more of the following: price changes of the house source (e.g., price changes of the house source in the last seven days), the number of times each task type was executed for the house source (e.g., the number of showings of the house source in the last seven days), and so on. Each task type may include, for example, but is not limited to, any one or more of the following: bargaining, showing (see-through), Virtual Reality (VR) showing, VR house-speaking, voice follow-up, interview, real survey of the house source, taking a key, and the like. The task magnitude is the magnitude under the corresponding task type — for example, a bargain of 400,000 or 600,000, speaking the house via VR 8 or 10 times, 8 or 10 showings, and so on.
In practical application, the relevant information of the task to be processed, the relevant information of the candidate task performers, and the relevant information of the acting party of the task to be processed can be obtained from an information database that interfaces with the task distribution system responsible for allocating and issuing tasks, and the static features and time-series features are extracted from this information. When extracting the corresponding task features (including static and time-series features) from the relevant information, if a piece of information is represented as a number, the number can be extracted directly as the corresponding feature; if it is represented as text, the text may be converted into a corresponding feature, for example, by using an existing word-embedding method such as the one-hot algorithm or the word-to-vector (word2vec) algorithm. The embodiment of the present disclosure does not limit the specific implementation of feature extraction.
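The one-hot conversion mentioned above can be sketched as follows; the vocabulary of task types is an illustrative assumption.

```python
def one_hot(value, vocabulary):
    """Convert a categorical text value into a one-hot feature vector.

    Values outside the vocabulary map to the all-zero vector, a common
    convention for unseen categories.
    """
    return [1 if value == v else 0 for v in vocabulary]

# Illustrative task-type vocabulary for the house-source system.
task_types = ["bargain", "showing", "vr_showing", "voice_followup"]
feature = one_hot("showing", task_types)
```

word2vec-style embeddings would instead map each token to a learned dense vector, trading the sparse indicator above for a lower-dimensional representation.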
Based on this embodiment, for the task to be processed and the candidate brokers, static features (task features such as task type and task magnitude; performer features such as the broker's age, educational background, and years of service; and acting-party features such as house-source features and owner features of the house source of the task to be processed) and time-series features (behavior features such as the types and numbers of tasks performed by the broker, and change features of the acting party such as the price changes of the house source and the number of times each task type was executed for the house source within the preset recent time period) can be obtained, so as to predict the comprehensive probability of completing the task to be processed and completing the house-source bargain. The broker for the task to be processed is thereby determined from the candidate brokers, so that task completion and house-source transaction after the task is completed can be achieved simultaneously, helping to improve task execution efficiency and the overall efficiency of the house-source task system.
In addition, in the embodiment of the present disclosure, the neural network model may be obtained by training a training sample in advance before 102. FIG. 3 is a flowchart illustrating a task prediction processing method according to another embodiment of the disclosure. As shown in fig. 3, in this embodiment, the method further includes training the neural network model by:
202, obtaining at least one training sample.
Each training sample includes static features and time-series features corresponding to a task sample and a task performer sample, and each training sample carries a label indicating whether the task sample is completed and a label indicating whether the associated target of the task sample is completed — that is, it carries multiple labels. For the static features and time-series features corresponding to the task sample and the task performer sample in each training sample, reference may be made to the description of static features and time-series features in the foregoing embodiments of the present disclosure, and details are not repeated here.
And 204, predicting the comprehensive probability of the task performer sample corresponding to each training sample for completing multiple targets by utilizing a neural network model based on the static characteristics and the time sequence characteristics of each training sample respectively to obtain a comprehensive probability predicted value.
Wherein the task performer samples completing multiple objectives comprises: the task sample is completed and the associated goal for the task sample is completed.
206, calculating a loss function value corresponding to each training sample by using a preset multi-objective loss function, based on the label indicating whether the task sample of each of the at least one training sample is completed, the label indicating whether the associated target of the task sample is completed, and the comprehensive probability prediction value.
And 208, training the neural network model based on the loss function value corresponding to the at least one training sample.
The operations 202-208 (or 204-208) may be executed iteratively until a preset training completion condition is met, at which point training of the neural network model is completed. The preset training completion condition may include, but is not limited to, any one or more of the following: the number of iterative training rounds of the neural network model (i.e., the number of iterative executions of operations 202-208 or 204-208) reaches a preset number (e.g., 200); the loss function value corresponding to the at least one training sample is less than or equal to a preset threshold; and so on. For example, the loss function value corresponding to each training sample is less than or equal to the preset threshold, or the mean of the loss function values corresponding to the at least one training sample is less than or equal to the preset threshold — this is not limited in the embodiments of the present disclosure.
The loss function measures, via a mathematical formula, the gap between the predicted value of the neural network model and the actual value (label). The design and construction of the loss function is a core design point that determines how well the neural network model performs.
The embodiment of the present disclosure provides a multi-objective loss function, and in a possible implementation manner, the multi-objective loss function may be:
L(W) = -[ C · y · z · log(ŷ) + y · (1 - z) · log(ŷ) + (1 - y) · log(1 - ŷ) ]        (1)
In the above formula (1), L(W) is the loss function value corresponding to each training sample calculated by using the multi-objective loss function, and may also be referred to as the neural network model loss; ŷ is the comprehensive probability prediction value output by the neural network model; y is the real label (i.e., the actual value) indicating whether the task sample is completed — in practical application its value may be 1 or 0, where the completed label may be set to 1 and the uncompleted label to 0; z is the label indicating whether the associated target of the task sample is completed — likewise, its value may be 1 (completed) or 0 (not completed); C is the multi-target loss weight, a real number greater than 0.
In the above formula (1), the term C · y · z · log(ŷ) represents the prediction loss of the comprehensive probability prediction value output by the neural network model on samples that complete the task sample and complete the associated target of the task sample, while the term y · (1 - z) · log(ŷ) represents the prediction loss of the comprehensive probability prediction value on samples that complete the task sample only. Through the weight C in the multi-objective loss function, the influence on the prediction loss of the samples which complete the task and complete the associated target of the task is strengthened, and the penalty for prediction errors on such samples is increased, so that after the training of the neural network model is completed, the accuracy of predicting the comprehensive probability of completing the task and completing the associated target of the task can be improved.
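A concrete per-sample loss of the kind described — a weighted binary cross-entropy in which samples that complete both the task and its associated target are up-weighted by C — might be sketched as follows; this exact form is an assumption consistent with the description, not necessarily the patent's formula.

```python
import math

def multi_objective_loss(y_hat, y, z, C):
    """Assumed weighted binary cross-entropy over two labels.

    y_hat: predicted comprehensive probability, in (0, 1)
    y: 1 if the task sample is completed, else 0
    z: 1 if the associated target of the task sample is completed, else 0
    C: multi-target loss weight (C > 0); up-weights samples with y = z = 1
    """
    return -(
        C * y * z * math.log(y_hat)          # task and associated target done
        + y * (1 - z) * math.log(y_hat)      # task done only
        + (1 - y) * math.log(1 - y_hat)      # task not done
    )
```

With C large, a dual-completion sample contributes far more loss than a task-only sample at the same prediction, which is the mechanism claimed for steering the model toward multi-target accuracy.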
For example, in the house-source task system, one column of labels, y, indicates whether the task sample is completed, and the other column of labels, z, indicates whether a bargain (transaction) was concluded for the house source; both columns of labels are applied in the multi-objective loss function so that the neural network model learns the multi-target information. After the neural network model is trained, the comprehensive probability of completing the task corresponding to the task sample and completing the associated target of the task can be accurately predicted.
Optionally, when the embodiment of the present disclosure is applied to task management in a house-source task system, the task sample in this embodiment may be a task sample in the house-source task system. Accordingly, the task types of the task samples may include, for example, but are not limited to, any one or more of the following: bargaining, showing, VR showing, VR house-speaking, voice follow-up, interview, real survey of the house source, taking a key, and the like.
By adjusting the value of C given by formula (2), a neural network model can be obtained that satisfies the required task completion rate while also satisfying the required achievement rate of the associated target after the task is completed, thereby realizing a task comprehensive-probability prediction algorithm that combines multiple targets. How the value of C is chosen is therefore crucial.
Optionally, in a possible implementation, the value of C is determined based on the task type of the task sample, for example as follows:
C = c / λ, if the task type of the task sample is bargaining; C = c, otherwise        (2)
In the above formula (2), c is a preset fixed weight value greater than 1, and λ is a real number greater than 1.
In most task systems, tasks can be divided into several different task types, and each task type has its distinct features. Based on the embodiment, the value of the multi-target loss weight C can be adjusted according to different task types, so that the trained neural network model can be suitable for various task types in a task system, the influence of task completion of different task types and the achievement of the associated target after the task is completed can be reasonably balanced, and the comprehensive effect of the task completion and the achievement of the final project target after the task is completed can be simultaneously realized.
For example, in the house-source task system, the task types can be divided into 8 types, such as bargaining, showing, VR showing, VR house-speaking, voice follow-up, interview, real survey of the house source, and taking a key. Compared with other task types, the bargaining task has an obvious promoting effect on the house-source transaction target, but because the bargaining task is difficult, its predicted task completion probability is not high, so the weight of the bargaining task can be penalized. Through multiple offline tests, good values of λ and c can be obtained, and a neural network model integrating multiple indicators is thus obtained through training.
The inventors of the present disclosure found, through multiple offline tests on a certain number of tasks, that when λ takes a value in the range [1, 2] and c takes a value in the range [30, 80], with other conditions equal, task allocation based on the comprehensive probability predicted by the neural network model optimizes the task-completion transaction rate while still satisfying the task completion rate requirement. Here, the task completion rate is the ratio of the number of completed tasks to the number of all tasks participating in the offline test, and the task-completion transaction rate is the ratio of the number of tasks whose task and associated target were both completed to the number of completed tasks.
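The two offline-test metrics defined above — the task completion rate, and the ratio, among completed tasks, of those whose associated target was also completed — can be computed as in this sketch; the sample data is illustrative.

```python
def completion_rate(tasks):
    """tasks: list of (completed, associated_target_completed) 0/1 pairs.
    Fraction of all tasks that were completed."""
    return sum(y for y, _ in tasks) / len(tasks)

def completion_conversion_rate(tasks):
    """Among completed tasks, the fraction whose associated target
    (e.g. the house-source deal) was also completed."""
    completed = [(y, z) for y, z in tasks if y == 1]
    return sum(z for _, z in completed) / len(completed)

# Illustrative offline-test sample: 4 tasks, 3 completed, 2 of those
# also reaching the associated target.
tasks = [(1, 1), (1, 0), (1, 1), (0, 0)]
```

Tracking both numbers together is what lets the offline tests confirm that raising the second ratio (via the weight C) does not sacrifice the first.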
When the embodiment of the disclosure is applied to a house source task system, the following method can be used for realizing:
acquiring a task list of all tasks to be allocated, and sequentially selecting one task to be allocated from the task list as a task to be processed in the embodiment;
acquiring all candidate brokers in the broker list;
acquiring, from the information database, relevant information of the task to be processed, relevant information of all candidate brokers, and relevant information of the acting party of the task to be processed, and extracting static features and time-series features from this information, where the static features include: task features such as the task type and task magnitude of the task to be processed; performer features such as the broker's age, educational background, and years of service; and acting-party features such as house-source features and owner features of the house source of the task to be processed; and the time-series features include: within the last 7 days, behavior features such as the types and numbers of tasks executed by the broker, and change features of the acting party such as the price changes of the house source and the number of times each task type was executed for the house source;
predicting the comprehensive probability of each candidate broker for completing the tasks to be processed and the house source bargaining based on the embodiment by using the neural network model obtained by training;
after the comprehensive probabilities that all candidate brokers complete each task to be processed and conclude the house-source bargain are predicted in the above manner, ranking the candidate brokers for each task to be processed in descending order of comprehensive probability;
based on the limitation of the maximum number N (for example, 3) of assignable tasks of the same candidate broker, selecting the candidate broker with higher comprehensive probability as the broker of each task to be processed in turn according to the sequence of the comprehensive probability from high to low, and assigning each task to be processed to the determined broker.
Any one of the task prediction processing methods provided by the embodiments of the present disclosure may be executed by any suitable device having a data processing capability, including but not limited to: terminal equipment, a server and the like. Alternatively, any of the task prediction processing methods provided by the embodiments of the present disclosure may be executed by a processor, for example, the processor may execute any of the task prediction processing methods mentioned in the embodiments of the present disclosure by calling a corresponding instruction stored in a memory. And will not be described in detail below.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Fig. 4 is a schematic structural diagram of an embodiment of a task prediction processing device according to the present disclosure. The device of the embodiment can be used for realizing the embodiment of the prediction processing method of each task of the disclosure. As shown in fig. 4, the task prediction processing device of this embodiment includes: a feature acquisition module 302 and a prediction module 304. Wherein:
the feature obtaining module 302 is configured to obtain a static feature and a time sequence feature for a task to be processed and any candidate task performer of the multiple candidate task performers. Wherein the static features may include: task characteristics of the task to be processed, executor characteristics of the candidate task executor and action side characteristics of the task to be processed; the timing characteristics may include: and in a preset latest time period, behavior characteristics of the candidate task performers and change characteristics of the acting party of the task to be processed.
The prediction module 304 is configured to predict, by using a neural network model and based on the static features and the time-series features, the comprehensive probability that the candidate task performer completes multiple targets, so as to determine the task performer of the task to be processed from the plurality of candidate task performers based on the comprehensive probability. The completing of multiple targets includes: completing the task to be processed and completing the associated target of the task to be processed.
The task prediction processing device provided by the above embodiment of the present disclosure acquires, for any candidate task performer, a static feature (including the task features of the task to be processed, the performer features of the candidate task performer, and the acting-party features of the task to be processed) and a time sequence feature (including the behavior features of the candidate task performer and the change features of the acting party of the task to be processed within a preset recent time period), and then uses a neural network model to predict, based on these features, the comprehensive probability that the candidate task performer completes multiple targets (including completing the task to be processed and completing its associated target), so that the task performer of the task to be processed can be determined from the plurality of candidate task performers based on the comprehensive probability. The embodiment of the disclosure can therefore jointly consider task completion and the influence of task completion on the associated target (for example, the final target of a project) when determining the task performer and issuing the task, achieving both task completion and attainment of the project target, which helps improve task execution efficiency and overall project efficiency.
Fig. 5 is a schematic structural diagram of another embodiment of the task prediction processing device according to the present disclosure. As shown in fig. 5, on the basis of the embodiment shown in fig. 4, the prediction module 304 of the task prediction processing device of this embodiment may include: a DNN 3022, configured to receive the static feature and output a first feature; an RNN 3024, configured to receive the time sequence feature and output a second feature; a feature fusion unit 3026, configured to fuse the first feature and the second feature to obtain a fused feature; and a prediction unit 3028, configured to predict, based on the fused feature, the comprehensive probability that the candidate task performer completes the multiple targets.
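Purely as an illustration of the fig. 5 structure, the two-branch network can be sketched as below. The layer sizes, the single hidden layer in the DNN branch, the Elman-style recurrence, and the sigmoid output head are all assumptions, since the disclosure does not fix the network dimensions or cell type:

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn(x, W, b):
    # DNN branch: one ReLU hidden layer over the static features -> first feature
    return np.maximum(0.0, x @ W + b)

def rnn(seq, Wx, Wh, bh):
    # Simple Elman-style recurrence over the time sequence features;
    # the final hidden state serves as the second feature.
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(x_t @ Wx + h @ Wh + bh)
    return h

def predict(static_x, seq_x, params):
    first = dnn(static_x, params["Wd"], params["bd"])
    second = rnn(seq_x, params["Wx"], params["Wh"], params["bh"])
    fused = np.concatenate([first, second])   # feature fusion by concatenation
    logit = fused @ params["Wo"] + params["bo"]
    return 1.0 / (1.0 + np.exp(-logit))       # comprehensive probability in (0, 1)

static_dim, seq_dim, hidden = 8, 4, 16
params = {
    "Wd": rng.normal(size=(static_dim, hidden)) * 0.1,
    "bd": np.zeros(hidden),
    "Wx": rng.normal(size=(seq_dim, hidden)) * 0.1,
    "Wh": rng.normal(size=(hidden, hidden)) * 0.1,
    "bh": np.zeros(hidden),
    "Wo": rng.normal(size=2 * hidden) * 0.1,
    "bo": 0.0,
}
p = predict(rng.normal(size=static_dim), rng.normal(size=(5, seq_dim)), params)
```

In practice the DNN and RNN branches would be trained jointly with the fusion and prediction units, as described in the training-module embodiment below.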
Optionally, referring again to fig. 5, the task prediction processing device of the above embodiment may further include an allocation module 306, configured to allocate the task to be processed to the candidate task performer with the highest comprehensive probability, based on the comprehensive probabilities with which the plurality of candidate task performers complete the multiple targets.
Optionally, in a possible implementation, there are a plurality of tasks to be processed. Correspondingly, the feature acquisition module 302 is configured to acquire a static feature and a time sequence feature for each pair of a task to be processed (among the plurality of tasks to be processed) and a candidate task performer (among the plurality of candidate task performers); the prediction module 304 is configured to predict, using the neural network model and based on the static feature and the time sequence feature, the comprehensive probability that the candidate task performer completes the multiple targets corresponding to that task to be processed. Accordingly, referring again to fig. 5, the allocation module 306 is configured to determine a task performer for each of the plurality of tasks to be processed, and to allocate each task to be processed to its determined task performer, by ranking the comprehensive probabilities of the plurality of candidate task performers for the tasks from high to low and applying a preset task allocation rule.
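The high-to-low allocation just described can be illustrated with a simple greedy sketch. The per-performer capacity cap here stands in for the disclosure's unspecified "preset task allocation rule" and is purely an assumption:

```python
def allocate(prob, max_per_performer=1):
    # prob maps (task, performer) pairs to the predicted comprehensive
    # probability. Sort all pairs from high to low, then give each task
    # to the best still-available performer (greedy assignment).
    pairs = sorted(
        ((p, task, perf) for (task, perf), p in prob.items()),
        key=lambda t: -t[0],
    )
    load = {}      # tasks already assigned to each performer
    assigned = {}  # task -> chosen performer
    for p, task, perf in pairs:
        if task in assigned:
            continue  # task already has a performer
        if load.get(perf, 0) >= max_per_performer:
            continue  # capacity rule: performer is full
        assigned[task] = perf
        load[perf] = load.get(perf, 0) + 1
    return assigned

probs = {("t1", "a"): 0.9, ("t1", "b"): 0.5, ("t2", "a"): 0.8, ("t2", "b"): 0.6}
result = allocate(probs, max_per_performer=1)
```

With the capacity cap of one, task t1 goes to performer a (0.9) and task t2 falls through to performer b (0.6) even though a scored higher for it.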
Alternatively, the task prediction processing device according to any embodiment of the present disclosure may be used to process tasks in any field and any project. For example, when an embodiment of the present disclosure is applied to task management in a house-source task system, the task to be processed in the above embodiment is a task in the house-source task system, and the comprehensive probability of completing the multiple targets accordingly includes the comprehensive probability of completing the task to be processed and completing the house-source transaction. The task features may include, for example but without limitation, any one or more of the following: task type, task magnitude, etc. The performer features may include, for example but without limitation, any one or more of the following: the broker's age, the broker's education level, the broker's years of service, etc. The acting-party features may include, for example but without limitation, any one or more of the following: house-source features and owner features of the house source, etc. The house-source features may include, for example but without limitation, any one or more of the following: house-source price, house-source location, house-source property type, etc. The owner features may include, for example but without limitation, any one or more of the following: whether the owner is replacing a home, whether the owner has already bought a home, etc. The behavior features of the candidate task performer may include, for example but without limitation, any one or more of the following: the types and numbers of tasks the broker has performed, etc. The change features of the acting party of the task to be processed may include, for example but without limitation, any one or more of the following: changes in the house-source price, the number of times each task type has been performed for the house source, etc.
Each task type may include, for example but without limitation, any one or more of the following: closing a transaction, accompanied viewing with virtual reality (VR), VR house presentation, voice follow-up, in-person visit, on-site house-source survey, key collection, and the like.
Optionally, referring again to fig. 5, the task prediction processing device of the above embodiment may further include a sample acquisition module 308 and a training module 310. Wherein:
the sample acquisition module 308 is configured to acquire at least one training sample. Each training sample includes the static feature and time sequence feature corresponding to a task sample and a task performer sample, and carries a completion label for the task sample and a completion label for the associated target of the task sample.
Correspondingly, in this embodiment, the prediction module 304 may be further configured to predict, using the neural network model and based on the static feature and time sequence feature of each training sample, the comprehensive probability that the task performer sample corresponding to the training sample completes the multiple targets, obtaining a comprehensive probability predicted value. The task performer sample completing the multiple targets includes: completing the task sample and completing the associated target of the task sample.
The training module 310 is configured to calculate, using a preset multi-objective loss function, a loss function value for each training sample based on that training sample's task-sample completion label, associated-target completion label, and comprehensive probability predicted value, and to train the neural network model based on the loss function values corresponding to the at least one training sample.
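The patent's multi-objective loss formula (1) is published only as an image. As a hedged illustration of what a loss combining the labels y and z with weight C could look like (this exact form is an assumption, not the patent's formula), one might weight two cross-entropy terms on the single predicted value:

```python
import math

def multi_objective_loss(y_hat, y, z, C=2.0):
    # y_hat: comprehensive probability predicted value in (0, 1)
    # y: task-sample completion label (0 or 1)
    # z: associated-target completion label (0 or 1)
    # C: multi-objective loss weight, a real number greater than 0
    # NOTE: illustrative form only; the patent's formula (1) is not
    # reproduced in the text and may differ.
    eps = 1e-7
    y_hat = min(max(y_hat, eps), 1.0 - eps)  # guard against log(0)
    bce_task = -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))
    bce_goal = -(z * math.log(y_hat) + (1 - z) * math.log(1 - y_hat))
    return bce_task + C * bce_goal
```

A larger C shifts the optimization toward the associated target (e.g. the house-source transaction) relative to plain task completion, which matches the role of the multi-objective loss weight described above.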
Optionally, in a possible implementation, the multi-objective loss function may be:

[Formula (1), shown as an image in the original publication]

In formula (1), L(w) is the loss function value for each training sample calculated with the multi-objective loss function, ŷ is the comprehensive probability predicted value, y is the completion label of the task sample, z is the completion label of the associated target of the task sample, and C is the multi-objective loss weight, a real number greater than 0.
Optionally, when an embodiment of the present disclosure is applied to task management in a house-source task system, the task sample in the embodiment may be a task sample in the house-source task system. Accordingly, the task types of the task samples may include, for example but without limitation, any one or more of the following: closing a transaction, accompanied viewing with virtual reality (VR), VR house presentation, voice follow-up, in-person visit, on-site house-source survey, key collection, and the like.
Optionally, in a possible implementation, the value of C is determined based on the task type of the task sample, for example as follows:

[Formula (2), shown as an image in the original publication]

In formula (2), c is a preset fixed weight value greater than 1, and λ is a real number greater than 1.
In addition, an embodiment of the present disclosure also provides an electronic device, including:
a memory for storing a computer program;
a processor, configured to execute the computer program stored in the memory, and when the computer program is executed, implement the task prediction processing method according to any of the above embodiments of the present disclosure.
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 6. The electronic device may be either or both of a first device and a second device, or a stand-alone device separate from them that may communicate with the first and second devices to receive acquired input signals from them.
FIG. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 6, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by a processor to implement the task prediction processing methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 6, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of task prediction processing according to various embodiments of the present disclosure described in the above section of this specification.
The computer program product may carry program code for performing operations of embodiments of the present disclosure written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the task prediction processing method according to various embodiments of the present disclosure described in the above section of the present specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", and "having" are open-ended words that mean "including, but not limited to" and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or", unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A task prediction processing method, characterized by comprising:
obtaining a static feature and a time sequence feature for a task to be processed and any candidate task performer of a plurality of candidate task performers; wherein the static feature comprises: task features of the task to be processed, performer features of the candidate task performer, and acting-party features of the task to be processed; and the time sequence feature comprises: behavior features of the candidate task performer and change features of the acting party of the task to be processed within a preset recent time period; and
predicting, using a neural network model and based on the static feature and the time sequence feature, a comprehensive probability that the candidate task performer completes multiple targets, so as to determine a task performer of the task to be processed from the plurality of candidate task performers based on the comprehensive probability; wherein completing the multiple targets comprises: completing the task to be processed and completing an associated target of the task to be processed.
2. The method of claim 1, wherein predicting, using a neural network model, the comprehensive probability of any candidate task performer completing the multiple targets based on the static feature and the time sequence feature comprises:
inputting the static feature into a deep neural network, and outputting a first feature through the deep neural network;
inputting the time sequence feature into a recurrent neural network, and outputting a second feature through the recurrent neural network;
fusing the first feature and the second feature to obtain a fused feature; and
predicting, based on the fused feature, the comprehensive probability that the candidate task performer completes the multiple targets.
3. The method of claim 1 or 2, further comprising:
and distributing the task to be processed to the candidate task performer with the highest comprehensive probability based on the comprehensive probability of completing multiple targets of the candidate task performers.
4. The method according to claim 1 or 2, wherein there are a plurality of tasks to be processed;
the obtaining a static feature and a time sequence feature for the task to be processed and any candidate task performer of the plurality of candidate task performers comprises:
obtaining a static feature and a time sequence feature for each task to be processed of the plurality of tasks to be processed and any candidate task performer of the plurality of candidate task performers, respectively;
the predicting, using the neural network model, the comprehensive probability that any candidate task performer completes the multiple targets based on the static feature and the time sequence feature comprises:
predicting, using the neural network model and based on the static feature and the time sequence feature, the comprehensive probability that the candidate task performer completes the multiple targets corresponding to the task to be processed; and
the method further comprises:
determining a task performer for each of the plurality of tasks to be processed, and allocating each task to be processed to its determined task performer, based on a high-to-low ranking of the comprehensive probabilities with which the plurality of candidate task performers complete the plurality of tasks to be processed and a preset task allocation rule.
5. The method according to any one of claims 1 to 4, wherein the task to be processed is a task in a house-source task system, and the comprehensive probability of completing the multiple targets comprises a comprehensive probability of completing the task to be processed and completing a house-source transaction;
the task features comprise any one or more of the following: task type, task magnitude;
the performer features comprise any one or more of the following: the broker's age, the broker's education level, the broker's years of service;
the acting-party features comprise any one or more of the following: house-source features and owner features of the house source; the house-source features comprise any one or more of the following: house-source price, house-source location, house-source property type; the owner features comprise any one or more of the following: whether the owner is replacing a home, whether the owner has already bought a home;
the behavior features of the candidate task performer comprise any one or more of the following: the types and numbers of tasks the broker has performed; and
the change features of the acting party of the task to be processed comprise any one or more of the following: changes in the house-source price, the number of times each task type has been performed for the house source.
6. The method of any one of claims 1 to 5, further comprising training the neural network model by:
obtaining at least one training sample, wherein each training sample comprises a static feature and a time sequence feature corresponding to a task sample and a task performer sample, and each training sample carries a completion label for the task sample and a completion label for the associated target of the task sample;
predicting, using the neural network model and based on the static feature and the time sequence feature of each training sample, a comprehensive probability that the task performer sample corresponding to the training sample completes the multiple targets, to obtain a comprehensive probability predicted value, wherein the task performer sample completing the multiple targets comprises completing the task sample and completing the associated target of the task sample;
calculating, using a preset multi-objective loss function, a loss function value for each of the at least one training sample based on the completion label of the task sample, the completion label of the associated target of the task sample, and the comprehensive probability predicted value of that training sample; and
training the neural network model based on the loss function values corresponding to the at least one training sample.
7. The method of claim 6, wherein the multi-objective loss function is:

[Formula, shown as an image in the original publication]

wherein L(w) is the loss function value, ŷ is the comprehensive probability predicted value, y is the completion label of the task sample, z is the completion label of the associated target of the task sample, and C is a multi-objective loss weight whose value is a real number greater than 0.
8. The method of claim 7, wherein the task sample is a task sample in a house-source task system, and the task types of the task samples comprise any one or more of the following: closing a transaction, accompanied viewing, accompanied viewing with virtual reality (VR), VR house presentation, voice follow-up, in-person visit, on-site house-source survey, key collection.
9. The method of claim 8, wherein the value of C is determined based on the task type of the task sample:

[Formula, shown as an image in the original publication]

wherein c is a preset fixed weight value, and λ is a real number greater than 1.
10. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing a computer program stored in the memory, and when executed, implementing the method of any of the preceding claims 1-9.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of the preceding claims 1 to 9.
12. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the method of any of the preceding claims 1-9.
CN202111186825.3A 2021-10-12 2021-10-12 Task prediction processing method, device, product and medium Pending CN113869596A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111186825.3A CN113869596A (en) 2021-10-12 2021-10-12 Task prediction processing method, device, product and medium

Publications (1)

Publication Number Publication Date
CN113869596A true CN113869596A (en) 2021-12-31

Family

ID=78998793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111186825.3A Pending CN113869596A (en) 2021-10-12 2021-10-12 Task prediction processing method, device, product and medium

Country Status (1)

Country Link
CN (1) CN113869596A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387062A (en) * 2022-01-13 2022-04-22 北京自如信息科技有限公司 Training of housekeeper recommendation model, housekeeper recommendation method and electronic equipment
CN116187632A (en) * 2022-09-08 2023-05-30 贝壳找房(北京)科技有限公司 Behavior prediction model training method, house source matching method, device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163474A (en) * 2019-04-12 2019-08-23 平安普惠企业管理有限公司 A kind of method and apparatus of task distribution
CN110618855A (en) * 2018-12-25 2019-12-27 北京时光荏苒科技有限公司 Task allocation method and device, electronic equipment and storage medium
CN110766269A (en) * 2019-09-02 2020-02-07 平安科技(深圳)有限公司 Task allocation method and device, readable storage medium and terminal equipment
CN111242752A (en) * 2020-04-24 2020-06-05 支付宝(杭州)信息技术有限公司 Method and system for determining recommended object based on multi-task prediction
CN111275358A (en) * 2020-02-25 2020-06-12 北京多禾聚元科技有限公司 Dispatch matching method, device, equipment and storage medium
CN111754121A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Task allocation method, device, equipment and storage medium
CN113159628A (en) * 2021-05-13 2021-07-23 中国建设银行股份有限公司 Task allocation method and device
CN113256038A (en) * 2021-07-15 2021-08-13 腾讯科技(深圳)有限公司 Data processing method, data processing equipment and computer readable storage medium
CN113379177A (en) * 2020-03-10 2021-09-10 北京沃东天骏信息技术有限公司 Task scheduling system and method

Similar Documents

Publication Publication Date Title
US11915123B2 (en) Fusing multimodal data using recurrent neural networks
WO2023040494A1 (en) Resource recommendation method, and multi-target fusion model training method and apparatus
CN107220217A (en) Characteristic coefficient training method and device that logic-based is returned
US11574016B2 (en) System and method for prioritization of support requests
US20110161263A1 (en) Computer-Implemented Systems And Methods For Constructing A Reduced Input Space Utilizing The Rejected Variable Space
US10755332B2 (en) Multi-perceptual similarity detection and resolution
CN113869596A (en) Task prediction processing method, device, product and medium
US20230342787A1 (en) Optimized hardware product returns for subscription services
CN116438555A (en) Automatic deep learning architecture selection for time series prediction with user interaction
CN112036954A (en) Item recommendation method and device, computer-readable storage medium and electronic device
CN113239702A (en) Intention recognition method and device and electronic equipment
CN113986674A (en) Method and device for detecting abnormity of time sequence data and electronic equipment
CN112070545A (en) Method, apparatus, medium, and electronic device for optimizing information reach
CN115456707A (en) Method and device for providing commodity recommendation information and electronic equipment
CN111242162A (en) Training method and device of image classification model, medium and electronic equipment
CN114360027A (en) Training method and device for feature extraction network and electronic equipment
KR102438923B1 (en) Deep Learning based Bitcoin Block Data Prediction System Considering Characteristics of Time-Series Distribution
CN116029766A (en) User transaction decision recognition method, incentive strategy optimization method, device and equipment
CN112328899B (en) Information processing method, information processing apparatus, storage medium, and electronic device
CN113377640B (en) Method, medium, device and computing equipment for explaining model under business scene
CN111178987B (en) Method and device for training user behavior prediction model
CN114742645A (en) User security level identification method and device based on multi-stage time sequence multitask
CN115796984A (en) Training method of item recommendation model, storage medium and related equipment
CN117473457B (en) Big data mining method and system based on digital service
WO2024113641A1 (en) Video recommendation method and apparatus, and electronic device, computer-readable storage medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination