CN116257689A - Model training method, crowdsourcing verification task recommendation method, electronic device and storage medium - Google Patents

Model training method, crowdsourcing verification task recommendation method, electronic device and storage medium

Info

Publication number
CN116257689A
Authority
CN
China
Prior art keywords
crowded
task
scene
fusion vector
verification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310248999.0A
Other languages
Chinese (zh)
Inventor
Zhu Zhaowei (朱昭苇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202310248999.0A priority Critical patent/CN116257689A/en
Publication of CN116257689A publication Critical patent/CN116257689A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Databases & Information Systems (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a model training method, a crowdsourcing verification task recommendation method, an electronic device, and a storage medium, and relates to the technical field of information recommendation. The model training method comprises the following steps: performing first learning on non-scene features and scene features of training data of a crowdsourcing verification task to obtain a first fusion vector; performing second learning on the first fusion vector to obtain the click-through rate of the task; and taking the first learning and the second learning as one training iteration, repeating the training until a convergence condition is reached, so as to obtain a crowdsourcing verification task model for recommending crowdsourcing verification tasks. By recommending tasks with a model trained according to the embodiments of the application, user characteristics can be captured accurately, the drawback of the long-tail distribution of scene data is overcome, multi-scene fusion and differentiated recommendation are realized, the recommendation effect of crowdsourcing verification tasks is improved, and the user experience is improved.

Description

Model training method, crowdsourcing verification task recommendation method, electronic device and storage medium
Technical Field
The present application relates to the technical field of information recommendation, and in particular to a crowdsourcing verification task recommendation method, a model training method, an electronic device, and a storage medium.
Background
A crowdsourcing verification task generally presents a question to the user in the form of a multiple-choice or question-and-answer item, and obtains feedback information from the user's answer. Such tasks can be recommended to users in many scenes, and at present a single unified model is commonly used to make recommendations across the different scenes. However, because the data of each scene follows a long-tail distribution, tasks with little user feedback cannot be captured accurately, nor can the differing characteristics a user exhibits in different scenes, so the recommendation effect is poor and the user experience suffers.
Disclosure of Invention
The embodiments of the present application provide a model training method, a crowdsourcing verification task recommendation method, an electronic device, and a storage medium, so as to improve the recommendation effect of crowdsourcing verification tasks and the user experience.
In a first aspect, an embodiment of the present application provides a model training method, including:
performing first learning on non-scene features and scene features of training data of a crowdsourcing verification task to obtain a first fusion vector;
performing second learning on the first fusion vector to obtain the click-through rate of the crowdsourcing verification task;
and taking the first learning and the second learning as one training iteration, repeating the training until a convergence condition is reached, so as to obtain a crowdsourcing verification task model for recommending crowdsourcing verification tasks.
In a second aspect, an embodiment of the present application provides a crowdsourcing verification task recommendation method, including:
inputting data of a plurality of crowdsourcing verification tasks into a crowdsourcing verification task model to obtain the click-through rate of each task;
recommending the plurality of tasks in descending order of click-through rate;
wherein the crowdsourcing verification task model is trained using the model training method described above.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored in the memory; the processor implements any of the methods described above when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein which, when executed by a processor, implements any of the methods described above.
Compared with the prior art, the application has the following advantages:
First learning is performed on the non-scene features and scene features of the training data of a crowdsourcing verification task to obtain a first fusion vector; second learning is performed on the first fusion vector to obtain the task's click-through rate; the first learning and the second learning together form one training iteration, which is repeated until a convergence condition is reached, yielding the crowdsourcing verification task model. Because training is based on both non-scene features and scene features, the differences between scenes are fully considered, and the model's ability to capture user characteristics accurately is improved. Recommending crowdsourcing verification tasks with the trained model overcomes the drawback of the long-tail distribution of scene data, realizes multi-scene fusion and differentiated recommendation, improves the recommendation effect, and improves the user experience.
The foregoing is merely an overview of the technical solutions of the present application. To make the technical means of the present application clearer so that they can be implemented according to the content of this specification, and to make the above and other objects, features, and advantages of the present application easier to understand, the detailed description of the present application is given below.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the application and are not to be considered limiting of its scope.
FIG. 1 is a flow chart of a model training method according to an embodiment of the present application;
FIG. 2 is a flow chart of a model training method according to another embodiment of the present application;
FIG. 3 is a flowchart of a crowdsourcing verification task recommendation method according to another embodiment of the present application;
FIG. 4 is a model training schematic of another embodiment of the present application;
fig. 5 is a block diagram of an electronic device used to implement an embodiment of the present application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, the following describes related technologies of the embodiments of the present application. The following related technologies may be optionally combined with the technical solutions of the embodiments of the present application, which all belong to the protection scope of the embodiments of the present application.
The terms referred to in this application are explained first.
POI (Point of Interest): a location recorded in a geographic information system. For example, a POI may be a house, a shop, a post office, a bus stop, etc.
Multiple scenes: different themes displayed to the user within the client, including but not limited to a main map feedback scene, an orchard scene, a trending activity scene, a curated service scene, and so on. Each scene may include multiple pages, such as a primary page, a secondary page, a tertiary page, and the like.
Crowdsourcing verification task: a task presented to the user in the form of a multiple-choice or question-and-answer item; the user's answer provides feedback information. Crowdsourcing tasks are often used when recommending information to a user on a client. For example, a map client may pose questions about POI attributes, such as whether a POI's business hours are correct or whether its phone number can be reached.
CTR (Click-Through Rate): the number of clicks an item (e.g., a web advertisement) actually receives divided by the number of times it is shown. CTR is an important index for measuring the effect of Internet advertising. In the embodiments of the present application, CTR refers to the click-through rate of a crowdsourcing verification task, which reflects the user traffic of the page on which the task appears.
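As a toy numeric illustration (the helper function and sample numbers are ours, not from the patent), CTR is simply clicks divided by impressions:

```python
# Toy illustration of CTR: clicks divided by impressions.
# The function name and the sample numbers are illustrative, not from the patent.
def click_through_rate(clicks: int, impressions: int) -> float:
    """Return clicks / impressions, guarding against zero impressions."""
    if impressions <= 0:
        return 0.0
    return clicks / impressions

# A task shown 2000 times and clicked 30 times has a CTR of 1.5%.
print(click_through_rate(30, 2000))  # 0.015
```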
The crowdsourcing verification task recommendation method provided by the embodiments of the present application can be applied to any electronic device, including but not limited to a computer, a tablet computer, or a notebook computer; a server is commonly used. The server may present questions and obtain user feedback via the recommendation method described above. Specific application scenes are varied, including but not limited to a main map feedback scene, an orchard scene, a trending activity scene, a curated service scene, etc. Presentation to the user according to the embodiments of the present application may be performed by the server sending the recommended tasks to a client, which then displays them; the client may be of various types, including but not limited to a map application or a navigation application. In a map application, recommending verification tasks by this method is of great value to the supply and updating of POI data and provides a guarantee for data collection.
An embodiment of the present application provides a model training method. As shown in fig. 1, which is a flowchart of a model training method according to an embodiment of the present application, the method may include the following steps S101 to S103.
In step S101, first learning is performed on non-scene features and scene features of training data of a crowdsourcing verification task to obtain a first fusion vector.
In the embodiments of the present application, scene features are features related to a scene, and non-scene features are features unrelated to any scene. The first fusion vector is obtained by fusing the non-scene features with the scene features, and can represent feature-related information in the task data, such as differentiated representations of different scenes. The same user can yield different first fusion vectors in different scenes, so that the same user can receive differentiated recommendations in scenes whose differences are more pronounced.
The non-scene features may include at least one of: user features, task features, or user-task cross features. Distinguishing scene features from non-scene features better captures the variability among scenes, enables differentiated recommendation, and improves the user experience.
In step S102, second learning is performed on the first fusion vector to obtain the click-through rate of the crowdsourcing verification task.
In the embodiments of the present application, computing the CTR of a crowdsourcing verification task displayed on a page can be understood as scoring the task. The higher a task's CTR, the higher its score, indicating a higher likelihood that the task will be noticed and accepted by the user; conversely, a lower CTR corresponds to a lower score and a lower likelihood of being noticed and accepted.
In step S103, the first learning and the second learning are taken as one training iteration, which is repeated until a convergence condition is reached, so as to obtain a crowdsourcing verification task model for recommending crowdsourcing verification tasks.
In one embodiment, step S102 specifically includes:
determining a second fusion vector according to the shared parameters and the scene-related private parameters in the first fusion vector, and determining the click-through rate of the crowdsourcing verification task according to the second fusion vector. Parameters common to the pages corresponding to the training data are the shared parameters; parameters specific to individual pages are the private parameters.
For example, the shared parameters include, but are not limited to, PV (PageView) or user group; the private parameters include, but are not limited to, user behavior on the page, which refers to the user's answer feedback to a question, such as selecting "yes" or "no".
According to the method provided by this embodiment, because training is based on both non-scene features and scene features, the differences between scenes are fully considered. This improves the model's ability to capture user characteristics accurately, effectively captures finer-grained user features, and enhances the ability to portray user profiles, thereby facilitating multi-scene fusion and differentiated recommendation of crowdsourcing verification tasks.
The embodiments of the present application also provide a model training method. As shown in fig. 2, which is a flowchart of a model training method according to another embodiment of the present application, the method may include the following steps S201 to S207.
In step S201, non-scene features and scene features are extracted from training data of a crowdsourcing verification task.
The non-scene features include at least one of: user features, task features, or user-task cross features. The scene features include, but are not limited to, feedback scene features, orchard scene features, trending activity scene features, or curated service scene features, etc.
In step S202, if the non-scene features and the scene features differ in dimension, an encoder is used to adjust them to the same dimension.
In the embodiments of the present application, the dimensions of the non-scene features and the scene features may be the same or different. When they differ, the dimensions must first be aligned so that information fusion can be performed. There are many ways to adjust the dimension, including but not limited to compression.
Illustratively, suppose there are 100 non-scene features and 80 scene features, so the dimensions differ. The encoder may compress the non-scene features to 80 dimensions, which are then fused with the 80-dimensional scene features. Alternatively, the encoder may compress both the non-scene features and the scene features to 50 dimensions before fusing them.
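The dimension alignment above can be sketched as follows. The linear projection is a hypothetical stand-in for the encoder, whose concrete form the text does not specify (here the projection matrix is random; in practice it would be learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_encode(x: np.ndarray, out_dim: int) -> np.ndarray:
    """Project x to out_dim via a (here random, in practice learned) matrix."""
    w = rng.normal(size=(x.shape[-1], out_dim))
    return x @ w

non_scene = rng.normal(size=100)  # 100 non-scene features
scene = rng.normal(size=80)       # 80 scene features

# Option 1: compress the 100-dim non-scene features down to 80 dims.
non_scene_80 = linear_encode(non_scene, 80)
assert non_scene_80.shape == scene.shape == (80,)

# Option 2: compress both feature vectors to a common 50 dims.
a, b = linear_encode(non_scene, 50), linear_encode(scene, 50)
assert a.shape == b.shape == (50,)
```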
In step S203, the non-scene features and the scene features are fused by weighted averaging to obtain a first fusion vector.
The non-scene features may be weighted by a first weight and the scene features by a second weight. The first and second weights can be set randomly or empirically at the start of training; their values are updated continuously as training proceeds, converging to fixed values when the model converges.
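A minimal sketch of this weighted-average fusion, with fixed illustrative weights standing in for the learnable first and second weights:

```python
import numpy as np

def fuse(non_scene: np.ndarray, scene: np.ndarray,
         w1: float, w2: float) -> np.ndarray:
    """Weighted average of two equal-dimension feature vectors."""
    return (w1 * non_scene + w2 * scene) / (w1 + w2)

non_scene = np.array([1.0, 0.0, 2.0])
scene = np.array([0.0, 1.0, 2.0])

# During training w1 and w2 would be updated; here they are fixed examples.
first_fusion = fuse(non_scene, scene, w1=0.6, w2=0.4)
assert np.allclose(first_fusion, [0.6, 0.4, 2.0])
```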
In step S204, a shared parameter matrix and a scene-related private parameter matrix are established according to the first fusion vector.
In the embodiments of the present application, only the scene-related shared and private parameters are considered, not scene-independent ones: the non-scene features have already been fused into the first fusion vector, so scene-independent factors can be ignored here, and the re-fusion focuses on scene-related factors.
In step S205, the shared parameter matrix and the private parameter matrix are fused by weighted averaging to obtain a second fusion vector.
The shared parameter matrix may be weighted by a third weight and the private parameter matrix by a fourth weight. The third and fourth weights can be set randomly or empirically at the start of training; their values are updated continuously as training proceeds, converging to fixed values when the model converges.
In the embodiments of the present application, the second fusion vector is obtained by fusing the weighted shared parameter matrix with the weighted private parameter matrix. It can represent both the information shared and the information that differs across the task data, balancing the conflict between information sharing and information differentiation under multiple scenes. Recommending by the click-through rate derived from the second fusion vector makes full use of the data commonality among scenes while preserving the differences between them, so the recommendation is more accurate and the granularity of user characterization is finer.
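The shared/private fusion can be sketched as follows. The parameterization (one shared matrix for all scenes plus one private matrix per scene, applied to the first fusion vector) is our assumption; only the weighted-average fusion itself comes from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

dim, n_scenes = 8, 4
first_fusion = rng.normal(size=dim)  # output of the first learning step

# Hypothetical parameterization: one shared matrix for all scenes and one
# private matrix per scene; the matrix shapes are illustrative assumptions.
W_shared = rng.normal(size=(dim, dim))
W_private = rng.normal(size=(n_scenes, dim, dim))

def second_fusion(x: np.ndarray, scene_id: int,
                  w3: float = 0.5, w4: float = 0.5) -> np.ndarray:
    shared_part = W_shared @ x               # weighted by the third weight
    private_part = W_private[scene_id] @ x   # weighted by the fourth weight
    return (w3 * shared_part + w4 * private_part) / (w3 + w4)

v = second_fusion(first_fusion, scene_id=2)
assert v.shape == (dim,)
```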
In step S206, the second fusion vector is normalized to obtain the click-through rate of the crowdsourcing verification task.
In the embodiments of the present application, the normalization can be implemented with a softmax operation, which yields a value between 0 and 1 that is used as the click-through rate of the task.
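A sketch of the softmax normalization. Treating the output head as binary (click vs. no click) is our assumption; under it, the probability of the "click" class serves as the predicted click-through rate:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: outputs lie in (0, 1) and sum to 1."""
    e = np.exp(z - z.max())  # subtracting the max avoids overflow
    return e / e.sum()

# Hypothetical [click, no-click] scores from the model head.
logits = np.array([1.2, -0.3])
ctr = softmax(logits)[0]
assert 0.0 < ctr < 1.0
print(round(float(ctr), 3))  # 0.818
```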
In step S207, all the above steps are taken as one training iteration, which is repeated until a convergence condition is reached, so as to obtain a crowdsourcing verification task model for recommending crowdsourcing verification tasks.
In one embodiment, the task data used in each batch of the multiple training iterations may come from the same scene; for example, a batch may consist entirely of data from the orchard scene or entirely of data from the green-travel scene. Moreover, the shared parameter matrix and the private parameter matrix are updated simultaneously on every batch of training data. After multiple rounds of training converge, a highly accurate crowdsourcing verification task recommendation model is obtained. In addition, the model can generate its parameters by random initialization at the start of training, which is not described further here.
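The iteration scheme above (one scene per batch; shared and private parameters both updated on every batch) can be sketched as follows. The logistic model, loss, and synthetic data are illustrative stand-ins, not the patent's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

dim, n_scenes, lr = 4, 3, 0.05
theta_shared = rng.normal(size=dim)              # shared across all scenes
theta_private = rng.normal(size=(n_scenes, dim)) # one row per scene

def batches(n_steps=30):
    """Yield (scene_id, features, click labels); one scene per batch."""
    for step in range(n_steps):
        s = step % n_scenes
        X = rng.normal(size=(16, dim))
        y = (rng.random(16) < 0.3).astype(float)  # synthetic click labels
        yield s, X, y

for s, X, y in batches():
    w = theta_shared + theta_private[s]   # combine shared + private params
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted click probability
    grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
    theta_shared -= lr * grad             # both parameter sets are
    theta_private[s] -= lr * grad         #   updated on every batch

assert np.all(np.isfinite(theta_shared))
```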
According to the method provided by this embodiment, training the crowdsourcing verification task model with scene-fusion modeling avoids the poor training effect caused by many scenes with unbalanced data, sidesteps the awkwardness of modeling each scene separately, and greatly reduces modeling and maintenance costs, which helps optimize the model's recommendation effect and the user experience. By weighing the importance of scene features in different scenes, different fusion vectors are produced for different scene requests from the same user, guaranteeing scene-differentiated recommendation. Through the shared and private parameter matrices, common and scene-specific attributes are extracted from each scene and fused with different weights, making full use of the data of every scene, improving the training precision of the model, and facilitating multi-scene fusion and differentiated recommendation of crowdsourcing verification tasks.
Another embodiment of the present application provides a crowdsourcing verification task recommendation method. As shown in fig. 3, which is a flowchart of the method according to another embodiment of the present application, the method may include:
In step S301, data of a plurality of crowdsourcing verification tasks are input into a crowdsourcing verification task model to obtain the click-through rate of each task.
In step S302, the plurality of tasks are recommended in descending order of click-through rate.
The crowdsourcing verification task model is trained using the model training method described in the embodiments above.
In one embodiment, the tasks can be displayed on the client page after sorting, so the user sees them in descending order of click-through rate; this realizes accurate recommendation and improves user satisfaction.
According to the method provided by this embodiment, the differences between scenes are fully considered, user characteristics can be captured accurately, the ability to portray user profiles is enhanced, and the recommendation fits the reality of the business and the user's experience more closely, finally realizing multi-scene fusion and differentiated recommendation, improving the recommendation effect of crowdsourcing verification tasks, and improving the user experience.
FIG. 4 is a schematic diagram of model training according to an embodiment of the present application. As shown in fig. 4, scene information is first fused in the first learning step: from the non-scene features and scene features of the training data, the non-scene features are weighted by α1 and the scene features by α2, and a weighted average is computed to obtain the first fusion vector, which serves as the input to the second learning step. Next, in the second learning step, multi-scene task training is performed: a shared parameter matrix and a scene-related private parameter matrix are established, the shared parameter matrix is weighted by β1 and the private parameter matrix by β2, and a weighted average is computed to obtain the second fusion vector. After the second learning outputs the second fusion vector, it can be normalized with a softmax operation to obtain the click-through rate of the crowdsourcing verification task. The first learning and the second learning constitute one training iteration, which can be repeated until a convergence condition is reached, yielding the crowdsourcing verification task model.
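The full pipeline of Fig. 4 can be sketched end to end as below. All shapes, weight values, scene names, and the binary output head are illustrative assumptions; only the two weighted fusions and the softmax come from the description:

```python
import numpy as np

rng = np.random.default_rng(3)

D = 6
W_shared = rng.normal(size=(2, D))                   # [click, no-click] head
W_private = {"orchard": rng.normal(size=(2, D)),     # hypothetical scenes
             "feedback": rng.normal(size=(2, D))}

def predict_ctr(non_scene, scene_feat, scene,
                a1=0.5, a2=0.5, b1=0.5, b2=0.5):
    # First fusion: weighted average of non-scene and scene features.
    first = (a1 * non_scene + a2 * scene_feat) / (a1 + a2)
    # Second fusion: weighted average of shared and private transforms.
    logits = (b1 * (W_shared @ first)
              + b2 * (W_private[scene] @ first)) / (b1 + b2)
    e = np.exp(logits - logits.max())                # softmax normalization
    return (e / e.sum())[0]                          # predicted CTR

x_user = rng.normal(size=D)   # non-scene features (illustrative)
x_scene = rng.normal(size=D)  # scene features (illustrative)
ctr = predict_ctr(x_user, x_scene, "orchard")
assert 0.0 < ctr < 1.0
```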
Fig. 5 is a block diagram of an electronic device used to implement an embodiment of the present application. As shown in fig. 5, the electronic device includes: memory 510 and processor 520, memory 510 stores a computer program executable on processor 520. The processor 520, when executing the computer program, implements the methods of the above-described embodiments. The number of memories 510 and processors 520 may be one or more.
The electronic device further includes: and the communication interface 530 is used for communicating with external equipment and carrying out data interaction transmission.
If the memory 510, the processor 520, and the communication interface 530 are implemented independently, they may be connected to and communicate with each other through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may communicate with each other through internal interfaces.
The present embodiments provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the methods provided in the embodiments of the present application.
The embodiments of the present application also provide a chip comprising a processor, configured to call up and run instructions stored in a memory, so that a communication device on which the chip is installed executes the methods provided by the embodiments of the present application.
The embodiment of the application also provides a chip, which comprises: the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the application embodiment.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be a processor supporting an advanced reduced instruction set machine (Advanced RISC Machines, ARM) architecture.
Further alternatively, the memory may include a read-only memory and a random access memory. The memory may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory, among others. Volatile memory can include Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available: Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
Any process or method described in a flowchart or otherwise herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved.
The logic and/or steps described in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the above method embodiments may be performed by a program instructing the associated hardware; when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, specific personal data of users may be used in the schemes described herein within the scope permitted by the applicable laws and regulations of the relevant country (for example, with the user's explicit consent, after the user has been actually notified, etc.).
The foregoing is merely exemplary embodiments of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of model training, the method comprising:
performing first learning according to non-scene features and scene features of training data of a crowdsourced verification task, to obtain a first fusion vector;
performing second learning on the first fusion vector, to obtain a click-through rate of the crowdsourced verification task; and
taking the first learning and the second learning together as one round of training, and iteratively performing multiple rounds of training until a convergence condition is reached, to obtain a crowdsourced verification task model for recommending crowdsourced verification tasks.
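The two-stage procedure of claim 1 can be sketched as follows. This is a hypothetical illustration only: the function names, the logistic output unit, the squared-error loss, and the loss-delta convergence test are assumptions made for the sketch, not details taken from the claims.

```python
import math

# Hypothetical sketch of claim 1: "first learning" fuses non-scene and scene
# features into a first fusion vector; "second learning" maps that vector to a
# predicted click-through rate; one pass of both stages is one round of
# training, repeated until a convergence condition is reached.

def first_learning(non_scene, scene, w1=0.5, w2=0.5):
    """Elementwise weighted average of the two feature vectors."""
    return [w1 * a + w2 * b for a, b in zip(non_scene, scene)]

def second_learning(fusion, weights):
    """Map the fusion vector to a click-through rate in (0, 1)."""
    z = sum(w * x for w, x in zip(weights, fusion))
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.1, epochs=200, tol=1e-6):
    """samples: ((non_scene, scene), label) pairs; gradient descent on a
    squared-error loss until the change in loss falls below tol."""
    dim = len(samples[0][0][0])
    weights = [0.0] * dim
    prev_loss = float("inf")
    for _ in range(epochs):
        loss = 0.0
        for (non_scene, scene), y in samples:
            v = first_learning(non_scene, scene)     # first learning
            p = second_learning(v, weights)          # second learning
            err = p - y
            loss += err * err
            g = 2 * err * p * (1 - p)                # d(loss)/dz at this sample
            weights = [w - lr * g * x for w, x in zip(weights, v)]
        if abs(prev_loss - loss) < tol:              # convergence condition
            break
        prev_loss = loss
    return weights

samples = [(([1.0, 0.0], [0.5, 0.5]), 1.0),
           (([0.0, 1.0], [0.2, 0.1]), 0.0)]
model_weights = train(samples)
```

After training, `second_learning(first_learning(...), model_weights)` scores an unseen task; the positively labeled sample above should end up scored higher than the negatively labeled one.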
2. The method of claim 1, wherein the performing first learning according to the non-scene features and the scene features of the training data of the crowdsourced verification task to obtain the first fusion vector comprises:
extracting the non-scene features and the scene features from the training data of the crowdsourced verification task; and
fusing the non-scene features and the scene features by a weighted-average method to obtain the first fusion vector;
wherein the non-scene features are weighted using a first weight and the scene features are weighted using a second weight.
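A minimal sketch of the weighted-average fusion in claim 2. The particular weight values, the normalization by the weight sum, and the function name are illustrative assumptions, not details specified by the claim.

```python
def fuse(non_scene, scene, first_weight=0.7, second_weight=0.3):
    """First fusion vector: elementwise weighted average of the non-scene
    features (first weight) and the scene features (second weight)."""
    assert len(non_scene) == len(scene)
    total = first_weight + second_weight
    return [(first_weight * a + second_weight * b) / total
            for a, b in zip(non_scene, scene)]

v = fuse([1.0, 2.0, 3.0], [0.0, 1.0, 2.0])
print([round(x, 2) for x in v])  # → [0.7, 1.7, 2.7]
```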
3. The method of claim 1, wherein the performing second learning on the first fusion vector to obtain the click-through rate of the crowdsourced verification task comprises:
determining a second fusion vector according to a shared parameter and a scene-related private parameter in the first fusion vector; and
determining the click-through rate of the crowdsourced verification task according to the second fusion vector;
wherein common parameters set for the pages corresponding to the training data are the shared parameters, and individually set parameters are the private parameters.
4. The method according to claim 3, wherein the determining a second fusion vector according to the shared parameter and the scene-related private parameter in the first fusion vector comprises:
establishing a shared parameter matrix and a scene-related private parameter matrix according to the first fusion vector; and
fusing the shared parameter matrix and the private parameter matrix by a weighted-average method to obtain the second fusion vector;
wherein the shared parameter matrix is weighted using a third weight and the private parameter matrix is weighted using a fourth weight.
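The shared/private fusion of claims 3-4 can be sketched as below. The matrix shapes, the matrix-vector projection, and the weight values are assumptions for illustration; the claims only state that a shared and a private parameter matrix are established and fused by weighted average.

```python
def matvec(matrix, vector):
    """Plain matrix-vector product."""
    return [sum(m * x for m, x in zip(row, vector)) for row in matrix]

def second_fusion(fusion, shared, private, third_weight=0.6, fourth_weight=0.4):
    """Project the first fusion vector through a scene-shared and a
    scene-private parameter matrix, then take their weighted average."""
    shared_out = matvec(shared, fusion)    # parameters common to all pages
    private_out = matvec(private, fusion)  # parameters set for this scene
    return [third_weight * s + fourth_weight * p
            for s, p in zip(shared_out, private_out)]

shared = [[1.0, 0.0], [0.0, 1.0]]
private = [[0.5, 0.5], [0.5, 0.5]]
v2 = second_fusion([1.0, 2.0], shared, private)
print([round(x, 2) for x in v2])  # → [1.2, 1.8]
```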
5. The method according to claim 3, wherein the determining the click-through rate of the crowdsourced verification task according to the second fusion vector comprises:
normalizing the second fusion vector to obtain the click-through rate of the crowdsourced verification task.
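Claim 5 does not specify the normalization; one plausible sketch, assumed here for illustration, is to collapse the second fusion vector to a scalar and squash it with a sigmoid so the result can be read as a probability.

```python
import math

def click_through_rate(second_fusion_vector):
    """Normalize the second fusion vector to a click-through rate in (0, 1):
    sum the components, then apply a sigmoid (an assumed choice)."""
    z = sum(second_fusion_vector)
    return 1.0 / (1.0 + math.exp(-z))

print(click_through_rate([0.0, 0.0]))  # → 0.5
```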
6. The method of claim 1, wherein the non-scene features comprise at least one of: user features, task features, or user-task cross features.
7. The method of claim 1, wherein the training data of the crowdsourced verification tasks used in each of the multiple rounds of training is training data of crowdsourced verification tasks in the same scene.
8. A crowdsourced verification task recommendation method, the method comprising:
inputting data of a plurality of crowdsourced verification tasks into a crowdsourced verification task model to obtain a click-through rate of each crowdsourced verification task; and
recommending the plurality of crowdsourced verification tasks in descending order of click-through rate;
wherein the crowdsourced verification task model is a model trained using the method of any one of claims 1-7.
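The ranking step of claim 8 reduces to sorting candidate tasks by their predicted click-through rate, highest first. In this sketch the score dictionary stands in for the trained model; the task ids are hypothetical.

```python
def recommend(tasks, predicted_ctr):
    """Return task ids sorted by predicted click-through rate, highest first."""
    return sorted(tasks, key=predicted_ctr.get, reverse=True)

ctr = {"task_a": 0.2, "task_b": 0.9, "task_c": 0.5}
print(recommend(list(ctr), ctr))  # → ['task_b', 'task_c', 'task_a']
```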
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the method of any one of claims 1-8.
10. A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-8.
CN202310248999.0A 2023-03-13 2023-03-13 Model training method, crowded verification task recommending method, electronic equipment and storage medium Pending CN116257689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310248999.0A CN116257689A (en) 2023-03-13 2023-03-13 Model training method, crowded verification task recommending method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116257689A true CN116257689A (en) 2023-06-13

Family

ID=86680764

Country Status (1)

Country Link
CN (1) CN116257689A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination