CN114912538A - Information push model training method, information push method, device and equipment

Info

Publication number
CN114912538A
Authority
CN
China
Prior art keywords
policy
information
characteristic value
keyword
model
Prior art date
Legal status
Pending
Application number
CN202210598052.8A
Other languages
Chinese (zh)
Inventor
孙文岩
姜虎城
Current Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date
Filing date
Publication date
Application filed by China Construction Bank Corp and CCB Finetech Co Ltd
Priority to CN202210598052.8A
Publication of CN114912538A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Abstract

The application discloses an information push model training method, an information push method, an information push device and information push equipment. The method comprises the following steps: acquiring a plurality of pieces of preset policy information and generating a plurality of random numbers as training samples based on a random function; performing dimension reduction on the keyword characteristic values corresponding to the plurality of keywords in each piece of policy information to obtain the policy characteristic value corresponding to that piece of policy information; inputting a random number from a training sample into an information push model, matching the random number against the policy characteristic value of each piece of policy information with the model, and outputting the predicted policy information corresponding to the random number; and adjusting the model parameters of the information push model according to the random number and the policy characteristic value corresponding to the predicted policy information until the model converges, thereby obtaining the trained information push model. In this way, the pushing efficiency of policy information can be improved, the waiting time of the user can be reduced, and the user experience can be improved.

Description

Information push model training method, information push method, device and equipment
Technical Field
The application belongs to the technical field of data processing, and particularly relates to an information push model training method, an information push method, an information push device and information push equipment.
Background
As the number of policies grows, users need to transact more and more items, yet most users do not know the transaction flow for these items, so corresponding policy information needs to be pushed to users to help them transact the items.
In existing policy systems, when a user searches for required policy information, the huge volume of policy data makes it inefficient to determine the policy information the user needs from the massive amount of policy information; the user therefore waits a long time, which greatly degrades the user experience.
Disclosure of Invention
The embodiments of the application provide an information push model training method, an information push method, an information push device and information push equipment, which can at least solve the problems in the prior art that determining the policy data required by a user from massive policy data is inefficient, the user waits a long time, and the user experience suffers.
In a first aspect, an embodiment of the present application provides an information push model training method, where the method includes:
acquiring a plurality of preset policy information and generating a plurality of random numbers as a plurality of training samples based on a random function, wherein each training sample comprises one or more random numbers;
performing dimension reduction processing on the keyword characteristic values corresponding to the plurality of keywords in each policy information to obtain the policy characteristic value corresponding to each policy information;
inputting a random number in a training sample into an information pushing model, matching the random number with a policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain prediction policy information corresponding to the random number;
and adjusting the model parameters of the information pushing model according to the random number and the policy characteristic value corresponding to the prediction policy information until the information pushing model converges to obtain the trained information pushing model.
In a second aspect, an embodiment of the present application provides an information pushing method, where the method includes:
receiving a keyword input by a user;
inputting a first keyword characteristic value corresponding to a keyword input by a user into an information pushing model, and pushing target policy information corresponding to the first keyword characteristic value to the user by using the information pushing model, wherein the information pushing model is obtained by training according to the information pushing model training method shown in any embodiment of the first aspect.
In a third aspect, an embodiment of the present application provides an information push model training apparatus, where the apparatus includes:
the acquisition module is used for acquiring a plurality of preset policy information and generating a plurality of random numbers as a plurality of training samples based on a random function, wherein each training sample comprises one or more random numbers;
the dimension reduction module is used for carrying out dimension reduction processing on the keyword characteristic values corresponding to the keywords in each policy information to obtain the policy characteristic value corresponding to each policy information;
the matching module is used for inputting the random number in the training sample into the information pushing model, matching the random number with the policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number;
and the adjusting module is used for adjusting the model parameters of the information pushing model according to the random number and the policy characteristic value corresponding to the prediction policy information until the information pushing model converges to obtain the trained information pushing model.
In a fourth aspect, an embodiment of the present application provides an information pushing apparatus, where the apparatus includes:
the receiving module is used for receiving keywords input by a user;
the information pushing module is configured to input a first keyword feature value corresponding to a keyword input by a user into an information pushing model, and push target policy information corresponding to the first keyword feature value to the user by using the information pushing model, where the information pushing model is obtained by training according to the information pushing model training method shown in any embodiment of the first aspect.
In a fifth aspect, an embodiment of the present application provides an electronic device, where the device includes: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements an information push model training method as shown in any embodiment of the first aspect and/or an information push method as shown in any embodiment of the second aspect.
In a sixth aspect, the present application provides a computer storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the information push model training method shown in any one of the embodiments of the first aspect and/or the information push method shown in any one of the embodiments of the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the information push model training method shown in any one of the embodiments of the first aspect and/or the information push method shown in any one of the embodiments of the second aspect.
With the information push model training method, information push method, device, equipment, medium and product described above, a plurality of pieces of preset policy information can be acquired, and dimension reduction can be performed on the keyword characteristic values corresponding to the plurality of keywords in each piece of policy information to obtain the policy characteristic value corresponding to that piece of policy information. The random number in a training sample is then matched against the policy characteristic value of each piece of policy information to obtain the predicted policy information corresponding to the random number. Because the policy characteristic value of a piece of policy information is obtained by reducing the dimensionality of the keyword characteristic values of its keywords, the number of policy characteristic values is reduced, so the time needed to match the random number against the policy characteristic values is greatly shortened; correspondingly, the time needed by the information push model to push information is greatly shortened. The pushing efficiency of policy information can therefore be improved, the waiting time of the user reduced, and the user experience improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments of the present application are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an information push model training method according to an embodiment of the present application;
FIG. 2 is a graph of policy characteristic values of policy information provided in one embodiment of the present application;
FIG. 3 is a diagram illustrating a distribution of policy feature values of policy information according to an embodiment of the present application;
fig. 4 is a flowchart of an information pushing method according to an embodiment of the present application;
FIG. 5 is a flow node diagram of a policy redemption system provided by one embodiment of the present application;
FIG. 6 is a block diagram of a data resource plan according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an information push model training apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an information pushing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In addition, it should be noted that, in the technical solution of the present application, the acquisition, storage, use, processing, etc. of data all comply with relevant regulations of national laws and regulations.
As described in the background, as the number of policies increases, users need to transact more and more items, but most users do not know the transaction flow of these items, so corresponding policy information needs to be pushed to users to help them transact the items.
In existing policy systems, when a user searches for required policy information, the huge volume of policy data makes it inefficient to determine the policy information the user needs from the massive amount of policy information; the user therefore waits a long time, which greatly degrades the user experience.
In addition, in the prior art policy information can be pushed by artificial intelligence. Artificial intelligence is a very broad science composed of different fields such as machine learning and computer vision, and one of its main research goals is to enable machines to perform complex tasks that would normally require human intelligence. However, the application of artificial intelligence in existing service platforms remains at the level of basic machine learning: parameters are set manually so that the computing program obtains preset keywords (i.e., buried points) from the database, automatically screens the keywords, stores the selected keywords in a register, and then pushes the needed policy information according to the preset characteristics of natural persons or legal persons. Current government systems therefore greatly underuse the capability of artificial intelligence.
Fig. 1 shows a schematic flow diagram of an information push model training method according to an embodiment of the present application, and it should be noted that the information push model training method may be applied to an information push model training apparatus, and as shown in fig. 1, the information push model training method may include the following steps:
s110, acquiring a plurality of preset policy information and generating a plurality of random numbers as a plurality of training samples based on a random function;
s120, performing dimension reduction processing on the keyword characteristic values corresponding to the keywords in each policy information to obtain a policy characteristic value corresponding to each policy information;
s130, inputting the random number in the training sample into an information pushing model, matching the random number with the policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number;
and S140, adjusting model parameters of the information pushing model according to the random number and the policy characteristic value corresponding to the prediction policy information until the information pushing model converges to obtain the trained information pushing model.
In this way, a plurality of pieces of preset policy information can be acquired, and dimension reduction can be performed on the keyword characteristic values corresponding to the plurality of keywords in each piece of policy information to obtain the policy characteristic value corresponding to that piece of policy information; the random number in a training sample is then matched against the policy characteristic value of each piece of policy information to obtain the predicted policy information corresponding to the random number. Because the policy characteristic value of a piece of policy information is obtained by reducing the dimensionality of the keyword characteristic values of its keywords, the number of policy characteristic values is reduced, so the time needed to match the random number against the policy characteristic values is greatly shortened; correspondingly, the time needed by the information push model to push information is greatly shortened. The pushing efficiency of policy information can therefore be improved, the waiting time of the user reduced, and the user experience improved.
Referring to S110, training samples may be constructed, and each training sample may include one or more random numbers. The training samples may be used to train the information push model, and the information push model may be used to push the policy information corresponding to a keyword characteristic value according to the keyword characteristic value of the keyword input by the user; the random numbers in the training samples may therefore be input into the information push model as keyword characteristic values to train it. The random numbers may be generated based on a random function.
Illustratively, the random numbers may be generated by random functions such as np.random.randn(), np.random.rand(), np.random.randint() or np.random.standard_normal(). For np.random.randn(), when no argument is given, a single floating-point number is returned; when one argument is given, an array of rank 1 is returned, which cannot represent a vector or a matrix; and when two or more arguments are given, an array of the corresponding dimensionality is returned, which can represent a vector or a matrix. np.random.rand() is used in the same way as np.random.randn() and returns one or a group of random samples drawn from the uniform distribution over "0-1"; the value range of the samples is [0, 1), i.e. 1 is not included. For np.random.randint(), the arguments can be a minimum value (low), a maximum value (high), an array size and a data type, the default data type being the integer type; the returned values lie in [low, high), including low and excluding high, and when high is not given, the range of the generated random numbers defaults to [0, low). Further, the random numbers may be generated by np.random.standard_normal(), which is similar to np.random.randn() except that np.random.standard_normal() takes the shape as a tuple, whereas np.random.randn() usually takes separate integer arguments; a floating-point argument is automatically and directly truncated to an integer.
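As a minimal, non-limiting sketch of how such training samples could be generated with the NumPy functions just described (the shapes and counts below are illustrative assumptions, not values required by the method):

```python
import numpy as np

# randn: no argument -> a single float; integer arguments -> an array of that shape.
single = np.random.randn()            # one sample from the standard normal distribution
vector = np.random.randn(5)           # rank-1 array of 5 samples
matrix = np.random.randn(3, 4)        # 3x4 array, usable as a matrix

# rand: same calling convention as randn, samples uniform on [0, 1).
uniform = np.random.rand(5)

# randint: integers in [low, high); with a single bound the range is [0, low).
ints = np.random.randint(1, 100, size=10)

# standard_normal: like randn, but the shape is passed as a tuple.
matrix2 = np.random.standard_normal((3, 4))

# Illustrative batch of training samples, each holding one random number.
training_samples = [np.random.rand() for _ in range(1000)]
```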
In addition, a plurality of preset policy information can be acquired, so that the characteristic values of the keywords in the policy information can be stored in the information push model to be trained.
In some embodiments, training samples may also be constructed by acquiring historical behavior data of users. Specifically, the system can be connected extensively to government department systems for transportation, sanitation, human resources, public security, civil affairs, housing and construction, finance and tax, education, medical treatment and so on, and can cooperate with existing big-data-platform databases such as the population library, the legal-person library, the electronic certificate library, the electronic seal library and the public credit library, so that user data can conveniently be retrieved in real time when a user's access behavior is recognized. Based on full user-data information such as the big-data-platform data sources, data flows and data applications, the data generated while users use the government affair service platform is divided into three categories, namely user basic data, government affair business data and user behavior data, which gives direction to the subsequent data integration, data analysis and data application.
The user basic data can cover the full basic information of the user. The personal user basic data may include personal basic information (sex, age, identification number, native place, cell phone number, e-mail box, work unit), five insurances and one fund (social security, medical insurance, public accumulation fund, etc.), education experience (graduating college, professional direction, academic degree), tax credit (tax payment information, credit investigation report), medical health (physical examination report, outpatient medical record, medical diagnosis), house property (loan record, repayment record, house property mortgage), employment and entrepreneurship (personal resume, working years, qualifications), travel and outings (travel record, means of transportation, dining and accommodation), insurance and investment (commercial insurance, financial services), etc. The basic data of the legal-person user includes basic information of the legal person (legal representative information, industry category, social credit code, establishment time and business scope), patent credit (registered patents and enterprise credit investigation), land property (land area, service life and building area), qualification licenses (business license, monopoly license and quality certification) and the like.
The government affair service data can cover the data and related information generated when a user transacts business on any government affair service platform. The government affair service data of the individual user can include item handling (application serial number, declaration time, item handling progress and item handling state), item handling evaluation (individual item handling evaluation and item handling guide evaluation), message comments, interactive communication (item handling consultation and mailbox), online payment (life payment and traffic fines), logistics receiving and dispatching (receiving address and logistics history), collection and subscription (subscribed items, subscription time and item handling guides), material uploading, electronic seals, electronic certificates and certification materials, etc. The government affair service data of the legal-person user includes item handling (application serial number, declaration time, item handling progress and item handling state), item handling evaluation (legal-person item handling evaluation and item handling guide evaluation), message comments, tax payment and refund (tax declaration and tax payment history), industrial, commercial, water and electricity data (enterprise map, shareholder information and enterprise annual report), logistics and storage (receiving address, logistics history and storage information), policy consultation, subsidy projects, material uploading and patent declaration, etc.
The user behavior data may cover the various active behavior information generated when the user logs in to any government affair service platform to handle business, and specifically may include user activity information (start-up behavior, login behavior, access channel, access time, originating region, access domain name), click behavior information (element clicks, banner clicks, site click volume), browsing behavior information (page dwell time, bounce rate, returning visitors, new visitors, revisit count, days between revisits, page browsing, H5 browsing, collection and follows, browsing footprint, traffic-diversion count, average browsing time), retrieval behavior information (search terms, associated keywords, search count), user preference information (search habits), and the like. To facilitate the acquisition of user behavior data, user groups may be created: based on operational motives, the user range is defined and user groups are created from user labels, user behavior information and user business-handling information; in addition, user groups can be created by manual import. User group profile analysis may also be performed: data distribution statistics and data display of a user group in the label dimension are supported; data statistics and data display of a user group's contribution to an operation index are supported; and cross comparison among user groups is supported.
In some embodiments, good features can only be extracted from good data, so feature preprocessing and data cleaning are critical steps that can significantly improve the effect and performance of the algorithm. For example, normalization, discretization, factorization, missing-value handling and collinearity removal can be performed so that significant features are kept and insignificant features are discarded, which requires the machine learning engineer to repeatedly return to an understanding of the business. This has a decisive influence on many results, and calls for feature-validity analysis techniques such as correlation coefficients, chi-square tests, mean mutual information, conditional entropy, posterior probability and logistic regression weights. Therefore, after the training samples are constructed from the historical behavior data of users, they can be preprocessed: the data is rechecked and verified, repeated information is deleted, invalid information is removed and erroneous information is corrected. Through such data filtering and correction, the consistency, accuracy, authenticity and usability of the data are improved, the data quality is raised and data processing is accelerated.
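A minimal pandas sketch of this preprocessing, assuming the historical behavior data has already been loaded into a DataFrame; the column names used here are hypothetical:

```python
import pandas as pd

def clean_behavior_data(df: pd.DataFrame) -> pd.DataFrame:
    """Recheck and filter user behavior data: deduplicate, drop invalid rows,
    fill missing values and discard obviously erroneous records."""
    df = df.drop_duplicates()                          # delete repeated information
    df = df.dropna(subset=["user_id", "search_term"])  # remove records missing key fields (hypothetical columns)
    df["click_count"] = df["click_count"].fillna(0)    # fill a missing numeric value with a default
    df = df[df["dwell_time_s"] >= 0]                   # drop records with an impossible dwell time
    return df
```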
In some embodiments, in order to obtain the policy information more accurately and comprehensively, the obtaining of the preset policy information may specifically include:
the preset policy information is acquired from a government system through an Application Program Interface (API) gateway.
Here, the policy information may be directly acquired from the government system through the API gateway, and the government system may be a system issuing the policy information.
Therefore, the policy information is directly acquired from the government affair system, and the acquired policy information can be more accurate and comprehensive.
Of course, policy information may also be obtained from the network by a crawler, which is not limited herein.
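For the API gateway route, purely as an assumption-laden sketch (the endpoint, authentication scheme and response format below are hypothetical; the text above only states that policy information is obtained through an API gateway), the retrieval could look like this:

```python
import requests  # third-party HTTP client, assumed available

def fetch_policy_information(gateway_url: str, api_key: str) -> list:
    """Fetch preset policy information records from a government system via an API gateway."""
    response = requests.get(
        f"{gateway_url}/policies",                       # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},  # hypothetical auth scheme
        timeout=10,
    )
    response.raise_for_status()
    return response.json()                               # assumed to be a list of policy records
```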
Referring to S120, dimension reduction may be performed through a conversion function on the keyword characteristic values corresponding to the keywords in each piece of policy information to obtain the policy characteristic value corresponding to that piece of policy information. This reduces the number of random variables, i.e. the number of policy characteristic values, to a set of uncorrelated principal variables, so that the characteristic values can work better in a machine learning algorithm. Specifically, it may be preset that the keyword characteristic values of a certain number of keywords are reduced to one policy characteristic value. For example, if the keyword characteristic values of every 3 keywords are reduced to 1 policy characteristic value, then policy information A containing 6 keywords yields two policy characteristic values after dimension reduction, and policy information B containing 5 keywords also yields two policy characteristic values after dimension reduction. In addition, to reduce computational complexity, the characteristic values can be converted into smaller values that are better suited to model calculation.
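One possible realization of the grouping described above is sketched below; the text does not fix the conversion function, so the mean is used purely for illustration:

```python
import numpy as np

def reduce_keyword_features(keyword_values, group_size=3):
    """Collapse every `group_size` keyword characteristic values of one policy
    into a single policy characteristic value (illustrative mean-based reduction)."""
    values = np.asarray(keyword_values, dtype=float)
    return np.array([
        values[i:i + group_size].mean()
        for i in range(0, len(values), group_size)
    ])

# Policy A with 6 keywords and policy B with 5 keywords both yield 2 policy values.
print(reduce_keyword_features([0.8, 0.3, 0.5, 0.9, 0.1, 0.4]))  # 2 values
print(reduce_keyword_features([0.8, 0.3, 0.5, 0.9, 0.1]))       # 2 values
```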
For example, as shown in fig. 2, the policy characteristic values corresponding to one piece of policy information may be fitted to a straight line. The data determines the upper bound of the machine learning result, and the algorithm can only approach this upper bound as closely as possible. The data must be representative, otherwise the model will overfit. Moreover, for classification problems the data skew must not be too severe: the amounts of data in different classes should not differ by orders of magnitude. The magnitude of the data should also be evaluated by estimating, from the number of samples and the number of features, how much memory the training process will consume and whether it can fit in memory; if it cannot, an improved algorithm or some dimension reduction techniques should be considered, and if the data volume is too large, distributed processing should be used.
Illustratively, the policy information obtained by connecting the various government systems through the API gateway may be assembled into a data set. In the data set, a row of data may be a sample and a column of data may be a feature. Since some data have target values and some do not, two types of data are formed: feature values with target values, and feature values only, without target values. The target value may be determined from the historical behavior data of users. The data may be labeled with feature values using the policy classifications of different departments, and the statistical information of natural persons or enterprises described above, together with the behavior data of their operations on the system, may be obtained and converted into digital features that machine learning can use to process the data.
In some embodiments, model training, i.e. the machine learning training process, is usually time-consuming; before it begins, it is necessary to determine what data is available, to abstract the problem, and to decide whether it is a classification, regression or clustering problem.
Here, since the policy information required by a natural person and that required by an enterprise are usually different, two information push models may be trained, one on the policy information corresponding to natural persons and one on the policy information corresponding to enterprises; alternatively, only one information push model may be trained, with different characteristic value intervals set during training for the policy information corresponding to natural persons and the policy information corresponding to enterprises, as shown in fig. 3, so as to distinguish the two kinds of policy information.
In some embodiments, to facilitate matching the random number with the policy feature value, after S120, the method may further include:
and storing the policy characteristic values into the information pushing model in the form of a one-dimensional array and a two-bit array respectively.
Here, the policy feature value corresponding to the policy information may be stored in the information push model in a form of a one-dimensional array according to the release time of the policy information; meanwhile, the policy characteristic value corresponding to the policy information may be stored in the information pushing model in a form of a two-dimensional array, specifically, the policy characteristic value stored in the form of a one-dimensional array and the policy characteristic value stored in the form of a two-dimensional array may be stored in a container defined by a three-dimensional array, for example: a container for a tabular data structure (DataFrame).
Therefore, storing the policy characteristic values in the form of a one-dimensional array facilitates the subsequent individual matching, while storing them in the form of a two-dimensional array facilitates the subsequent cross matching.
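A brief Pandas sketch of the two storage forms described above (the policy names and numeric values are illustrative assumptions):

```python
import pandas as pd

# One-dimensional storage: policy characteristic values ordered by release time,
# convenient for matching a single random number individually.
one_dim = pd.Series([0.41, 0.73, 0.52, 0.88], name="policy_characteristic_value")

# Two-dimensional storage: one row per policy, one column per reduced characteristic value,
# convenient for cross matching against several keyword characteristic values.
two_dim = pd.DataFrame(
    [[0.41, 0.73],
     [0.52, 0.88]],
    index=["policy_A", "policy_B"],
    columns=["feature_1", "feature_2"],
)
```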
Referring to S130, a plurality of training samples may be input into the information push model to train the information push model. For each training sample, the random number in the training sample can be input into the information pushing model, the information pushing model is used for matching the random number with the policy characteristic value corresponding to each policy information, and the prediction policy information corresponding to the random number is output. Before inputting the training samples into the information push model, the parameter weights of the model may be initialized randomly.
Here, the information push model may be built on the data structures and algorithms of Pandas. Pandas is a tool based on NumPy and created for data analysis tasks; it incorporates a large number of libraries and several standard data models, and provides the tools needed to operate on large data sets efficiently, as well as many functions and methods for processing data quickly and conveniently.
In some embodiments, in order to train the information pushing model to be able to push policy information corresponding to a keyword feature value, when the training sample includes a random number, the S130 may specifically include:
and inputting the random number in the training sample into an information pushing model, matching the random number with the policy characteristic value stored in a one-dimensional array form by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number.
Here, for each training sample including one random number, the training sample may be input into an information pushing model, and the information pushing model may be used to individually match the random number with policy feature values stored in a one-dimensional array form, so as to output predicted policy information corresponding to the one random number in the training sample.
Therefore, by training the information push model with training samples that each comprise one random number, the trained information push model can push policy information corresponding to a single keyword characteristic value.
In some embodiments, in order to enable the training information push model to push policy information corresponding to a plurality of keyword feature values, when the training sample includes a plurality of random numbers, the step S130 may specifically include:
and inputting the random number in the training sample into an information pushing model, matching the random number with the policy characteristic value stored in a two-dimensional array form by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number.
Here, for each training sample including a plurality of random numbers, the training sample may be input into an information push model, and the plurality of random numbers may be cross-matched with policy feature values stored in a two-dimensional array form by using the information push model, so as to output predicted policy information corresponding to the plurality of random numbers in the training sample.
In this way, by training the information push model with training samples that each comprise a plurality of random numbers, the trained information push model can push policy information corresponding to a plurality of keyword characteristic values.
Referring to S140, a loss function value of the information pushing model may be determined according to the random number and a policy feature value corresponding to the prediction policy information, and the model parameter of the information pushing model is adjusted when the loss function value does not satisfy a preset training stop condition until the loss function value satisfies the preset training stop condition, that is, the information pushing model converges, so as to obtain the trained information pushing model. The training stopping condition may be preset according to a user requirement, for example, the training stopping condition may be that a loss function of the information push model is smaller than a certain threshold, or that the number of iterations of the information push model for training reaches a certain threshold.
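The adjustment loop can be pictured with the schematic sketch below; the `match` and `adjust_parameters` calls are hypothetical placeholders for the model's matching routine and parameter update, and the squared-difference loss is only illustrative:

```python
import numpy as np

def train_push_model(model, training_samples, policy_values,
                     loss_threshold=1e-3, max_iterations=1000):
    """Iterate until the loss falls below a threshold or the iteration
    limit is reached, i.e. until one of the stop conditions above holds."""
    for _ in range(max_iterations):
        losses = []
        for sample in training_samples:
            predicted_value = model.match(sample, policy_values)  # hypothetical matching API
            losses.append((sample - predicted_value) ** 2)        # illustrative loss term
        mean_loss = float(np.mean(losses))
        if mean_loss < loss_threshold:                            # convergence reached
            break
        model.adjust_parameters(mean_loss)                        # hypothetical parameter update
    return model
```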
In some embodiments, after training of the information push model is completed, the trained model may be tested. For example, a learning-curve method may be used to check whether the trained model is over-fitted or under-fitted; if either problem exists, training needs to be performed again. The basic tuning idea for over-fitting is to increase the amount of data and reduce the complexity of the model, while the basic tuning idea for under-fitting is to increase the number and quality of feature values and increase the complexity of the model. Error analysis is then carried out by examining error samples to comprehensively analyze the causes of the errors. The diagnosed model needs to be optimized, and the newly optimized model needs to be diagnosed again; this is an iterative process of continuous approximation that requires repeated attempts to reach the optimal state. Model fusion testing can then be performed.
Here, a sample data set may be obtained in advance and divided into training samples and test samples, for example in a ratio of 8:2. The sample data set may consist of random numbers generated by a random function, or may be obtained by collecting user historical behavior data such as item click behavior and recommendation lists.
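One straightforward way to realize the 8:2 split described above, shown here on a sample set of random numbers (the set size is an arbitrary assumption):

```python
import numpy as np

samples = np.random.rand(1000)         # sample data set of random numbers
np.random.shuffle(samples)             # shuffle before splitting
split_index = int(len(samples) * 0.8)
train_samples = samples[:split_index]  # 80% used for training
test_samples = samples[split_index:]   # 20% held out for testing
```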
In some examples, after a test sample is obtained by collecting user historical behavior data such as item click behavior and a recommendation list, product iteration optimization can be guided by analyzing item recommendation effects in a multi-dimensional manner, and analysis indexes can include a recommended click amount, a recommended exposure amount, a recommended click rate, a per-person click frequency, recommended user role analysis, popular item indexes and the like of policy information.
In addition, after the information push model goes online, its accuracy and error can be adjusted according to actual usage, and its running speed (time complexity), resource consumption (space complexity) and stability can also be tuned, so as to obtain a stable information push model.
Therefore, the problem of over-fitting or under-fitting of the information push model can be avoided by testing the information push model.
The information pushing method provided by the embodiment of the present application is described in detail below with reference to fig. 4.
Fig. 4 shows a flowchart of an information pushing method according to an embodiment of the present application, and it should be noted that the information pushing method may be applied to an information pushing apparatus, and as shown in fig. 4, the information pushing method may include the following steps:
s410, receiving keywords input by a user;
s420, inputting the first keyword characteristic value corresponding to the keyword input by the user into the information pushing model, and pushing the target policy information corresponding to the first keyword characteristic value to the user by using the information pushing model.
Therefore, by inputting the first keyword characteristic value corresponding to the keyword input by the user into the information push model, the model can be used to push the target policy information corresponding to the first keyword characteristic value to the user. The information push model may be obtained by training with the information push model training method described above; since its pushing efficiency is high, the pushing efficiency of policy information can be improved, the waiting time of the user reduced, and the user experience improved.
Referring to S410, the user may search for policy information by inputting keywords, and the information push apparatus may receive the keywords input by the user. The user may input one or more keywords.
Referring to S420, after the keyword input by the user is received, the first keyword characteristic value corresponding to the keyword is determined according to the correspondence between keywords and characteristic values; the first keyword characteristic value is input into the information push model, and the model is used to push the target policy information corresponding to the first keyword characteristic value to the user. The information push model may be a model obtained by training with the information push model training method described above. The number of pieces of target policy information may be zero, one or more.
In some embodiments, the user may be a natural-person user or an enterprise user. The system or account used by a natural-person user and that used by an enterprise user during search may differ, and the information push models used by different accounts or systems may be trained on different training samples. Thus, when a natural-person user logs in to the system or account corresponding to natural persons to search, the policy information corresponding to natural persons is pushed, and when an enterprise user logs in to the system or account corresponding to enterprises to search, the policy information corresponding to enterprises is pushed. In this way the required policy information can be pushed to the user more purposefully.
In some embodiments, in order to push policy information to the user more accurately, the above S420 may specifically include:
inputting a first keyword characteristic value corresponding to a keyword input by a user into an information pushing model, matching the first keyword characteristic value with a policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain target policy information corresponding to the first keyword characteristic value.
Here, when the user inputs keyword search policy information, the first keyword feature value corresponding to the keyword input by the user may be input to the information push model, and the first keyword feature value may be matched with the policy feature value corresponding to each piece of policy information by using the information push model, so as to output target policy information corresponding to the first keyword feature value. Specifically, the process of matching the first keyword feature value with the policy feature value corresponding to each policy information is the same as the process of matching the random number in the training sample with the policy feature value corresponding to each policy information, and is not repeated herein for brevity.
Thus, through the process, the target policy information is determined more accurately, and the policy information can be pushed to the user more accurately.
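A sketch of this single-keyword matching step is given below; the tolerance-based rule is an assumption chosen only to make the example concrete, and the policy table reuses the hypothetical two-dimensional store sketched earlier:

```python
import numpy as np
import pandas as pd

def push_policy(first_keyword_value: float,
                policy_table: pd.DataFrame,
                tolerance: float = 0.05) -> list:
    """Return the policies whose characteristic values lie within a tolerance
    of the keyword characteristic value (illustrative matching rule)."""
    matches = []
    for policy_name, row in policy_table.iterrows():
        if np.any(np.abs(row.values - first_keyword_value) <= tolerance):
            matches.append(policy_name)
    return matches

policy_table = pd.DataFrame(
    [[0.41, 0.73], [0.52, 0.88]],
    index=["policy_A", "policy_B"],
    columns=["feature_1", "feature_2"],
)
print(push_policy(0.50, policy_table))  # -> ['policy_B']
```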
In some embodiments, if the user inputs a plurality of keywords for searching, several keywords may correspond to the same policy information. In order to make the policy information pushed to the user more concise, when the keywords input by the user include at least a first keyword and a second keyword, the step of inputting the first keyword characteristic value corresponding to the keyword input by the user into the information push model, matching the first keyword characteristic value with the policy characteristic value corresponding to each piece of policy information by using the model, and outputting the target policy information corresponding to the first keyword characteristic value may specifically include:
inputting a second keyword characteristic value corresponding to the first keyword and a third keyword characteristic value corresponding to the second keyword into an information pushing model, matching the second keyword characteristic value with a policy characteristic value corresponding to each policy information by using the information pushing model, and matching the third keyword characteristic value with the policy characteristic value corresponding to each policy information to obtain a first policy characteristic value successfully matched with the second keyword characteristic value and a second policy characteristic value successfully matched with the third keyword characteristic value;
and under the condition that the first policy characteristic value and the second policy characteristic value comprise the same third policy characteristic value, removing the third policy characteristic value included in the first policy characteristic value or the third policy characteristic value included in the second policy characteristic value to obtain a residual policy characteristic value.
And outputting target policy information corresponding to the remaining policy feature values.
Here, the user inputs a plurality of keywords including a first keyword and a second keyword; the first keyword corresponds to a second keyword characteristic value, and the second keyword corresponds to a third keyword characteristic value. The second keyword characteristic value and the third keyword characteristic value are input into the information push model, and the model can be used to match each of them against the policy characteristic value corresponding to each piece of policy information, so as to obtain the first policy characteristic values successfully matched with the second keyword characteristic value and the second policy characteristic values successfully matched with the third keyword characteristic value. There may be one or more first policy characteristic values and one or more second policy characteristic values.
Since the same policy characteristic value may appear among both the first policy characteristic values and the second policy characteristic values, directly pushing the policy information corresponding to both to the user could push several identical pieces of policy information. To avoid this, when the first policy characteristic values and the second policy characteristic values include the same third policy characteristic value, the third policy characteristic value included in the first policy characteristic values or the third policy characteristic value included in the second policy characteristic values is removed to obtain the remaining policy characteristic values. The remaining policy characteristic values may include the first policy characteristic values and the second policy characteristic values with the third policy characteristic value removed, or the second policy characteristic values and the first policy characteristic values with the third policy characteristic value removed.
In this way, the target policy information corresponding to the remaining policy feature value is output, so that each piece of policy information in the target policy information is unique, and the situation of duplication is avoided.
Therefore, the pushed policy information can be more concise, the user can conveniently and quickly check the policy information, and the waste of time caused by checking the repeated policy information by the user is avoided.
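A small sketch of the removal step, using plain Python lists of illustrative policy characteristic values:

```python
def merge_matched_policy_values(first_policy_values, second_policy_values):
    """Combine the values matched by two keywords, keeping a value that both
    keywords matched (the shared 'third' value) only once."""
    remaining = list(first_policy_values)
    for value in second_policy_values:
        if value not in remaining:  # skip the duplicate third policy characteristic value
            remaining.append(value)
    return remaining

# 0.52 is matched by both keywords but appears only once in the remaining values.
print(merge_matched_policy_values([0.41, 0.52], [0.52, 0.88]))  # -> [0.41, 0.52, 0.88]
```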
In some embodiments, in order to make the pushed policy information more accurate, before outputting the target policy information corresponding to the remaining policy feature value, the method may further include:
target policy information corresponding to the remaining policy feature values is acquired from the government system through the API gateway.
Here, after determining the remaining policy feature values, target policy information corresponding to the remaining policy feature values may be directly called from the government system through the API gateway.
In this way, since the government affair system is a system that issues policy information, calling target policy information directly with the government affair system can make the pushed policy information more accurate.
In order to convey the policy information of the various supported items more effectively, improve the efficiency of policy redemption, and evaluate policy effects comprehensively and effectively, the government affair service data management departments carry out standardized management of policy redemption items. The aim is to handle policy redemption items through a single door, with one-time submission, internal circulation, integrated service and time-limited handling, while tracking and supervising the whole approval flow of applications among the business administration departments to ensure handling within the time limit. In this way, a good policy redemption service is provided for investors and enterprises; meanwhile, the policy redemption items are summarized, counted and analyzed to provide analysis, decisions and improvement measures for optimizing the business environment. The service objects include specific natural persons and legal persons who meet the policy redemption conditions, and the government department maintenance personnel responsible for policies and subsidy matters.
At present, the existing policy redemption service systems make insufficient use of artificial intelligence: platforms acquire, analyze and push policy data through rote machine-learning methods, the degree of intelligence is low, the limitations of manual work are obvious, and no reasonable algorithm is in place, which lowers the level of intelligence achieved. Policy redemption service systems are built in a dispersed way, with low concentration and serious fragmentation; there is no unified algorithm or system for acquiring, counting and processing policy information, their reach is narrow, and the precision of policy pushing and the quality of precise service still need to be improved. Each department manages its own silo, barriers between departments are numerous and information cannot be shared, which greatly reduces how well enterprises and citizens feel they benefit from policy services.
Based on this, the embodiment of the present application further provides a flow node schematic diagram of a policy redemption system, and the information push method in the above embodiment may be applied to a working scenario of the policy redemption system. The workflow of the policy redemption system provided by the embodiments of the present application is described in detail below with reference to fig. 5.
Fig. 5 is a flow node diagram of a policy redemption system according to an embodiment of the present application.
As shown in FIG. 5, the flow nodes of the policy redemption system may include: applicant node 510, acceptance node 520, and approval node 530.
The applicant can fill application information in the applicant node 510 to apply for online, then the applicant node 510 and the acceptance node 520 can exchange data and send the application information to the acceptance node 520, if the acceptance result does not pass, the acceptance result can be fed back to the applicant node 510, if the acceptance result passes, the acceptance result can be entered into the approval node 530, and then result data generated after the approval is finished can be fed back to the applicant node 510.
In each process node of the policy redemption system, the applicant can search for policy information even without knowing it in advance, and the policy redemption system can push the corresponding policy information to the user based on the information push model, which provides convenience for the applicant in handling matters.
The embodiment of the present application further provides a schematic structural diagram of a data resource plan, and the structure of the data resource plan in the policy redemption system provided in the above embodiment may be as shown in fig. 6, which is described in detail below.
Fig. 6 shows a schematic structural diagram of a data resource planning according to an embodiment of the present application.
As shown in fig. 6, the structure of the data resource plan may include: technical specification security module 610, data resource layer 620, and data management and maintenance module 630. The data resource layer 620 may include: data services 621, database 622, and data source 623.
The data service 621 may include an information resource service module, an information resource directory module, an information resource integration module, and an information resource exchange module;
databases 622 may include business databases, historical databases, data repositories, and metadata repositories;
the data source 623 may include a data acquisition module, a data declaration module, and a data exchange module.
Therefore, the data resources of the policy redemption integrated service platform can be planned as a whole across the three layers of collection, storage and utilization, and corresponding technical specification, data guarantee and data security systems, as well as a data management and technical maintenance system, can be established.
Based on the same inventive concept, the embodiment of the application also provides an information push model training device. The information push model training apparatus provided in the embodiment of the present application is described in detail below with reference to fig. 7.
Fig. 7 shows a schematic structural diagram of an information push model training apparatus according to an embodiment of the present application.
As shown in fig. 7, the information push model training apparatus may include:
an obtaining module 701, configured to obtain a plurality of preset policy information and generate a plurality of random numbers as a plurality of training samples based on a random function, where each training sample includes one or more random numbers;
a dimension reduction module 702, configured to perform dimension reduction processing on the keyword feature values corresponding to the multiple keywords in each policy information to obtain a policy feature value corresponding to each policy information;
the matching module 703 is configured to input a random number in the training sample into the information pushing model, match the random number with a policy feature value corresponding to each policy information by using the information pushing model, and output predicted policy information corresponding to the random number;
and the adjusting module 704 is configured to adjust a model parameter of the information pushing model according to the random number and the policy feature value corresponding to the predicted policy information until the information pushing model converges, so as to obtain the trained information pushing model.
Therefore, a plurality of pieces of preset policy information can be obtained, and dimension reduction processing can be performed on the keyword feature values corresponding to the plurality of keywords in each piece of policy information to obtain the policy feature value corresponding to each piece of policy information. The random number in the training sample is then matched with the policy feature value corresponding to each piece of policy information to obtain the predicted policy information corresponding to the random number. Because the policy feature value of each piece of policy information is obtained by performing dimension reduction on the keyword feature values of its multiple keywords, the number of policy feature values to be matched is reduced, so the time for matching the random number with the policy feature values can be greatly shortened. Correspondingly, the time for pushing information by using the information push model can be greatly shortened, the pushing efficiency of the policy information can be improved, the waiting time of the user can be reduced, and the user experience can be improved.
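As an illustration only, the following minimal sketch walks through the flow implemented by modules 701 to 704, assuming keyword feature values are plain numbers, dimension reduction is a simple mean over each policy's keywords, and matching is nearest-value; the k-means-style parameter update is an illustrative stand-in for the adjustment rule, not the exact model of the embodiment.

```python
# A minimal end-to-end sketch of the training flow of modules 701-704.
import random
import numpy as np

keyword_features = {                      # hypothetical keyword feature values per policy
    "policy_a": np.array([0.12, 0.18, 0.15]),
    "policy_b": np.array([0.72, 0.80, 0.76]),
}

# Dimension reduction module 702: many keyword feature values -> one policy feature value.
policy_features = {name: float(vals.mean()) for name, vals in keyword_features.items()}

def push_model(x: float) -> str:
    # Matching module 703: match the input value against each policy feature value.
    return min(policy_features, key=lambda name: abs(policy_features[name] - x))

# Obtaining module 701: random numbers generated by a random function as training samples.
for _ in range(200):
    sample = random.random()
    predicted = push_model(sample)
    # Adjusting module 704: nudge the matched policy feature value toward the sample,
    # a k-means-style stand-in for adjusting model parameters until convergence.
    policy_features[predicted] += 0.05 * (sample - policy_features[predicted])

print(push_model(0.2), push_model(0.9))
```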
In some embodiments, in order to obtain policy information more accurately and comprehensively, the obtaining module 701 may specifically include:
and a first obtaining submodule, configured to obtain a plurality of pieces of preset policy information from the government affairs system through an application program interface (API) gateway.
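As an illustration only, such a fetch through an API gateway might look like the following sketch; the endpoint path, authentication scheme, and response shape are hypothetical and not defined by the embodiment.

```python
# A minimal sketch of fetching preset policy information through an API gateway.
# The URL, token, resource path and response shape are hypothetical assumptions.
import requests

def fetch_policy_information(gateway_url: str, token: str) -> list[dict]:
    resp = requests.get(
        f"{gateway_url}/policies",                      # hypothetical resource path
        headers={"Authorization": f"Bearer {token}"},   # hypothetical auth scheme
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()                                  # expected: a list of policy records

# policies = fetch_policy_information("https://gateway.example.gov", "demo-token")
```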
In some embodiments, to facilitate matching the random number to the policy feature value, the apparatus may further include:
and a storage module, configured to, after the dimension reduction processing is performed on the keyword feature values corresponding to the plurality of keywords in each piece of policy information to obtain the policy feature value corresponding to each piece of policy information, store the policy feature values into the information push model in the form of a one-dimensional array and a two-dimensional array respectively.
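As an illustration only, the following sketch stores the policy feature values both ways, assuming each policy feature value is a single number and that the two-dimensional array simply pairs each value with a policy index; the exact layout is an assumption.

```python
# A minimal sketch of the storage module: the 1-D array serves single-keyword
# matching, the 2-D array pairs each value with a policy index for the
# multi-keyword case. The layout is an illustrative assumption.
import numpy as np

policy_features = {"policy_a": 0.15, "policy_b": 0.76, "policy_c": 0.42}

feature_1d = np.array(list(policy_features.values()))               # shape (n_policies,)
feature_2d = np.array(
    [[idx, value] for idx, value in enumerate(policy_features.values())]
)                                                                     # shape (n_policies, 2)

print(feature_1d.shape, feature_2d.shape)
```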
In some embodiments, in order to train the information push model to push policy information corresponding to a single keyword feature value, under the condition that the training sample includes one random number, the matching module 703 may specifically include:
and the first matching submodule is used for inputting the random number in the training sample into the information pushing model, matching the random number with the policy characteristic value stored in the form of a one-dimensional array by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number.
In some embodiments, in order to train the information pushing model to be able to push policy information corresponding to a plurality of keyword feature values, when the training sample includes a plurality of random numbers, the matching module 703 may specifically include:
and the second matching submodule is used for inputting the random number in the training sample into the information pushing model, matching the random number with the policy characteristic value stored in the form of a two-dimensional array by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number.
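As an illustration only, the following sketch covers both matching submodules, reusing the array layout assumed in the storage sketch above: a single random number is matched against the one-dimensional array, and a batch of random numbers against the two-dimensional array; nearest-value matching is an illustrative choice.

```python
# A minimal sketch of the first and second matching submodules.
import numpy as np

policy_names = ["policy_a", "policy_b", "policy_c"]
feature_1d = np.array([0.15, 0.76, 0.42])
feature_2d = np.array([[0, 0.15], [1, 0.76], [2, 0.42]])

def match_single(x: float) -> str:
    # First matching submodule: one random number vs. the 1-D array.
    return policy_names[int(np.argmin(np.abs(feature_1d - x)))]

def match_batch(xs: np.ndarray) -> list[str]:
    # Second matching submodule: several random numbers vs. the 2-D array.
    idx = np.argmin(np.abs(feature_2d[:, 1][None, :] - xs[:, None]), axis=1)
    return [policy_names[int(feature_2d[i, 0])] for i in idx]

print(match_single(0.2))                       # policy_a
print(match_batch(np.array([0.5, 0.8])))       # ['policy_c', 'policy_b']
```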
Based on the same inventive concept, the embodiment of the application also provides an information pushing device. The information pushing apparatus provided in the embodiment of the present application is described in detail below with reference to fig. 8.
Fig. 8 shows a schematic structural diagram of an information pushing apparatus according to an embodiment of the present application.
As shown in fig. 8, the information pushing apparatus may include:
a receiving module 801, configured to receive a keyword input by a user;
the pushing module 802 is configured to input a first keyword feature value corresponding to a keyword input by a user into an information pushing model, and push target policy information corresponding to the first keyword feature value to the user by using the information pushing model, where the information pushing model is obtained by training through the information pushing model training method.
Therefore, the first keyword feature value corresponding to the keyword input by the user is input into the information push model, and the information push model can be used to push the target policy information corresponding to the first keyword feature value to the user. Because the information push model can be obtained by training with the information push model training method described above, it pushes information efficiently, so the pushing efficiency of the policy information can be improved, the waiting time of the user can be reduced, and the user experience can be improved.
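As an illustration only, inference with the receiving module 801 and the pushing module 802 might look like the following sketch; the hash-based keyword encoder and the feature table are illustrative assumptions, not part of the embodiment.

```python
# A minimal inference sketch: receive a keyword, compute its feature value,
# match it against the policy feature values, and push the target policy.
import numpy as np

policy_names = ["policy_a", "policy_b", "policy_c"]
policy_features = np.array([0.15, 0.76, 0.42])

def keyword_feature(keyword: str) -> float:
    # Hypothetical encoder mapping a keyword to a value in [0, 1).
    return (hash(keyword) % 1000) / 1000.0

def push(keyword: str) -> str:
    x = keyword_feature(keyword)                         # first keyword feature value
    best = int(np.argmin(np.abs(policy_features - x)))   # match against policy feature values
    return policy_names[best]                            # target policy information to push

print(push("housing subsidy"))
```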
In some embodiments, in order to more accurately push policy information to a user, the pushing module 802 may specifically include:
and the third matching sub-module is used for inputting the first keyword characteristic value corresponding to the keyword input by the user into the information pushing model, matching the first keyword characteristic value with the policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain the target policy information corresponding to the first keyword characteristic value.
In some embodiments, if a user inputs a plurality of keywords for searching, several of the keywords may correspond to the same policy information. In order to keep the policy information pushed to the user concise, in the case that the keywords input by the user include at least a first keyword and a second keyword, the third matching sub-module may specifically include the following units (a sketch of this deduplication follows the list):
the matching unit is used for inputting a second keyword characteristic value corresponding to the first keyword and a third keyword characteristic value corresponding to the second keyword into the information pushing model, matching the second keyword characteristic value with the policy characteristic value corresponding to each policy information by using the information pushing model, and matching the third keyword characteristic value with the policy characteristic value corresponding to each policy information to obtain a first policy characteristic value successfully matched with the second keyword characteristic value and a second policy characteristic value successfully matched with the third keyword characteristic value;
and a removing unit, configured to, under the condition that the first policy feature value and the second policy feature value include the same third policy feature value, remove the third policy feature value included in the first policy feature value or the third policy feature value included in the second policy feature value to obtain the remaining policy feature values;
and an output unit, configured to output the target policy information corresponding to the remaining policy feature values.
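As an illustration only, the deduplication described by these units can be sketched as follows: each keyword feature value is matched separately, the policy feature values shared by both result sets are removed from one of them, and each remaining policy is pushed once; the tolerance-based matcher is an assumption.

```python
# A minimal sketch of the multi-keyword deduplication handled by the
# matching, removing and output units; the tolerance-based matcher is
# an illustrative assumption.
def match(feature_value: float, policy_features: dict[str, float], tol: float = 0.1) -> set[str]:
    # Policies whose feature value lies within a hypothetical tolerance of the keyword value.
    return {name for name, v in policy_features.items() if abs(v - feature_value) <= tol}

policy_features = {"policy_a": 0.15, "policy_b": 0.76, "policy_c": 0.42}

first = match(0.20, policy_features)        # matches for the second keyword feature value
second = match(0.12, policy_features)       # matches for the third keyword feature value

shared = first & second                     # the same "third policy feature value"
remaining = first | (second - shared)       # removing unit: keep each matched policy once

print(sorted(remaining))                    # output unit pushes the corresponding policy info
```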
In some embodiments, in order to make the pushed policy information more accurate, the apparatus may further include:
and the second obtaining submodule is used for obtaining the target policy information corresponding to the residual policy characteristic value from the government system through the API gateway before outputting the target policy information corresponding to the residual policy characteristic value.
Fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 9 shows an exemplary hardware architecture of an electronic device 9 that can implement the information push model training method and the information push method according to the embodiments of the present application, and that can serve as the information push model training apparatus and the information pushing apparatus. The electronic device may be the electronic device referred to in the embodiments of the present application.
The electronic device 9 may comprise a processor 901 and a memory 902 in which computer program instructions are stored.
Specifically, the processor 901 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 902 may include mass storage for data or instructions. By way of example, and not limitation, the memory 902 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 902 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the electronic device, where appropriate. In a particular embodiment, the memory 902 is a non-volatile solid-state memory. In particular embodiments, the memory 902 may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory 902 includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to aspects of the present application.
The processor 901 reads and executes the computer program instructions stored in the memory 902 to implement any one of the information push model training methods and/or information push methods in the above embodiments.
In one example, the electronic device can also include a communication interface 903 and a bus 904. As shown in fig. 9, the processor 901, the memory 902, and the communication interface 903 are connected via the bus 904 and communicate with one another.
The communication interface 903 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment of the application.
The bus 904 comprises hardware, software, or both that couple the components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. The bus 904 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The electronic device may execute the information push model training method and the information push method in the embodiments of the present application, thereby implementing the information push model training method, the information push method, the information push model training apparatus, and the information pushing apparatus described in conjunction with fig. 1 to 8.
In addition, in combination with the information push model training method and the information push method in the foregoing embodiments, the embodiments of the present application may be implemented by providing a computer storage medium. The computer storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any one of the information push model training methods and/or the information push methods in the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above description covers only specific embodiments of the present application. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not described herein again. It should be understood that the scope of the present application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall fall within the scope of the present application.

Claims (14)

1. An information push model training method, the method comprising:
acquiring a plurality of preset policy information and generating a plurality of random numbers as a plurality of training samples based on a random function, wherein each training sample comprises one or more random numbers;
performing dimension reduction processing on keyword characteristic values corresponding to a plurality of keywords in each policy information to obtain a policy characteristic value corresponding to each policy information;
inputting the random number in the training sample into an information pushing model, matching the random number with a policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain prediction policy information corresponding to the random number;
and adjusting model parameters of an information pushing model according to the random number and a policy characteristic value corresponding to the prediction policy information until the information pushing model converges to obtain the trained information pushing model.
2. The method of claim 1, wherein obtaining a predetermined plurality of policy information comprises:
and acquiring a plurality of preset policy information from the government affairs system through an Application Program Interface (API) gateway.
3. The method according to claim 1, wherein after performing dimension reduction processing on the keyword feature values corresponding to the plurality of keywords in each piece of policy information to obtain the policy feature value corresponding to each piece of policy information, the method further comprises:
and storing the policy characteristic values into the information push model in the form of a one-dimensional array and a two-dimensional array respectively.
4. The method according to claim 3, wherein, when the training sample includes a random number, the inputting the random number into an information pushing model, matching the random number with a policy feature value corresponding to each piece of policy information by using the information pushing model, and outputting predicted policy information corresponding to the random number comprises:
and inputting the random number in the training sample into an information pushing model, matching the random number with a policy characteristic value stored in a one-dimensional array form by using the information pushing model, and outputting to obtain prediction policy information corresponding to the random number.
5. The method according to claim 3, wherein, when the training sample includes a plurality of random numbers, the inputting the random numbers into an information pushing model, matching the random numbers with policy feature values corresponding to the policy information using the information pushing model, and outputting predicted policy information corresponding to the random numbers comprises:
and inputting the random number in the training sample into an information pushing model, matching the random number with a policy characteristic value stored in a two-dimensional array form by using the information pushing model, and outputting to obtain prediction policy information corresponding to the random number.
6. An information pushing method, characterized in that the method comprises:
receiving a keyword input by a user;
inputting a first keyword characteristic value corresponding to the keyword input by the user into an information pushing model, and pushing target policy information corresponding to the first keyword characteristic value to the user by using the information pushing model, wherein the information pushing model is obtained by training according to the information pushing model training method of any one of claims 1 to 5.
7. The method according to claim 6, wherein the inputting a first keyword feature value corresponding to the keyword input by the user into an information pushing model, and pushing target policy information corresponding to the first keyword feature value to the user by using the information pushing model comprises:
inputting a first keyword characteristic value corresponding to the keyword input by the user into an information pushing model, matching the first keyword characteristic value with a policy characteristic value corresponding to each policy information by using the information pushing model, and outputting to obtain target policy information corresponding to the first keyword characteristic value.
8. The method according to claim 7, wherein, in a case that the keywords input by the user at least include a first keyword and a second keyword, the step of inputting a first keyword feature value corresponding to the keyword input by the user into an information pushing model, matching the first keyword feature value with a policy feature value corresponding to each policy information by using the information pushing model, and outputting target policy information corresponding to the first keyword feature value comprises:
inputting a second keyword characteristic value corresponding to the first keyword and a third keyword characteristic value corresponding to the second keyword into an information pushing model, matching the second keyword characteristic value with a policy characteristic value corresponding to each policy information by using the information pushing model, and matching the third keyword characteristic value with a policy characteristic value corresponding to each policy information to obtain a first policy characteristic value successfully matched with the second keyword characteristic value and a second policy characteristic value successfully matched with the third keyword characteristic value;
and under the condition that the first policy characteristic value and the second policy characteristic value comprise the same third policy characteristic value, removing the third policy characteristic value included in the first policy characteristic value or the third policy characteristic value included in the second policy characteristic value to obtain a residual policy characteristic value;
and outputting target policy information corresponding to the residual policy characteristic value.
9. The method of claim 8, wherein prior to the outputting target policy information corresponding to the remaining policy feature value, the method further comprises:
and acquiring target policy information corresponding to the residual policy characteristic value from a government system through an API gateway.
10. An information push model training apparatus, the apparatus comprising:
the acquisition module is used for acquiring a plurality of preset policy information and generating a plurality of random numbers as a plurality of training samples based on a random function, wherein each training sample comprises one or more random numbers;
the dimension reduction module is used for carrying out dimension reduction processing on the keyword characteristic values corresponding to the keywords in each policy information to obtain the policy characteristic value corresponding to each policy information;
the matching module is used for inputting the random number in the training sample into an information pushing model, matching the random number with the policy characteristic value corresponding to each piece of policy information by using the information pushing model, and outputting to obtain the prediction policy information corresponding to the random number;
and the adjusting module is used for adjusting model parameters of an information pushing model according to the random number and the policy characteristic value corresponding to the prediction policy information until the information pushing model converges to obtain the trained information pushing model.
11. An information pushing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving keywords input by a user;
a pushing module, configured to input a first keyword feature value corresponding to the keyword input by the user into an information pushing model, and push target policy information corresponding to the first keyword feature value to the user by using the information pushing model, where the information pushing model is obtained by training according to the information pushing model training method of any one of claims 1 to 5.
12. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the information push model training method of any one of claims 1 to 5 and/or the information push method of any one of claims 6 to 9.
13. A computer storage medium having computer program instructions stored thereon, which when executed by a processor implement the information push model training method of any one of claims 1 to 5 and/or the information push method of any one of claims 6 to 9.
14. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the information push model training method according to any one of claims 1 to 5 and/or the information push method according to any one of claims 6 to 9.
CN202210598052.8A 2022-05-30 2022-05-30 Information push model training method, information push method, device and equipment Pending CN114912538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210598052.8A CN114912538A (en) 2022-05-30 2022-05-30 Information push model training method, information push method, device and equipment

Publications (1)

Publication Number Publication Date
CN114912538A true CN114912538A (en) 2022-08-16

Family

ID=82769227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210598052.8A Pending CN114912538A (en) 2022-05-30 2022-05-30 Information push model training method, information push method, device and equipment

Country Status (1)

Country Link
CN (1) CN114912538A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116842272A (en) * 2023-08-29 2023-10-03 四川邕合科技有限公司 Policy information pushing method, device, equipment and storage medium
CN116842272B (en) * 2023-08-29 2023-11-03 四川邕合科技有限公司 Policy information pushing method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination